Show simple item record

dc.contributor.author: Smistad, Erik
dc.contributor.author: Østvik, Andreas
dc.contributor.author: Pedersen, Andrè
dc.date.accessioned: 2019-10-07T10:34:40Z
dc.date.available: 2019-10-07T10:34:40Z
dc.date.created: 2019-09-23T10:18:06Z
dc.date.issued: 2019
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/11250/2620609
dc.description.abstract: Deep convolutional neural networks have quickly become the standard for medical image analysis. Although there are many frameworks focusing on training neural networks, few focus on high-performance inference and visualization of medical images. Neural network inference requires an inference engine (IE), and several IEs are currently available, including Intel’s OpenVINO, NVIDIA’s TensorRT, and Google’s TensorFlow, which supports multiple backends, including NVIDIA’s cuDNN, AMD’s ROCm, and Intel’s MKL-DNN. These IEs only work on specific processors and have completely different application programming interfaces (APIs). In this paper, we present methods for extending FAST, an open-source, high-performance framework for medical imaging, to use any IE through a common programming interface, thereby making it easier for users to deploy and test their neural networks on different processors. This article provides an overview of current IEs and how they can be combined with existing software such as FAST. The methods are demonstrated and evaluated on three performance-demanding medical use cases: real-time ultrasound image segmentation, computed tomography (CT) volume segmentation, and patch-wise classification of whole-slide microscopy images. Runtime performance was measured on the three use cases with several different IEs and processors. This revealed that the choice of IE and processor can affect the performance of medical neural network image analysis considerably. In the most extreme case, processing 171 ultrasound frames took half a second with the fastest configuration and 24 seconds with the slowest. For volume processing, the difference between using the CPU and the GPU was 2 vs. 53 seconds, and for processing a whole-slide microscopy image, the difference was 81 seconds vs. almost 16 minutes. Source code, binary releases, and test data can be found online on GitHub at https://github.com/smistad/FAST/.
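To illustrate the common programming interface the abstract describes, below is a minimal C++ sketch of a real-time ultrasound segmentation pipeline where the IE is selected by name. It is not reproduced from the article: the include paths, class and method names (including setInferenceEngine), and file names are assumptions based on FAST's public examples on GitHub and may differ between versions.

#include <FAST/Streamers/ImageFileStreamer.hpp>
#include <FAST/Algorithms/NeuralNetwork/SegmentationNetwork.hpp>
#include <FAST/Visualization/SegmentationRenderer/SegmentationRenderer.hpp>
#include <FAST/Visualization/SimpleWindow.hpp>

using namespace fast;

int main() {
    // Stream a sequence of ultrasound frames from disk;
    // "#" is replaced by the frame number (hypothetical file names).
    auto streamer = ImageFileStreamer::New();
    streamer->setFilenameFormat("ultrasound_frame_#.mhd");

    // Load a trained segmentation model and select the inference engine
    // by name. Switching IE (and thus backend/processor) is a one-line
    // change; the rest of the pipeline stays the same.
    auto segmentation = SegmentationNetwork::New();
    segmentation->setInferenceEngine("OpenVINO"); // or "TensorRT", "TensorFlow"
    segmentation->load("segmentation_model.xml"); // hypothetical model file
    segmentation->setInputConnection(streamer->getOutputPort());

    // Render the segmentation result on screen in real time.
    auto renderer = SegmentationRenderer::New();
    renderer->addInputConnection(segmentation->getOutputPort());

    auto window = SimpleWindow::New();
    window->set2DMode();
    window->addRenderer(renderer);
    window->start();
    return 0;
}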
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.uri: https://dx.doi.org/10.1109/ACCESS.2019.2942441
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: High Performance Neural Network Inference, Streaming, and Visualization of Medical Images Using FAST
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: publishedVersion
dc.source.volume: 7
dc.source.journal: IEEE Access
dc.identifier.doi: 10.1109/ACCESS.2019.2942441
dc.identifier.cristin: 1727669
dc.relation.project: Norges forskningsråd: 270941
dc.description.localcode: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/
cristin.unitcode: 194,65,25,0
cristin.unitname: Institutt for sirkulasjon og bildediagnostikk
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1


Files in this item


This item appears in the following Collection(s)


Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
Except where otherwise noted, this item is licensed as Attribution 4.0 International