Show simple item record

dc.contributor.author	Patsanis, Alexandros
dc.contributor.author	Sunoqrot, Mohammed R. S.
dc.contributor.author	Bathen, Tone Frost
dc.contributor.author	Elschot, Mattijs
dc.date.accessioned	2023-03-27T08:06:25Z
dc.date.available	2023-03-27T08:06:25Z
dc.date.created	2023-03-23T14:57:20Z
dc.date.issued	2023
dc.identifier.citation	Journal of Medical Imaging. 2023, 10 (2), 1-19.	en_US
dc.identifier.issn	2329-4302
dc.identifier.uri	https://hdl.handle.net/11250/3060486
dc.description.abstract	Purpose: To bypass manual data preprocessing and optimize deep learning performance, we developed and evaluated CROPro, a tool to standardize automated cropping of prostate magnetic resonance (MR) images. Approach: CROPro enables automatic cropping of MR images regardless of patient health status, image size, prostate volume, or pixel spacing. CROPro can crop foreground pixels from a region of interest (e.g., prostate) with different image sizes, pixel spacings, and sampling strategies. Performance was evaluated in the context of clinically significant prostate cancer (csPCa) classification. Transfer learning was used to train five convolutional neural network (CNN) and five vision transformer (ViT) models using different combinations of cropped image size (64 × 64, 128 × 128, and 256 × 256 pixels²), pixel spacing (0.2 × 0.2, 0.3 × 0.3, 0.4 × 0.4, and 0.5 × 0.5 mm²), and sampling strategy (center, random, and stride cropping) over the prostate. T2-weighted MR images (N = 1475) from the publicly available PI-CAI challenge were used to train (N = 1033), validate (N = 221), and test (N = 221) all models. Results: Among CNNs, SqueezeNet with stride cropping (image size: 128 × 128, pixel spacing: 0.2 × 0.2 mm²) achieved the best classification performance (0.678 ± 0.006). Among ViTs, ViT-H/14 with random cropping (image size: 64 × 64, pixel spacing: 0.5 × 0.5 mm²) achieved the best performance (0.756 ± 0.009). Model performance depended on the cropped area, with the optimal size generally larger for center cropping (∼40 cm²) than for random/stride cropping (∼10 cm²). Conclusion: We found that the csPCa classification performance of CNNs and ViTs depends on the cropping settings. We demonstrated that CROPro is well suited to optimize these settings in a standardized manner, which could improve the overall performance of deep learning models.	en_US
dc.language.iso	eng	en_US
dc.publisher	SPIE	en_US
dc.relation.uri	https://doi.org/10.1117/1.JMI.10.2.024004
dc.rights	Attribution 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by/4.0/deed.no
dc.title	CROPro: a tool for automated cropping of prostate magnetic resonance images	en_US
dc.title.alternative	CROPro: a tool for automated cropping of prostate magnetic resonance images	en_US
dc.type	Peer reviewed	en_US
dc.type	Journal article	en_US
dc.description.version	publishedVersion	en_US
dc.source.pagenumber	1-19	en_US
dc.source.volume	10	en_US
dc.source.journal	Journal of Medical Imaging	en_US
dc.source.issue	2	en_US
dc.identifier.doi	10.1117/1.JMI.10.2.024004
dc.identifier.cristin	2136483
dc.relation.project	Norges forskningsråd: 295013	en_US
dc.relation.project	Kreftforeningen: 215951	en_US
dc.relation.project	Samarbeidsorganet mellom Helse Midt-Norge og NTNU: 81770928	en_US
dc.relation.project	Samarbeidsorganet mellom Helse Midt-Norge og NTNU: 90265300	en_US
cristin.ispublished	true
cristin.fulltext	original
cristin.qualitycode	1
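
The abstract above describes cropping a region of interest (e.g., the prostate) at a configurable image size and pixel spacing using three sampling strategies: center, random, and stride cropping. The following is a minimal, hypothetical Python sketch of those ideas for a single 2D slice, using only NumPy. It illustrates the strategies named in the abstract and is not the actual CROPro implementation; the function names (resample, crop_at, crop_patches) and the nearest-neighbour resampling are assumptions made for this example.

# Illustrative sketch only (not the CROPro API): cropping a 2D MR slice
# around a prostate mask with center, random, or stride sampling.
import numpy as np

def resample(slice_2d, spacing_in, spacing_out):
    """Nearest-neighbour resampling from spacing_in to spacing_out (mm/pixel)."""
    h, w = slice_2d.shape
    new_h = max(1, int(round(h * spacing_in / spacing_out)))
    new_w = max(1, int(round(w * spacing_in / spacing_out)))
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return slice_2d[np.ix_(rows, cols)]

def crop_at(slice_2d, center_rc, size):
    """Crop a size x size patch centred at center_rc, zero-padded at the borders."""
    r, c = center_rc
    half = size // 2
    patch = np.zeros((size, size), dtype=slice_2d.dtype)
    r0, c0 = r - half, c - half
    rs, cs = max(r0, 0), max(c0, 0)
    re = min(r0 + size, slice_2d.shape[0])
    ce = min(c0 + size, slice_2d.shape[1])
    patch[rs - r0:re - r0, cs - c0:ce - c0] = slice_2d[rs:re, cs:ce]
    return patch

def crop_patches(slice_2d, mask, size=128, strategy="center", n_random=3, stride=64, rng=None):
    """Return patches sampled over the foreground (e.g., prostate) mask."""
    rows, cols = np.nonzero(mask)
    if strategy == "center":      # one patch at the mask centroid
        centers = [(int(rows.mean()), int(cols.mean()))]
    elif strategy == "random":    # n_random patches at random foreground pixels
        rng = rng or np.random.default_rng(0)
        idx = rng.integers(0, rows.size, n_random)
        centers = list(zip(rows[idx], cols[idx]))
    elif strategy == "stride":    # regular grid over the mask bounding box
        centers = [(r, c)
                   for r in range(rows.min(), rows.max() + 1, stride)
                   for c in range(cols.min(), cols.max() + 1, stride)]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [crop_at(slice_2d, rc, size) for rc in centers]

# Toy usage: a synthetic 384 x 384 slice at 0.5 mm/pixel, resampled to 0.4 mm/pixel,
# then stride-cropped into 128 x 128 patches over a fake prostate mask.
img = np.random.rand(384, 384).astype(np.float32)
mask = np.zeros_like(img, dtype=bool)
mask[150:250, 160:240] = True
img_04 = resample(img, spacing_in=0.5, spacing_out=0.4)
mask_04 = resample(mask.astype(np.uint8), 0.5, 0.4).astype(bool)
patches = crop_patches(img_04, mask_04, size=128, strategy="stride", stride=64)
print(len(patches), patches[0].shape)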


Files in this item


This item appears in the following Collection(s)

Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International