Show simple item record

dc.contributor.author: Grøndahl, Aurora Rosvoll
dc.contributor.author: Knudtsen, Ingerid Skjei
dc.contributor.author: Huynh, Bao Ngoc
dc.contributor.author: Mulstad, Martine
dc.contributor.author: Moe, Yngve Mardal
dc.contributor.author: Knuth, Franziska
dc.contributor.author: Tomic, Oliver
dc.contributor.author: Indahl, Ulf Geir
dc.contributor.author: Torheim, Turid Katrine Gjerstad
dc.contributor.author: Dale, Einar
dc.contributor.author: Malinen, Eirik
dc.contributor.author: Futsæther, Cecilia Marie
dc.date.accessioned: 2022-12-20T08:19:09Z
dc.date.available: 2022-12-20T08:19:09Z
dc.date.created: 2021-07-01T09:37:59Z
dc.date.issued: 2021
dc.identifier.citation: Physics in Medicine and Biology. 2021, 66 (6). [en_US]
dc.identifier.issn: 0031-9155
dc.identifier.uri: https://hdl.handle.net/11250/3038716
dc.description.abstract: Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches the impact of single versus multimodality input on segmentation quality was also assessed. 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen–Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001). Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue. [en_US]
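For reference, the abstract ranks all segmentation models by the Sørensen–Dice similarity coefficient; a minimal sketch of how this overlap metric is commonly defined is given below. The symbols A and B are illustrative (not taken from the record) and denote the predicted GTV mask and the manually delineated ground-truth mask, respectively.

\[
\mathrm{Dice}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}
\]

Dice ranges from 0 (no overlap between prediction and ground truth) to 1 (perfect agreement), so the reported maximum mean scores (e.g. 0.74 for the PET/CT-based CNN) can be read directly as degrees of volumetric overlap.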
dc.language.iso: eng [en_US]
dc.publisher: IOP Publishing [en_US]
dc.title: A comparison of methods for fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.source.pagenumber: 19 [en_US]
dc.source.volume: 66 [en_US]
dc.source.journal: Physics in Medicine and Biology [en_US]
dc.source.issue: 6 [en_US]
dc.identifier.doi: 10.1088/1361-6560/abe553
dc.identifier.cristin: 1919744
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1


Associated file(s)


This item appears in the following collection(s)
