dc.contributor.author  Stenwig, Eline
dc.contributor.author  Salvi, Giampiero
dc.contributor.author  Salvo Rossi, Pierluigi
dc.contributor.author  Skjaervold, Nils Kristian
dc.date.accessioned  2023-03-02T12:19:35Z
dc.date.available  2023-03-02T12:19:35Z
dc.date.created  2022-05-02T14:57:13Z
dc.date.issued  2022
dc.identifier.citation  BMC Medical Research Methodology. 2022, 22 (1).  en_US
dc.identifier.issn  1471-2288
dc.identifier.uri  https://hdl.handle.net/11250/3055363
dc.description.abstract  Background: Machine learning (ML) holds the promise of becoming an essential tool for utilising the increasing amount of clinical data available for analysis and clinical decision support. However, lack of trust in the models has limited the acceptance of this technology in healthcare. This mistrust is often attributed to a shortage of model explainability and interpretability: the relationship between model input and output is unclear. Improving trust requires the development of more transparent ML methods. Methods: In this paper, we use the publicly available eICU database to construct a number of ML models and then examine their internal behaviour with SHapley Additive exPlanations (SHAP) values. Our four models predicted hospital mortality in ICU patients using a selection of the same features used to calculate the APACHE IV score, and were based on random forest, logistic regression, naive Bayes, and adaptive boosting algorithms. Results: The models had similar discriminative abilities and mostly agreed on feature importance, while calibration and the impact of individual features differed considerably and in multiple cases did not correspond to common medical theory. Conclusions: ML models are known to treat data differently depending on the underlying algorithm. Our comparative analysis visualises the implications of these differences and their importance in a healthcare setting. SHAP value analysis is a promising method for incorporating explainability into model development and usage, and may yield better and more trustworthy ML models in the future. [See the illustrative sketch following this record.]  en_US
dc.language.iso  eng  en_US
dc.publisher  BioMed Central  en_US
dc.rights  Navngivelse 4.0 Internasjonal  *
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/deed.no  *
dc.title  Comparative analysis of explainable machine learning prediction models for hospital mortality  en_US
dc.title.alternative  Comparative analysis of explainable machine learning prediction models for hospital mortality  en_US
dc.type  Peer reviewed  en_US
dc.type  Journal article  en_US
dc.description.version  publishedVersion  en_US
dc.source.volume  22  en_US
dc.source.journal  BMC Medical Research Methodology  en_US
dc.source.issue  1  en_US
dc.identifier.doi  10.1186/s12874-022-01540-w
dc.identifier.cristin  2020756
dc.source.articlenumber  53  en_US
cristin.ispublished  true
cristin.fulltext  original
cristin.qualitycode  1
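
The abstract only summarises the workflow at a high level. The following is a minimal, hypothetical Python sketch of that kind of pipeline, not the authors' code: it uses synthetic scikit-learn data in place of the eICU/APACHE IV features, and SHAP's model-agnostic KernelExplainer for all four model families, whereas the paper may have used model-specific explainers.

```python
# Illustrative sketch only (assumptions: synthetic data, KernelExplainer);
# trains the four model types named in the abstract, then compares
# per-feature SHAP importance rankings across them.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the ICU feature table used in the paper.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0).fit(X_train, y_train),
    "logistic regression": LogisticRegression(max_iter=1000).fit(X_train, y_train),
    "naive Bayes": GaussianNB().fit(X_train, y_train),
    "adaptive boosting": AdaBoostClassifier(random_state=0).fit(X_train, y_train),
}

# A small background sample keeps the model-agnostic explainer tractable.
background = shap.sample(X_train, 100)

for name, model in models.items():
    explainer = shap.KernelExplainer(
        lambda d, m=model: m.predict_proba(d)[:, 1], background
    )
    shap_values = explainer.shap_values(X_test[:50])
    # Mean |SHAP| per feature gives a global importance ranking per model.
    importance = np.abs(shap_values).mean(axis=0)
    print(name, "top features:", np.argsort(importance)[::-1][:5])
```

Comparing the printed rankings across the four models mirrors the paper's comparative analysis: models can broadly agree on which features matter while attributing their effects quite differently.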

