dc.contributor.author: Coakley, Kevin
dc.contributor.author: Kirkpatrick, Christine
dc.contributor.author: Gundersen, Odd Erik
dc.date.accessioned: 2023-02-27T12:32:00Z
dc.date.available: 2023-02-27T12:32:00Z
dc.date.created: 2023-02-23T10:19:38Z
dc.date.issued: 2022
dc.identifier.isbn: 978-1-6654-6124-5
dc.identifier.uri: https://hdl.handle.net/11250/3054245
dc.description.abstract: Reproducing published deep learning papers to validate their conclusions can be difficult due to sources of irreproducibility. We investigate the impact that implementation factors have on the results and how they affect the reproducibility of deep learning studies. Three deep learning experiments were run five times each on 13 different hardware environments and four different software environments. The analysis of the 780 combined results showed a greater than 6% accuracy range on the same deterministic examples, introduced by hardware or software environment variations alone. To account for these implementation factors, researchers should run their experiments multiple times in different hardware and software environments to verify that their conclusions are not affected.
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: 2022 IEEE 18th International Conference on e-Science (e-Science)
dc.title: Examining the Effect of Implementation Factors on Deep Learning Reproducibility
dc.title.alternative: Examining the Effect of Implementation Factors on Deep Learning Reproducibility
dc.type: Chapter
dc.description.version: publishedVersion
dc.source.pagenumber: 397-398
dc.identifier.cristin: 2128479
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
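
The abstract describes a repeated-runs protocol: execute the same deterministic experiment several times in each hardware/software environment, then compare the accuracy spread across environments. Below is a minimal sketch of that protocol in Python/PyTorch; it is not the authors' code, run_experiment is a hypothetical placeholder for a real train/evaluate loop, and the seeding calls assume a current PyTorch release.

```python
# Sketch (assumed setup, not the paper's code): run the same experiment
# several times with pinned seeds and report the accuracy range.
import random
import statistics

import numpy as np
import torch


def set_deterministic(seed: int) -> None:
    # Pin every RNG the experiment touches and request deterministic
    # PyTorch kernels where they exist (warn instead of failing where
    # no deterministic implementation is available).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True, warn_only=True)


def run_experiment(seed: int) -> float:
    """Hypothetical stand-in for one training + evaluation run.

    Replace the body with a real train/eval loop; it should return a
    single test-set accuracy in [0, 1].
    """
    set_deterministic(seed)
    # ... train the model, evaluate on the held-out test set ...
    return float(torch.rand(1))  # placeholder accuracy


if __name__ == "__main__":
    # Five runs per environment, mirroring the paper's setup; repeat this
    # script on each hardware/software environment and pool the results
    # to see the cross-environment accuracy range.
    accs = [run_experiment(seed) for seed in range(5)]
    print(f"mean={statistics.mean(accs):.4f} "
          f"range={max(accs) - min(accs):.4f}")
```

Pooling the per-run accuracies from every environment and taking max minus min gives the cross-environment accuracy range the abstract reports (greater than 6% in the paper's 780 combined results).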

