
Research method in AI - Reproducibility of results

Kjensmo, Sigbjørn
Master thesis
View/Open
15995_FULLTEXT.pdf (1.101Mb)
15995_ATTACHMENT.zip (840.4Kb)
15995_COVER.pdf (1.556Mb)
URI
http://hdl.handle.net/11250/2478230
Date
2017
Collections
  • Institutt for datateknologi og informatikk [3776]
Abstract
Reproducibility of published computational research has seen increased interest over the last twenty years. Regardless of academic field and the impact factor of journals, studies of the reproducibility of computational research have found low rates of reproducibility. Common issues relate to the availability of source code and data, and arise even when the original authors attempt to reproduce their own published research.

In this thesis, we investigate the state of reproducibility in artificial intelligence research. The objective is not to reproduce experiments, but to investigate and quantify how reproducible AI research currently is. Two hypotheses were investigated: 1) documentation of AI research is not good enough to reproduce results, and 2) documentation practices have improved in recent years. To investigate the hypotheses, 400 research papers from two instalments of each of two top AI conference series, IJCAI and AAAI, were surveyed. The results of our survey support the first hypothesis, but not the second. While the use of public datasets is widespread, sharing of code is lagging behind. Facilitating the sharing of source code and data without disrupting the peer review process is necessary to improve the situation.
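The survey scores each paper against a set of documentation factors and aggregates the results per conference and year. As a purely illustrative sketch (not the thesis's actual survey instrument; the factor names and example entries below are hypothetical), such a tally could look like this in Python:

    # Hypothetical illustration of tallying documentation factors per surveyed paper.
    # None of these entries are real survey data from the thesis.
    from dataclasses import dataclass

    @dataclass
    class PaperSurvey:
        conference: str          # e.g. "IJCAI" or "AAAI"
        year: int
        shares_code: bool        # source code publicly available
        uses_public_dataset: bool

    def share_rates(papers):
        """Return the fraction of papers sharing code and using public datasets."""
        n = len(papers)
        code = sum(p.shares_code for p in papers) / n
        data = sum(p.uses_public_dataset for p in papers) / n
        return code, data

    # Toy usage with made-up entries (not actual survey results):
    sample = [
        PaperSurvey("IJCAI", 2016, shares_code=False, uses_public_dataset=True),
        PaperSurvey("AAAI", 2014, shares_code=True, uses_public_dataset=True),
    ]
    print(share_rates(sample))  # -> (0.5, 1.0)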

The contributions of the research in this thesis are: (i) a survey design for evaluating the documentation of published papers, (ii) an evaluation of two leading AI conference series, and (iii) suggested incentives to facilitate the reproducibility of AI research.
Publisher
NTNU
