Show simple item record

dc.contributor.advisor	Gambäck, Björn
dc.contributor.author	Slotte, Hans Olav
dc.date.accessioned	2018-11-16T15:01:04Z
dc.date.available	2018-11-16T15:01:04Z
dc.date.created	2018-03-15
dc.date.issued	2018
dc.identifier	ntnudaim:18285
dc.identifier.uri	http://hdl.handle.net/11250/2573226
dc.description.abstract	This thesis looks at using Multilayer Perceptron (MLP) classifiers for Native Language Identification (NLI), both on their own and as part of an ensemble classifier. This approach has been used less often than more traditional statistical classifiers, and the thesis compares MLP classifiers to the classifier type traditionally used in NLI, Support Vector Machines (SVM). The experiments find improved results when using a two-layer MLP classifier compared to a linear SVM classifier, and achieve mid-range results compared to the 2017 Shared Task results while using fairly small feature vectors. Systems have had trouble generalizing to unseen topics, and the thesis therefore also examines whether a simpler preprocessing method can give better results for such unseen topics than the more traditional input preprocessing methods. Two different methods are initially compared, and experiments show that for a word n-gram feature vector the simpler method gives better results, while the opposite is true for character and part-of-speech feature vectors. This leads to a mixed preprocessing method that gives better results than the traditional method for most prompts.
dc.language	eng
dc.publisher	NTNU
dc.subject	Datateknologi, Kunstig intelligens
dc.title	A Shallow Neural Network Architecture for Native Language Identification
dc.type	Master thesis
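
The abstract above compares a two-layer MLP classifier with a linear SVM over n-gram feature vectors. The following is a minimal sketch of such a comparison, assuming scikit-learn; the corpus, feature-vector size, and hyperparameters are placeholders for illustration, not the thesis's actual configuration.

# Minimal sketch (not the thesis's exact system): a linear SVM and a
# two-hidden-layer MLP trained on word n-gram TF-IDF features for
# NLI-style text classification. Data and hyperparameters are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical essays and native-language labels
texts = ["first example essay ...", "second example essay ..."]
labels = ["DEU", "FRA"]

def make_features():
    # Fairly small word n-gram feature vector (size is an assumed placeholder)
    return TfidfVectorizer(analyzer="word", ngram_range=(1, 2), max_features=10000)

# Traditional baseline: linear SVM
svm_clf = Pipeline([("tfidf", make_features()), ("clf", LinearSVC())])

# Shallow neural network: MLP with two hidden layers (sizes are placeholders)
mlp_clf = Pipeline([
    ("tfidf", make_features()),
    ("clf", MLPClassifier(hidden_layer_sizes=(300, 300), max_iter=200)),
])

svm_clf.fit(texts, labels)
mlp_clf.fit(texts, labels)
print(svm_clf.predict(["an unseen essay ..."]))
print(mlp_clf.predict(["an unseen essay ..."]))

In practice the two pipelines would be evaluated on held-out prompts (topics) to probe the generalization issue the abstract mentions.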


Associated file(s)


This item appears in the following collection(s)
