A Shallow Neural Network Architecture for Native Language Identification
Master's thesis
Permanent link: http://hdl.handle.net/11250/2573226
Publication date: 2018
Abstract
This thesis investigates Multilayer Perceptron (MLP) classifiers for Native Language Identification (NLI), both on their own and as part of an ensemble classifier. This approach has seen less use than more traditional statistical classifiers, so the thesis compares MLP classifiers against the classifier type most commonly used in NLI, the Support Vector Machine (SVM). The experiments show improved results for a two-layer MLP classifier over a linear SVM classifier, and the system achieves mid-range results compared to the 2017 NLI Shared Task, while using fairly small feature vectors.
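The comparison described above can be sketched in scikit-learn. This is an illustrative toy example, not the thesis code: the corpus, labels, n-gram range, and hidden-layer sizes are all placeholder assumptions standing in for the thesis's actual data and hyperparameters.

```python
# Hedged sketch: a two-layer MLP vs. a linear SVM on word n-gram features.
# All data and hyperparameters below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

# Tiny stand-in corpus; the thesis uses learner essays labelled by
# the writer's native language (L1).
texts = [
    "i am go to school yesterday",
    "yesterday i went to the school",
    "he have many friend in city",
    "she has many friends in the city",
] * 10
labels = ["L1_A", "L1_B", "L1_A", "L1_B"] * 10

# Fairly small feature vectors: word unigrams and bigrams.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

# Two hidden layers, mirroring the two-layer MLP the abstract describes.
mlp = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500, random_state=0)
mlp.fit(X, labels)

# Linear SVM baseline.
svm = LinearSVC(random_state=0)
svm.fit(X, labels)

print(mlp.score(X, labels), svm.score(X, labels))
```

On a real NLI corpus the interesting number is held-out accuracy on unseen essays (or unseen topics), not training accuracy as printed here.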
NLI systems have had trouble generalizing to unseen topics, so this thesis also examines whether a simpler preprocessing method can give better results on such unseen topics than the more traditional input preprocessing. Two methods are initially compared: experiments show that the simpler method gives better results for word n-gram feature vectors, while the opposite holds for character and part-of-speech feature vectors. This motivates a mixed preprocessing method that outperforms the traditional method for most prompts.
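A mixed setup like the one described, where each feature type gets its own preprocessing, can be sketched with a feature union. The abstract does not define the two preprocessing methods, so the concrete choices below (keeping case for word n-grams, lowercasing for character n-grams) are hypothetical placeholders, not the thesis's actual pipeline.

```python
# Hypothetical sketch of mixed per-feature-type preprocessing.
# The specific preprocessing choices are placeholders for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion

texts = ["I am go to school yesterday .", "He have many friend in city ."]

mixed = FeatureUnion([
    # Word n-grams with minimal preprocessing (original casing kept).
    ("word", CountVectorizer(ngram_range=(1, 2), lowercase=False)),
    # Character n-grams with a more traditional lowercased representation.
    ("char", CountVectorizer(analyzer="char", ngram_range=(2, 3))),
])

# Each document becomes one concatenated vector over both feature spaces.
X = mixed.fit_transform(texts)
print(X.shape)
```

The union concatenates the two vocabularies, so each essay is represented by one sparse vector covering both feature spaces, and a single classifier can be trained on top.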