Automatic Grading of Programming Exams
Abstract
Demand for Computer Science and IT expertise is greater than ever, and the number of students enrolled in programming courses is rising. More students means a greater need for teachers and student assistants, and, for the final exam, more graders.
To meet this need, this thesis explored the possibility of an automated grading system that learns grading patterns from human graders by extracting features from students' exam submissions and using them to train a machine learning classifier.
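As a rough illustration of this idea, the sketch below extracts a few numeric features from a submission's source text and fits a classifier to human-assigned grades. It assumes a Python/scikit-learn setup, and the specific features, labels, and function names are illustrative rather than those used in the thesis.

    # Minimal sketch of the feature-extraction-plus-classification idea.
    # Assumes scikit-learn; features and grade labels are illustrative only.
    import re
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(source: str) -> list[int]:
        """Map one exam submission (source text) to a numeric feature vector."""
        return [
            len(source.splitlines()),                       # submission length
            len(re.findall(r"\bfor\b|\bwhile\b", source)),  # loop count
            len(re.findall(r"\bif\b", source)),             # branch count
            source.count("class"),                          # class declarations
        ]

    # Hypothetical training data: submissions paired with human-assigned grades.
    submissions = [
        "class A { void run() { for(;;){} } }",
        "class B { int f(int x) { if (x > 0) return x; return 0; } }",
    ]
    grades = ["C", "B"]  # labels produced by human graders

    X = [extract_features(s) for s in submissions]
    clf = RandomForestClassifier(random_state=0).fit(X, grades)
    print(clf.predict([extract_features("class C {}")]))  # predicted grade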
A variety of source code evaluation strategies, classification algorithms, parameter ranges, and feature sets were assessed, and an experiment was conducted using the 2017 continuation exam for the course TDT4100 - Object-Oriented Programming as the dataset. Although the results were inconclusive, because of what we believe to be a sub-optimal dataset, the experiment lays a foundation for further research and development in source code analysis and quality assessment.