AI Technology: Threats and Opportunities for Assessment Integrity in Introductory Programming
Peer reviewed, Journal article
Published version
Permanent link: https://hdl.handle.net/11250/3125654
Publication date: 2023
Original version: NIKT: Norsk IKT-konferanse for forskning og utdanning. 2023.

Abstract
Recent AI tools like ChatGPT have prompted concerns that assessment integrity in education will be increasingly threatened. From the perspective of introductory programming courses, this paper poses two research questions: 1) How well does ChatGPT perform on various assessment tasks typical of a CS1 course? 2) How does this technology change the threat profile for various types of assessments? Question 1 is analyzed by trying out ChatGPT on a range of typical assessment tasks, including code writing, code comprehension and explanation, error correction, and code completion (e.g., Parson's problems, fill-in tasks, inline choice). Question 2 is addressed through a threat analysis of various assessment types, considering what AI chatbots add relative to pre-existing assessment threats. Findings indicate that for simple questions, answers tend to be perfect and ready to use, though some rephrasing work from the student may be needed if the task partly consists of images. For more difficult questions, solutions might not be perfect on the first try, but the student may be able to obtain a more precise answer through follow-up questions. The threat analysis indicates that chatbots might not introduce any entirely new threats; rather, they aggravate existing ones. The paper concludes with some thoughts on the future of assessment, reflecting that practitioners will likely use such bots in the workplace, so students must be prepared for this as well.
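
For readers unfamiliar with the task types mentioned in the abstract, the sketch below illustrates what a Parson's problem in a CS1 course might look like. It is a constructed example for illustration only, not an item taken from the paper, and the function name and task wording are assumptions.

    # Hypothetical Parson's problem (illustrative only, not from the paper):
    # the student receives the lines below in scrambled order and must
    # arrange (and indent) them into a working function that counts the
    # even numbers in a list.
    #
    # Scrambled lines as presented to the student:
    #     return count
    #     count += 1
    #     def count_evens(numbers):
    #     for n in numbers:
    #     count = 0
    #     if n % 2 == 0:

    # One correctly ordered and indented solution:
    def count_evens(numbers):
        count = 0
        for n in numbers:
            if n % 2 == 0:
                count += 1
        return count

    print(count_evens([1, 2, 3, 4, 5, 6]))  # prints 3

Tasks of this kind are easy to paste into a chatbot as plain text, which is part of why the paper treats them alongside code writing and error correction in its threat analysis.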