A Case for Bayesian Grading
Published in SIGCSE Virtual 2024: Proceedings of the 2024 on ACM Virtual Global Computing Education Conference V. 1, 2024
Abstract: Academic integrity continues to be an issue in education. Due to practical constraints, students’ grades are often computed from a collection of evidence that varies in its trustworthiness (e.g., a proctored exam can be trusted more than an out-of-class programming project). When a student cheats, their trusted and less trustworthy scores are inconsistent, which presents instructors with a choice between rewarding the cheating behavior and bearing the burden of investigating and making cheating allegations.
In this position paper, we propose that Bayesian inference might be a useful tool for assigning grades derived from trusted and less trusted evidence. Rather than compute grades by performing arithmetic on both trusted and untrusted assessments, we instead try to infer a latent variable, the student’s mastery of the course material, from these observed performances and their potential for cheating. Key to this approach is that grades can be assigned that discount suspicious work without needing to explicitly make a cheating allegation. A logical conclusion of this approach is that the amount of trusted assessment needed for a given student depends on how inconsistent their trusted and untrusted assessments are.
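To make the idea concrete, here is a minimal sketch of this style of inference. The specific model is an illustrative assumption, not the paper's: mastery has a uniform prior on [0, 1], a trusted score is a low-noise observation of mastery, and an untrusted score is a mixture of an honest noisy observation and (with prior probability `p_cheat`) an inflated score near 1.0. All parameter names and values are hypothetical.

```python
import numpy as np

def posterior_mastery(trusted, untrusted, p_cheat=0.1,
                      sigma_t=0.05, sigma_u=0.10, grid_size=1001):
    """Grid-approximate the posterior over latent mastery in [0, 1].

    Hypothetical model (not from the paper): the trusted score is a
    Normal(mastery, sigma_t) observation; the untrusted score is either
    an honest Normal(mastery, sigma_u) observation (prob 1 - p_cheat)
    or an inflated Normal(1.0, sigma_u) observation (prob p_cheat).
    """
    m = np.linspace(0.0, 1.0, grid_size)  # uniform prior over mastery

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    like_trusted = normal_pdf(trusted, m, sigma_t)
    # Mixture: honest component tracks mastery; cheating component does not.
    like_untrusted = ((1 - p_cheat) * normal_pdf(untrusted, m, sigma_u)
                      + p_cheat * normal_pdf(untrusted, 1.0, sigma_u))
    post = like_trusted * like_untrusted
    post /= post.sum()
    return m, post

# Consistent evidence: the untrusted score corroborates the trusted one,
# so the posterior mean (the assigned grade) sits near both scores.
m, post = posterior_mastery(trusted=0.85, untrusted=0.88)
consistent_grade = float((m * post).sum())

# Inconsistent evidence: a suspiciously high untrusted score is largely
# explained by the cheating component, so it is discounted automatically.
m, post = posterior_mastery(trusted=0.55, untrusted=0.98)
inconsistent_grade = float((m * post).sum())
```

In the inconsistent case the grade stays close to the trusted score without the instructor ever asserting that cheating occurred, which is the behavior the paper argues for.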
Recommended citation: Craig Zilles, Chenyan Zhao, Yuxuan Chen, Evan Michael Matthews, and Matthew West. 2024. A Case for Bayesian Grading. In Proceedings of the 2024 on ACM Virtual Global Computing Education Conference V. 1 (SIGCSE Virtual 2024). Association for Computing Machinery, New York, NY, USA, 275–278. https://doi.org/10.1145/3649165.3703624