Explainable Autograder

Current software assessment tools, or “autograders,” automatically evaluate and score student programming assignments using output-based feedback, but they offer no conceptual guidance to help students or instructors improve. Students and instructors receive a grade showing which test cases pass or fail, but no clear rationale for why those outcomes occurred. Better feedback is needed both for students, to make learning more efficient [1], and for instructors, to inform pedagogy and system evaluation [2]. This project addresses that gap by developing an explainable AI (XAI) autograder that identifies conceptual strengths and weaknesses in computer science students’ answers.
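
As a rough sketch of this distinction, the hypothetical Python snippet below contrasts conventional pass/fail feedback with feedback that tags each failing test with the concept it exercises. The TestCase structure, the concept labels, and the grade function are illustrative assumptions for this proposal, not part of any existing autograder.

    # Minimal sketch (hypothetical): a conventional autograder reports only
    # pass/fail per test case; an explainable one also maps each failing
    # test to the concept it exercises, so feedback names the likely
    # conceptual weakness rather than just the failing output.

    from dataclasses import dataclass
    from typing import Callable


    @dataclass
    class TestCase:
        name: str
        concept: str                     # concept the test exercises (assumed label)
        run: Callable[[Callable], bool]  # returns True if the submission passes


    def grade(submission: Callable, tests: list[TestCase]) -> None:
        for t in tests:
            passed = t.run(submission)
            verdict = "PASS" if passed else "FAIL"
            # Conventional feedback stops at the verdict; the explainable
            # version appends the concept behind each failure.
            extra = "" if passed else f"  -> likely weakness: {t.concept}"
            print(f"{t.name}: {verdict}{extra}")


    # Usage example: grading a buggy student implementation of list summation.
    def student_sum(xs):
        return sum(xs[:-1])  # bug: drops the last element

    tests = [
        TestCase("empty list", "base case handling", lambda f: f([]) == 0),
        TestCase("single item", "loop bounds / off-by-one", lambda f: f([5]) == 5),
        TestCase("many items", "accumulation", lambda f: f([1, 2, 3]) == 6),
    ]

    grade(student_sum, tests)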

References

[1] Ambrose, Susan A., Michael W. Bridges, Michele DiPietro, Marsha C. Lovett, and Marie K. Norman (2010). “What kinds of practice and feedback enhance learning?” In How Learning Works: Seven research-based principles for smart teaching (pp. 121–152). San Francisco, CA: Jossey-Bass.

[2]