An explainable assessment tool or autograder for CSE courses
Can we explain hallucinations in LLMs?
Autonomous vehicles are prone to errors and failures without knowing why. Our goal is to (1) detect those errors and (2) explain them to make them more understandable.