Projects

2025

The NeSy project (previously known as LLM + Law) focuses on harnessing the power of symbolic languages and neuro-symbolic AI for legal use cases.

2023

Can we explain hallucinations in LLMs?

An explainable assessment tool or autograder for CSE courses

Autonomous vehicles are prone to making errors and failures without knowing why. Our goal is to (1) detect those errors and (2) explain them to make them easier to understand.

0001

Understanding what knowledge AI models learn and where they store it in their neurons.