Kumar Satvik Chaudhary
Computer science
Hometown: New Delhi, Delhi, India
Graduation date: Spring 2026
FURI | Fall 2025
EduCBM: Concept Bottleneck Models for Interpretable Education
Language models for education have demonstrated strong performance in tasks such as automatic essay grading, question answering, and providing tailored responses. Nonetheless, their “black box” nature creates major challenges for responsible implementation in educational environments, where transparency and interpretability are essential for building educator trust and enhancing student learning. We present EduCBM, a framework that transforms opaque educational AI into transparent systems that clearly explain their decision-making processes using recognizable teaching concepts, enabling educators to trust and verify automated grading and tutoring recommendations. Through comprehensive experiments on standardized test datasets, essay scoring corpora, and student response collections, we demonstrate that EduCBM maintains competitive predictive performance while providing valuable insights into model decision-making processes.
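The abstract's core idea, a model that routes its prediction through recognizable teaching concepts, can be illustrated with a minimal sketch. EduCBM's actual implementation is not described here; the concept names (`thesis_clarity`, `evidence_use`, `grammar`), layer sizes, and random weights below are all hypothetical, chosen only to show the general concept-bottleneck structure: inputs are first mapped to interpretable concept scores, and the final grade is predicted from those concepts alone.

```python
import numpy as np

# Hypothetical teaching concepts forming the interpretable bottleneck.
CONCEPTS = ["thesis_clarity", "evidence_use", "grammar"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneck:
    """Toy concept bottleneck: features -> concepts -> score."""

    def __init__(self, n_features, n_concepts, seed=0):
        rng = np.random.default_rng(seed)
        # First stage: input features -> concept activations.
        self.W_c = rng.normal(scale=0.1, size=(n_concepts, n_features))
        # Second stage: the final predictor sees ONLY the concepts,
        # so every prediction comes with a concept-level explanation.
        self.w_y = rng.normal(scale=0.1, size=n_concepts)

    def predict_concepts(self, x):
        # Each concept activation is squashed into [0, 1].
        return sigmoid(self.W_c @ x)

    def predict_score(self, x):
        c = self.predict_concepts(x)
        return float(self.w_y @ c), c  # score plus its explanation

model = ConceptBottleneck(n_features=5, n_concepts=len(CONCEPTS))
score, concepts = model.predict_score(np.ones(5))
for name, value in zip(CONCEPTS, concepts):
    print(f"{name}: {value:.2f}")
```

Because the grade depends only on the bottleneck, an educator can inspect (or even correct) the concept activations to verify why a particular score was assigned, which is the transparency property the abstract emphasizes.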
Mentor: Huan Liu
Featured project | Fall 2025

Kumar Satvik Chaudhary, a computer science undergraduate student, decided to participate in FURI to combine his passion for making the reasoning of artificial intelligence, or AI, models easier to explain with an interest in AI use for education. Mentored by Fulton Schools Regents Professor Huan Liu, who teaches in the computer science and engineering program, Chaudhary is working on a project to give students more valuable feedback from automated grading systems.
What made you want to get involved in this program? Why did you choose the project you’re working on?
My research interest started when I took CSE475 Foundations of Machine Learning at ASU, which opened my eyes to the power of AI in education. Later, I got the chance to work with Professor Huan Liu, and that experience shaped my passion for making models more transparent and trustworthy. I chose EduCBM, which is the name of my project, because I want to reduce the uncertainty in automated grading and give students and educators clearer, more interpretable feedback.
How will your engineering research project impact the world?
EduCBM tackles one of the biggest problems with AI in education — its “black box” nature. By making grading models interpretable, it ensures fairness and builds trust among students, instructors and institutions. In the long run, I see it helping educational systems adopt AI more responsibly and improving the way learning outcomes are measured.
What has been your most memorable experience as a student researcher in this program? Did you have a particular “aha!” moment during your project?
My most memorable moment was when I first saw how concept bottlenecks could explain a model’s decision in plain terms rather than just numbers. It felt like I had unlocked a window into how AI “thinks.” That moment confirmed to me that interpretability isn’t just a research goal — it’s a necessary bridge between technology and people.
Have there been any surprises in your research?
Yes, I was surprised by how often large language models can sound convincing but still give uncertain or inconsistent results. It was eye-opening to realize that even powerful models need additional layers, like concept bottlenecks, to make their outputs meaningful and reliable in high-stakes fields like education.
How do you see this experience helping with your career or advanced degree goals?
This project has deepened my interest in research and prepared me for graduate studies in machine learning and AI. It gave me hands-on experience in combining theory with practical applications, which I believe will help me pursue advanced degrees and eventually contribute to building trustworthy AI systems in both academia and industry.
What is the best advice you’ve gotten from your faculty mentor?
Professor Liu told me that research is not just about solving problems but about asking the right questions. That advice helped me shift my focus from chasing results to really thinking about why a problem matters and how my work can make a difference.
Why should other students get involved in this program?
I think FURI is one of the best ways to explore your passions beyond the classroom. It gives you the chance to ask big questions, work closely with faculty and build skills that carry forward into your career or graduate studies. More importantly, it shows you that your ideas can have a real-world impact even as an undergraduate.