FURI | Summer 2025

Domain-Specialized Course Assistant: Fine-Tuning and RAG-Enhanced LLMs for Precision in Cloud Computing Education


Generic large language models (LLMs) often provide inaccurate or irrelevant answers to course-specific questions, limiting their educational value. To address this, we developed a specialized AI assistant for ASU’s Cloud Computing course by fine-tuning a local DeepSeek-R1 model (deployed via Ollama/Docker) on professor-student Q&A exchanges and integrating RAGFlow for retrieval-augmented generation over lectures, syllabi, and validated academic sources. This approach yielded higher accuracy and fewer hallucinations on course content than generic LLMs.
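The retrieval-augmented step described above can be illustrated with a minimal Python sketch. The snippet corpus, keyword-overlap retriever, and prompt template below are toy stand-ins (not RAGFlow's actual indexing or the project's real course data), and the model name and endpoint assume Ollama's default local server serving a DeepSeek-R1 model:

```python
# Minimal sketch of retrieval-augmented prompting against a local Ollama
# server. The corpus and retriever are illustrative placeholders only.
import json
import urllib.request

COURSE_SNIPPETS = [
    "Lecture 3: Elastic scaling lets cloud workloads grow and shrink on demand.",
    "Syllabus: Late lab submissions lose 10% per day, up to three days.",
    "Lecture 7: Object storage trades latency for durability and low cost.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank snippets by keyword overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Ground the model in retrieved course material to curb hallucination."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the course context below.\n"
        f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer:"
    )

def ask_ollama(prompt: str, model: str = "deepseek-r1") -> str:
    """POST to a local Ollama server (assumed running on the default port)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

question = "What is the penalty for late lab submissions?"
prompt = build_prompt(question, retrieve(question, COURSE_SNIPPETS))
# ask_ollama(prompt)  # uncomment when an Ollama instance is running locally
```

Grounding the answer in retrieved course snippets, rather than the model's parametric memory alone, is what drives the reduction in hallucinations reported above.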

Student researcher

Qizheng Yang

Computer science

Hometown: Beijing, Beijing, China

Graduation date: Fall 2025