FURI | Fall 2024
Disambiguating Human Language in Large Language Models and Analyzing Its Effect on NLP Task Accuracy
Ambiguity in natural language poses significant challenges for Large Language Models (LLMs) used in open-domain question answering and feature extraction. LLMs often struggle with the inherent uncertainties of human communication, leading to misinterpretations, miscommunications, and biased responses that undermine their reliability on downstream tasks such as feature extraction and sentiment analysis. Using open-domain question answering as a test case, we compare off-the-shelf and few-shot LLM performance, focusing on the impact of explicit disambiguation strategies. The study also introduces linguistic perturbations to investigate model robustness in realistic conversational scenarios.
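The experimental contrast described above can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the study's actual implementation: it builds an off-the-shelf (zero-shot) prompt and a few-shot prompt whose exemplars demonstrate explicit disambiguation before answering, and applies a toy lexical perturbation to mimic conversational noise. The exemplars, function names, and the perturbation rule are all assumptions.

```python
import random

# Hypothetical few-shot exemplars that demonstrate stating an ambiguity
# explicitly before answering; the content is illustrative only.
FEW_SHOT_EXEMPLARS = """\
Q: Where was the first McDonald's opened?
Disambiguation: "first McDonald's" may mean the original 1940 restaurant
or the first franchised location (1955).
A: The original McDonald's opened in San Bernardino, California, in 1940.

Q: Who won the World Cup in 2019?
Disambiguation: "World Cup" could refer to the FIFA Women's World Cup,
the Cricket World Cup, or the Rugby World Cup, all held in 2019.
A: The FIFA Women's World Cup was won by the United States, the Cricket
World Cup by England, and the Rugby World Cup by South Africa.
"""


def zero_shot_prompt(question: str) -> str:
    """Off-the-shelf condition: the question is posed directly."""
    return f"Q: {question}\nA:"


def few_shot_prompt(question: str) -> str:
    """Few-shot condition: exemplars show explicit disambiguation first."""
    return (
        "Answer each question. If the question is ambiguous, state the "
        "ambiguity before answering.\n\n"
        f"{FEW_SHOT_EXEMPLARS}\nQ: {question}\nDisambiguation:"
    )


def perturb(question: str, rng: random.Random) -> str:
    """Toy linguistic perturbation: swap two adjacent words to mimic
    the disfluencies of real conversational input."""
    words = question.split()
    if len(words) < 2:
        return question
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)


if __name__ == "__main__":
    rng = random.Random(0)
    question = "Who wrote the national anthem?"
    print(zero_shot_prompt(question))
    print(few_shot_prompt(question))
    print(perturb(question, rng))
```

In a study like this, both prompt variants would be sent to the same model over clean and perturbed questions, so that any accuracy gap can be attributed to the disambiguation strategy rather than to the input noise alone.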