FURI | Fall 2024
LLM-Multimodal Misinformation Detection
This research project aims to devise a novel framework that improves the detection of the increasingly sophisticated multimodal misinformation produced by Generative AI across social media platforms. Building on prior studies, the approach integrates Vision-Language Models (VLMs) and Large Language Models (LLMs) to detect inconsistencies between the text and images of news articles, while also fact-checking textual claims through question-answer (QA) generation. The researchers seek to improve upon existing baselines for multimodal misinformation detection, ultimately increasing robustness and reducing the likelihood that AI-generated news spreads across social media platforms without requiring expensive human oversight.
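As a rough illustration of the two-stage pipeline described above, the sketch below scores image-text consistency with an off-the-shelf VLM (CLIP via Hugging Face `transformers`, assumed here for illustration, not confirmed as the project's model) and stubs out the LLM-based QA generation step. The threshold, helper names, and the canned question generator are all hypothetical placeholders, not the project's actual implementation.

```python
# Minimal sketch, assuming CLIP as the VLM for image-text consistency
# and a placeholder in lieu of the project's LLM for QA generation.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

vlm = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def image_text_consistency(image_path: str, caption: str) -> float:
    """Score how well an article image agrees with its accompanying text."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    outputs = vlm(**inputs)
    # Higher logit => stronger image-text agreement.
    return outputs.logits_per_image.item()


def generate_fact_check_questions(claim: str) -> list[str]:
    """Placeholder for the LLM QA-generation step: in the real framework an
    LLM would turn the claim into verification questions to be answered
    against external evidence."""
    return [f"Who is involved in the claim: '{claim}'?",
            f"When and where did the event in '{claim}' occur?",
            f"What independent sources support '{claim}'?"]


def flag_article(image_path: str, caption: str, claim: str,
                 consistency_threshold: float = 20.0) -> bool:
    """Flag an article as suspicious if its image and text disagree;
    otherwise hand its claims to QA-based fact-checking."""
    if image_text_consistency(image_path, caption) < consistency_threshold:
        return True  # cross-modal inconsistency detected
    questions = generate_fact_check_questions(claim)
    # Answers would be retrieved and compared against the claim here.
    return False
```

The split mirrors the description in the abstract: the VLM handles cross-modal (image vs. text) checks, while the LLM handles purely textual fact-checking via generated questions; how the two signals are combined in the actual framework is not specified here.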