This project develops a system that verifies the accuracy of information crawled from social media platforms (e.g., Twitter) using a locally deployed Large Language Model (LLM). To mitigate inherent LLM limitations such as hallucination, we integrate Retrieval-Augmented Generation (RAG) to ground responses in factual knowledge and employ a multi-model framework that requires consensus among independent models. Compared to standard single-model prompting, our method improves verification precision and offers a secure, effective solution for automated misinformation detection without relying on external APIs.
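The pipeline described above (retrieve evidence, query several local models, take the majority verdict) can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the keyword-overlap retriever stands in for a real vector-store lookup, the model callables stand in for locally deployed LLMs, and the verdict labels are hypothetical.

```python
from collections import Counter

def retrieve_evidence(claim, knowledge_base, k=3):
    """Toy RAG retrieval: rank documents by keyword overlap with the claim.
    A real system would use embeddings and a vector store instead."""
    claim_words = set(claim.lower().split())
    scored = [(len(claim_words & set(doc.lower().split())), doc)
              for doc in knowledge_base]
    scored.sort(reverse=True)
    return [doc for score, doc in scored if score > 0][:k]

def verify_claim(claim, knowledge_base, models):
    """Multi-model consensus: each model judges the claim against the
    retrieved evidence; the majority verdict wins, with an agreement ratio."""
    evidence = retrieve_evidence(claim, knowledge_base)
    verdicts = [model(claim, evidence) for model in models]
    verdict, count = Counter(verdicts).most_common(1)[0]
    return verdict, count / len(models)
```

In use, each entry in `models` would wrap a prompt to one locally hosted LLM; here simple stubs suffice to show the consensus logic, e.g. two models answering "supported" and one "refuted" yields the verdict "supported" with agreement 2/3.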