---
title: TruthCheck AI
emoji: 👁️
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# TruthCheck: AI-Powered Fact Verification System
A state-of-the-art Automated Fact-Checking System that uses a multi-stage neural pipeline to verify text claims in real-time. It combines Web Scraping, Semantic Search, and Natural Language Inference (NLI) to determine the truthfulness of statements with high precision.
## 🚀 Key Features

### 🧠 Advanced AI Core
- Multi-Model Consensus: Aggregates judgments from `RoBERTa-large-MNLI` and `DeBERTa-v3-large` for robust accuracy.
- Semantic Filtering: Uses `Sentence-Transformers` to ensure only relevant evidence is analyzed.
- Credibility Weighting: Automatically assigns higher trust scores to `.gov`, `.edu`, and scientific domains.
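The credibility-weighting idea can be sketched in a few lines of Python. The suffix table, weights, and `credibility_weight` helper below are illustrative assumptions, not the project's actual values:

```python
from urllib.parse import urlparse

# Illustrative trust multipliers; the real system's values may differ.
DOMAIN_WEIGHTS = {".gov": 1.5, ".edu": 1.4, "wikipedia.org": 1.2}
DEFAULT_WEIGHT = 1.0

def credibility_weight(url: str) -> float:
    """Return a trust multiplier based on the source URL's domain."""
    host = urlparse(url).netloc.lower()
    for suffix, weight in DOMAIN_WEIGHTS.items():
        if host.endswith(suffix):
            return weight
    return DEFAULT_WEIGHT

print(credibility_weight("https://www.cdc.gov/vaccines"))  # 1.5
```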
### 💻 Modern "Cyber-Noir" Interface
- Futuristic UI: deep space blue theme with neon cyan/purple accents using Tailwind CSS.
- Real-Time Dashboard: Track system stats, truth rates, and scan history in the Command Center.
- Interactive Visuals: Animated confidence gauges, evidence streams, and live "scanning" effects.
### ⚙️ Enterprise-Ready
- REST API: Fully documented endpoint (`/api/verify`) for external integration.
- Persistence: Built-in SQLite database stores all verification history.
- Scalable Architecture: Modular design separating Extraction, Retrieval, and Classification layers.
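For illustration, here is a minimal `sqlite3` sketch of how verification history could be persisted. The schema and function names are hypothetical; the app's real `history.db` layout may differ:

```python
import sqlite3

# Hypothetical schema for the verification-history store.
def init_history(conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS history (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               claim TEXT NOT NULL,
               label TEXT NOT NULL,
               confidence REAL NOT NULL,
               checked_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )

def record(conn, claim, label, confidence):
    """Append one verification result to the history table."""
    conn.execute(
        "INSERT INTO history (claim, label, confidence) VALUES (?, ?, ?)",
        (claim, label, confidence),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # the real app would open "history.db"
init_history(conn)
record(conn, "Water boils at 100 degrees Celsius.", "True", 0.99)
rows = conn.execute("SELECT claim, label, confidence FROM history").fetchall()
print(rows)  # [('Water boils at 100 degrees Celsius.', 'True', 0.99)]
```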
## 🏗️ System Architecture (Top-to-Bottom)
The application follows a strictly layered pipeline architecture:
**Input Layer:**
- User submits a claim via the Web UI or API.
- The `ClaimExtractor` identifies factual statements using spaCy.
**Retrieval Layer:**
- `KeywordExtractor` pulls search terms (Entities/Nouns).
- `EvidenceRetriever` scrapes trusted sources (Wikipedia, Google, DuckDuckGo).
- Evidence is filtered by domain credibility and semantic similarity.
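The semantic-similarity filtering step can be illustrated with a toy example. The real system uses `Sentence-Transformers` embeddings; here, simple bag-of-words vectors and cosine similarity stand in so the sketch is self-contained:

```python
import math

def bow_vector(text: str) -> dict:
    """Count word occurrences; a crude stand-in for a sentence embedding."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(count * b.get(tok, 0) for tok, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_evidence(claim: str, snippets: list, threshold: float = 0.3) -> list:
    """Keep only snippets whose similarity to the claim clears the threshold."""
    cv = bow_vector(claim)
    return [s for s in snippets if cosine(cv, bow_vector(s)) >= threshold]
```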
**Inference Layer (The "Brain"):**
- Filtered evidence is paired with the claim (Premise + Hypothesis).
- NLI Models classify each pair as `Entailment`, `Contradiction`, or `Neutral`.
- A weighted voting algorithm calculates the final Verdict and Confidence Score.
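A weighted vote over per-evidence NLI judgments might look like the following sketch. The label handling, weights, and tie-breaking here are illustrative, not the project's exact algorithm:

```python
from collections import defaultdict

def weighted_verdict(judgments):
    """Aggregate (nli_label, model_confidence, source_weight) triples.

    Illustrative only: entailment supports the claim, contradiction
    refutes it, and neutral evidence contributes to neither side.
    """
    scores = defaultdict(float)
    for label, conf, weight in judgments:
        scores[label] += conf * weight
    total = sum(scores.values()) or 1.0
    support, refute = scores["entailment"], scores["contradiction"]
    if support == refute:
        return "Unverified", scores["neutral"] / total
    verdict = "True" if support > refute else "False"
    return verdict, max(support, refute) / total

judgments = [
    ("entailment", 0.95, 1.5),     # strongly supporting, high-credibility source
    ("entailment", 0.80, 1.0),
    ("contradiction", 0.60, 0.5),  # weakly refuting, low-credibility source
]
print(weighted_verdict(judgments))  # verdict leans "True"
```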
**Presentation Layer:**
- Results are returned to the user with a color-coded verdict (Green/Red/Amber).
- Data is archived in the `history.db` SQLite database.
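Putting the four layers together, the flow can be sketched with stand-in components. Every function below is a stub replacing a real module such as the `ClaimExtractor` or the transformer NLI models:

```python
# Toy end-to-end walk through the four layers; no web access or models needed.
def extract_claims(text):        # Input layer (stand-in for ClaimExtractor)
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim):    # Retrieval layer (stubbed: no real scraping)
    return [f"Snippet about: {claim}"]

def classify(claim, evidence):   # Inference layer (stubbed: fixed verdict)
    return {"label": "True", "confidence": 0.9}

def verify(text):                # Presentation layer: one result per claim
    results = []
    for claim in extract_claims(text):
        evidence = retrieve_evidence(claim)
        results.append({"claim": claim, **classify(claim, evidence), "evidence": evidence})
    return results

print(verify("Water boils at 100 degrees Celsius."))
```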
## 📦 Installation & Setup Guide
Follow these steps to deploy the system locally.
### Prerequisites
- Python 3.10+ installed.
- Git installed.
- Internet connection (for downloading models).
### Step 1: Clone the Repository

```bash
git clone https://github.com/CHRISDANIEL145/truth-check.git
cd truth-check
```
### Step 2: Create Virtual Environment
Isolate dependencies to avoid conflicts.

```bash
# Windows
python -m venv venv
.\venv\Scripts\activate

# Linux/Mac
python3 -m venv venv
source venv/bin/activate
```
### Step 3: Install Dependencies
This will install PyTorch, Transformers, spaCy, and Flask.

```bash
pip install -r requirements.txt
```
### Step 4: Download Language Models
Pre-download the spaCy model used for claim extraction.

```bash
python -m spacy download en_core_web_sm
```

Note: The Transformer models (RoBERTa/DeBERTa) download automatically on the first run (approx. 3 GB).
### Step 5: Run the Application
Start the Flask server.

```bash
python run.py
```

You should see output indicating the server is running on `http://127.0.0.1:5000`.
## 📖 Usage Guide

### 1. Using the Analyzer
- Navigate to `http://127.0.0.1:5000`.
- Type a factual claim (e.g., "The Great Wall of China is visible from space").
- Click INIT_SCAN.
- View the Verdict, Confidence Score, and supporting/contradicting Evidence.
### 2. The Dashboard
- Click Dashboard in the top navigation.
- View global statistics (Truth Rate, Total Scans).
- Review your complete verification history.
### 3. API Integration
Invoke the verification engine programmatically:
Endpoint: `POST /api/verify`

Request:

```json
{
  "claim": "Water boils at 100 degrees Celsius."
}
```

Response:

```json
{
  "label": "True",
  "confidence": 0.99,
  "evidence": "..."
}
```
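A minimal Python client for this endpoint, using only the standard library. The URL and timeout are assumptions based on the local setup above:

```python
import json
from urllib import request

API_URL = "http://127.0.0.1:5000/api/verify"  # local dev server

def build_request(claim: str) -> request.Request:
    """Construct the JSON POST request for /api/verify."""
    body = json.dumps({"claim": claim}).encode("utf-8")
    return request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def verify_claim(claim: str) -> dict:
    """Send the claim and return the parsed verdict dict."""
    with request.urlopen(build_request(claim), timeout=120) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the server from Step 5 to be running):
#   verify_claim("Water boils at 100 degrees Celsius.")
#   -> {"label": "True", "confidence": 0.99, "evidence": "..."}
```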
## 📁 Project Structure
```text
TruthCheck/
├── app.py                     # Main Flask application & routes
├── run.py                     # Entry point
├── history.db                 # SQLite database (auto-created)
├── models/                    # AI Core
│   ├── claim_extractor.py     # Identifies claims
│   ├── evidence_retriever.py  # Web scraping logic
│   ├── keyword_extractor.py   # NLP keyword extraction
│   └── nli_classifier.py      # RoBERTa/DeBERTa inference pipeline
├── static/                    # Frontend Assets
│   ├── css/style.css          # Custom animations & styles
│   └── js/main.js             # Frontend logic
├── templates/                 # HTML Views
│   ├── index.html             # Analyzer UI
│   ├── dashboard.html         # Stats & History
│   ├── how_it_works.html      # Architecture Docs
│   └── api.html               # API Docs
├── utils/                     # Helpers
└── config.py                  # App configuration
```
## 🤝 Contributing
Contributions are welcome! Please fork the repository and submit a Pull Request.
## 📄 License
This project is licensed under the MIT License.