Disaster Tweets - BERT (fine-tuned) [IMPORTANT NOTE: the class imbalance in the training data was INTENTIONALLY kept. See the notebook's comments for the reasoning.]
Model: bert-base-uncased fine-tuned for binary classification (disaster vs not-disaster).
Author: sakibalfahim
Uploaded: 2025-12-22
Quick description
A compact, fine-tuned BERT model to classify tweets as Disaster or Not Disaster.
Label mapping
0 → Not Disaster
1 → Disaster
How to use (example)
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo_id = 'sakibalfahim/disaster-tweets-bert'
token = 'hf_xxx'  # use a secure token or log in via huggingface-cli

# Load the fine-tuned tokenizer and classifier from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id, token=token)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, token=token)
model.eval()

# Tokenize one tweet and classify it without tracking gradients
inputs = tokenizer('Massive flood reported in downtown area', return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print('Prediction:', {0: 'Not Disaster', 1: 'Disaster'}[pred])
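For batches of tweets, or when a confidence score is useful, the same objects can score several texts at once with a softmax over the logits. A minimal sketch reusing the tokenizer and model loaded above; the example tweets are illustrative placeholders:

texts = [
    'Wildfire spreading near the highway, evacuations underway',
    'This new burger is a disaster, never ordering again',
]
batch = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**batch).logits, dim=-1)  # per-class probabilities
for text, p in zip(texts, probs):
    label = {0: 'Not Disaster', 1: 'Disaster'}[int(p.argmax())]
    print(f'{label} ({p.max().item():.2f}): {text}')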
Demo
URL: https://huggingface.co/spaces/sakibalfahim/disaster-tweets-demo
Training summary
- Base model: bert-base-uncased (google-bert/bert-base-uncased)
- Training environment: Google Colab (GPU); the general fine-tuning pattern is sketched after this list
- Saved artifacts: uploaded to this repository.
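The exact preprocessing and hyperparameters are in the notebook. As a rough orientation only, fine-tuning bert-base-uncased for binary classification typically follows the standard transformers Trainer pattern below; the hyperparameter values are illustrative assumptions, not the ones used for this model, and train_ds / val_ds stand in for pre-tokenized datasets that are not shown here.

from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2)  # two labels: Not Disaster / Disaster

args = TrainingArguments(
    output_dir='disaster-bert-out',      # all values illustrative; see notebook
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # hypothetical pre-tokenized train split
    eval_dataset=val_ds,     # hypothetical pre-tokenized validation split
)
trainer.train()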
Intended use & limitations
Intended for research/demo use. Validate on your domain before any high-stakes use. Be cautious with domain shift, sarcasm, or non-English text.
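One quick way to validate on your own domain is to score a small labeled sample and check accuracy and F1 before trusting the model further. A sketch reusing the tokenizer and model from the usage example above; texts and labels stand in for your own labeled data (list of strings, list of 0/1 ints):

from sklearn.metrics import accuracy_score, f1_score
import torch

def predict(texts, batch_size=32):
    # Batched inference returning hard 0/1 predictions
    preds = []
    for i in range(0, len(texts), batch_size):
        batch = tokenizer(texts[i:i + batch_size], return_tensors='pt',
                          padding=True, truncation=True)
        with torch.no_grad():
            preds.extend(model(**batch).logits.argmax(-1).tolist())
    return preds

preds = predict(texts)  # texts / labels: your own labeled domain sample
print('Accuracy:', accuracy_score(labels, preds))
print('F1 (Disaster):', f1_score(labels, preds))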
Reproducibility
Check the notebook for exact preprocessing, hyperparameters, and seed.
License
MIT License.
Contact
Author: sakibalfahim, via Hugging Face profile.