Disaster Tweets – BERT (fine-tuned) [IMPORTANT NOTE: The class imbalance in the training data was INTENTIONALLY kept. See the notebook's comments for the rationale.]

Model: bert-base-uncased fine-tuned for binary classification (disaster vs. not-disaster). Author: sakibalfahim. Uploaded: 2025-12-22.


Quick description

A compact, fine-tuned BERT model to classify tweets as Disaster or Not Disaster.

Label mapping

  • 0 → Not Disaster
  • 1 → Disaster
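
If the repo's config stores this mapping, it can also be read from the model instead of hardcoded (a minimal sketch; assumes id2label was set during fine-tuning):

from transformers import AutoConfig

# Assumption: the repo's config.json carries id2label; if it only has the
# default LABEL_0/LABEL_1 names, use the mapping listed above instead.
config = AutoConfig.from_pretrained('sakibalfahim/disaster-tweets-bert')
print(config.id2label)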

How to use (example)

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo_id = 'sakibalfahim/disaster-tweets-bert'
token = 'hf_xxx'  # only needed for private repos; prefer `huggingface-cli login` over hardcoding

# Load the tokenizer and the fine-tuned classifier from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id, token=token)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, token=token)
model.eval()  # disable dropout for inference

# Tokenize one tweet and run a forward pass without tracking gradients
inputs = tokenizer('Massive flood reported in downtown area', return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
    pred = int(logits.argmax(-1)[0].item())  # index of the highest-scoring class
print('Prediction:', {0: 'Not Disaster', 1: 'Disaster'}[pred])
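
For quick experiments, the same checkpoint also works through the transformers pipeline API (a minimal sketch; the returned label strings depend on the id2label mapping stored in the repo's config):

from transformers import pipeline

clf = pipeline('text-classification', model='sakibalfahim/disaster-tweets-bert')
print(clf('Massive flood reported in downtown area'))
# e.g. [{'label': 'Disaster', 'score': 0.97}]; exact label names and scores vary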

Demo

URL: https://huggingface.co/spaces/sakibalfahim/disaster-tweets-demo

Training summary

  • Base model: bert-base-uncased
  • Training environment: Google Colab (GPU)
  • Model size: ~0.1B parameters, F32 tensors, saved as safetensors
  • Saved artifacts: uploaded to this repository.
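
The exact hyperparameters live in the notebook; the overall shape of the run is a standard Trainer fine-tune of bert-base-uncased with two labels. The sketch below is hedged: the toy dataset, epoch count, batch size, and learning rate are placeholders, not the author's recorded values.

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2,
    id2label={0: 'Not Disaster', 1: 'Disaster'},
    label2id={'Not Disaster': 0, 'Disaster': 1})

# Toy stand-in for the real tweet dataset; the notebook defines the actual split.
train_ds = Dataset.from_dict({
    'text': ['Massive flood reported downtown', 'Loving this sunny afternoon'],
    'label': [1, 0],
})

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, padding='max_length', max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir='out', num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()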

Intended use & limitations

Intended for research/demo use. Validate on your domain before any high-stakes use. Be cautious with domain shift, sarcasm, or non-English text.
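
One concrete way to run that validation: score a small labeled sample from your own domain and inspect standard metrics (a sketch; the CSV path and column names are placeholders):

import pandas as pd
import torch
from sklearn.metrics import classification_report
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = 'sakibalfahim/disaster-tweets-bert'
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

df = pd.read_csv('my_domain_sample.csv')  # placeholder file with 'text' and 'label' columns
inputs = tokenizer(list(df['text']), return_tensors='pt', truncation=True, padding=True)
with torch.no_grad():
    preds = model(**inputs).logits.argmax(-1).tolist()
print(classification_report(df['label'], preds, target_names=['Not Disaster', 'Disaster']))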

Reproducibility

Check the notebook for the exact preprocessing, hyperparameters, and random seed.
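
A typical setup that pins the common sources of randomness looks like this (a sketch; 42 is a placeholder, the notebook records the actual value):

from transformers import set_seed

SEED = 42  # placeholder; use the value recorded in the notebook
set_seed(SEED)  # seeds Python's random, NumPy, and PyTorch (incl. CUDA)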

License

MIT License.

Contact

Author: sakibalfahim (contact via Hugging Face profile).
