# LiveOCRVQA: Mitigating Data Contamination to Test True LMM Stylized Text Reading

[**Homepage**](https://flageval-baai.github.io/LiveOCRVQA/) | [**Dataset**](https://huggingface.co/datasets/BAAI/LiveOCRVQA) | [**Paper**]() | [**arXiv**]() | [**GitHub**](https://github.com/flageval-baai/FlagEvalMM/tree/liveocrvqa/tasks/liveocrvqa)

## Overview

Large Multimodal Models (LMMs) have demonstrated impressive text recognition capabilities on standard visual question answering (VQA) and document-centric VQA benchmarks. However, these benchmarks primarily feature text in standardized, print-like formats, failing to capture the diverse, stylized fonts encountered in real-world scenarios such as artistic designs and web media.

While some existing datasets include images with complex text, they often rely on older, static image collections, risking data contamination from LMMs' extensive web pretraining. High performance on these benchmarks may therefore reflect a model's ability to recall previously seen content rather than true text recognition capability.

LiveOCRVQA addresses this gap by:

1. Using continuously updated visual content and corresponding metadata across four diverse categories
2. Employing a semi-automated pipeline to curate images with stylized text
3. Focusing on text that humans can easily decipher but that effectively challenges models' text processing abilities

## Data

The LiveOCRVQA dataset consists of 385 instances of images containing stylized text, sourced from:

- **Album covers** - Recent music album artwork
- **Movie posters** - Newly released film promotional materials
- **Game artwork** - Current video game title screens and promotional images
- **Book covers** - Recent book cover designs

## Evaluation Results

Our evaluation of 21 prominent LMMs reveals that:

1. Even the most advanced models struggle significantly with queries involving stylized text from novel content
2. Current LMMs often rely on recalling textual content from previously seen images rather than performing fine-grained character recognition
3. This performance disparity suggests that high scores on earlier, more standardized benchmarks may not accurately reflect robust text recognition across varied styles

## Usage

The dataset is available both in raw format (JSON + images) and as a processed Parquet file.

```python
# Example loading code (with Hugging Face datasets)
from datasets import load_dataset

dataset = load_dataset("BAAI/LiveOCRVQA")
```
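
Once loaded, individual examples can be browsed directly. The snippet below is a minimal sketch for inspecting a few samples; the column names it assumes (an `image` field plus text fields such as the question and answer) are illustrative, not the documented schema, so check the dataset card for the actual format.

```python
# Minimal sketch for browsing a few examples. The exact column names are an
# assumption here -- consult the dataset card / viewer for the real schema.
from datasets import load_dataset

dataset = load_dataset("BAAI/LiveOCRVQA")
split = next(iter(dataset.values()))  # take whichever split is available

for example in split.select(range(3)):
    # Print everything except large image objects to keep the output readable.
    print({k: v for k, v in example.items() if k != "image"})
```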

## Leaderboard

| Model | Overall | Album | Book | Game | Movie |
|-------|---------|-------|------|------|-------|
| Gemini-2.5-pro | 80.52% | 68.31% | 90.14% | 85.71% | 88.33% |
| Gemini-2.5-flash | 75.32% | 66.20% | 80.28% | 81.25% | 80.00% |
| Qwen2-VL-7B | 72.21% | 64.08% | 83.10% | 71.43% | 80.00% |
| GPT-4o-2411 | 69.09% | 57.04% | 90.14% | 74.11% | 63.33% |
| gpt-4o-mini | 67.53% | 54.93% | 83.10% | 70.54% | 73.33% |
| Qwen2.5-VL-72B | 67.27% | 53.52% | 71.83% | 75.00% | 80.00% |
| GPT-4o-2408 | 67.01% | 48.59% | 81.69% | 79.46% | 70.00% |
| Qwen2.5-VL-7B | 65.97% | 51.41% | 74.65% | 72.32% | 78.33% |
| Qwen2-VL-2B | 64.68% | 52.82% | 76.06% | 66.07% | 76.67% |
| Qwen2-VL-72B | 63.64% | 56.34% | 66.20% | 66.07% | 73.33% |
| InternVL3-78B | 63.64% | 54.23% | 67.61% | 68.75% | 71.67% |
| Claude-3-7-sonnet | 62.60% | 38.03% | 85.92% | 66.07% | 86.67% |
| Claude-3-5-sonnet | 59.74% | 39.44% | 76.06% | 62.50% | 83.33% |
| Pixtral-Large | 58.96% | 41.55% | 76.06% | 60.71% | 76.67% |
| InternVL2.5-78B | 50.65% | 43.66% | 54.93% | 53.57% | 56.67% |
| InternVL3-8B | 49.61% | 40.14% | 54.93% | 49.11% | 66.67% |
| LLaVA-OV-7b | 45.71% | 45.77% | 50.70% | 41.07% | 48.33% |
| InternVL2.5-8B | 39.22% | 30.28% | 52.11% | 41.07% | 41.67% |
| Phi-3.5-vision | 37.40% | 19.01% | 40.85% | 53.57% | 46.67% |
| Idefics3-8B | 28.05% | 29.58% | 39.44% | 17.86% | 30.00% |
| Phi-4-multimodal | 17.92% | 16.20% | 21.13% | 17.86% | 18.33% |
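
For context, the Overall column appears consistent with an instance-weighted average of the per-category scores. The category sizes used in the sketch below (142 album, 71 book, 112 game, 60 movie, summing to the 385 instances above) are inferred from the reported percentages rather than officially stated, so treat this purely as a sanity check.

```python
# Sanity-check sketch: reproduce an Overall score as an instance-weighted mean of
# the per-category accuracies. The category sizes are inferred, not officially stated.
CATEGORY_SIZES = {"Album": 142, "Book": 71, "Game": 112, "Movie": 60}  # sums to 385

def overall_accuracy(per_category_pct):
    """Weight each category's accuracy (in %) by its inferred instance count."""
    total = sum(CATEGORY_SIZES.values())
    return sum(per_category_pct[c] * n for c, n in CATEGORY_SIZES.items()) / total

# Gemini-2.5-pro's row reproduces its reported 80.52% overall score.
print(round(overall_accuracy({"Album": 68.31, "Book": 90.14, "Game": 85.71, "Movie": 88.33}), 2))
```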

## Future Updates

We plan to release updates to the LiveOCRVQA benchmark on a quarterly basis to continuously track the performance of various LMMs on truly novel content with stylized text.

## Citation

If you use LiveOCRVQA in your research, please cite our paper:

```bibtex
@article{LiveOCRVQA,
  title={LiveOCRVQA: Mitigating Data Contamination to Test True LMM Stylized Text Reading},
  author={},
  journal={},
  year={}
}
```