---
license: apache-2.0
task_categories:
- video-text-to-text
---
## V-NIAH-D Benchmark
V-NIAH-D is a Visual Needle-In-A-Haystack benchmark with periodic distractors, introduced in [VideoRoPE: What Makes for Good Video Rotary Position Embedding?](https://huggingface.co/papers/2502.05173).
It can be used by following steps similar to those of [V-NIAH](https://github.com/EvolvingLMMs-Lab/LongVA).
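As a minimal sketch of getting started (the `repo_id` below is a placeholder, not this dataset's actual Hub id; the file layout may also differ), the benchmark files can be fetched locally with `huggingface_hub` before running the V-NIAH-style evaluation scripts:

```python
# Minimal sketch: download the V-NIAH-D files before following the
# V-NIAH evaluation steps. The repo_id is a placeholder — replace it
# with this dataset's actual Hub id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="your-org/V-NIAH-D",  # placeholder, not the real id
    repo_type="dataset",
)
print(f"Benchmark files downloaded to: {local_dir}")
```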
## VideoRoPE Training Data
To facilitate reproduction of our experimental results, we have also uploaded the data used to train VideoRoPE. We use a subset of the [LLaVA-Video-178K dataset](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
The LLaVA-Video-178K dataset consists of 178K videos and approximately 5 million question-answer (QA) pairs from diverse sources such as HD-VILA, Kinetics, and ActivityNet. To balance training efficiency and long-video comprehension, we randomly select 136K videos with durations under 2 minutes and 18K videos with durations between 2 and 3 minutes. This yields a training set of approximately 1.3 million QA pairs.
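The duration-based selection described above can be reproduced with a simple filter over the video metadata. The sketch below is illustrative only: the field name `duration` and the exact sampling logic are assumptions, not the script actually used for VideoRoPE.

```python
import random

def select_subset(videos, seed=0):
    """Illustrative duration-based subset selection.

    `videos` is assumed to be a list of dicts with a "duration" field in
    seconds; the real LLaVA-Video-178K metadata layout may differ.
    """
    random.seed(seed)
    short = [v for v in videos if v["duration"] < 120]            # under 2 minutes
    medium = [v for v in videos if 120 <= v["duration"] <= 180]   # 2 to 3 minutes
    # Randomly sample 136K short and 18K medium videos, as reported above.
    short_sample = random.sample(short, min(136_000, len(short)))
    medium_sample = random.sample(medium, min(18_000, len(medium)))
    return short_sample + medium_sample
```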