Action100M: A Large-scale Video Action Dataset
1️⃣ One-sentence summary
This paper introduces Action100M, a very large-scale video action dataset: a fully automated pipeline extracts roughly 100 million action segments with open-vocabulary annotations from a massive collection of Internet instructional videos, and training on the dataset is shown to substantially improve model performance across a range of action recognition tasks.
Inferring physical actions from visual observations is a fundamental capability for advancing machine intelligence in the physical world. Achieving this requires large-scale, open-vocabulary video action datasets that span broad domains. We introduce Action100M, a large-scale dataset constructed from 1.2M Internet instructional videos (14.6 years of total duration), yielding O(100 million) temporally localized segments with open-vocabulary action supervision and rich captions. Action100M is generated by a fully automated pipeline that (i) performs hierarchical temporal segmentation using V-JEPA 2 embeddings, (ii) produces multi-level frame and segment captions organized as a Tree-of-Captions, and (iii) aggregates evidence with a reasoning model (GPT-OSS-120B) under a multi-round Self-Refine procedure to output structured annotations (brief/detailed action, actor, brief/detailed caption). Training VL-JEPA on Action100M demonstrates consistent data-scaling improvements and strong zero-shot performance across diverse action recognition benchmarks, establishing Action100M as a new foundation for scalable research in video understanding and world modeling.
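The final pipeline stage, as described in the abstract, emits a structured annotation per segment and refines it over multiple Self-Refine rounds. A minimal sketch of that loop is below; all names (`ActionAnnotation`, `self_refine`, the toy critique/revise functions) are hypothetical illustrations, not the authors' code, and the real system uses a reasoning model rather than rule-based functions for critique and revision.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ActionAnnotation:
    """Structured output fields named in the abstract
    (brief/detailed action, actor, brief/detailed caption)."""
    brief_action: str
    detailed_action: str
    actor: str
    brief_caption: str
    detailed_caption: str


def self_refine(draft: str,
                critique_fn: Callable[[str], List[str]],
                revise_fn: Callable[[str, List[str]], str],
                max_rounds: int = 3) -> str:
    """Generic multi-round Self-Refine: critique the draft, revise it,
    and stop early once the critique comes back empty."""
    for _ in range(max_rounds):
        feedback = critique_fn(draft)
        if not feedback:
            break
        draft = revise_fn(draft, feedback)
    return draft


# Toy stand-ins for the reasoning model: flag filler words, then drop them.
FILLER = {"basically", "just"}

def critique(caption: str) -> List[str]:
    return [w for w in caption.split() if w in FILLER]

def revise(caption: str, feedback: List[str]) -> str:
    return " ".join(w for w in caption.split() if w not in feedback)


refined = self_refine("person just chops basically an onion", critique, revise)
# refined == "person chops an onion"
ann = ActionAnnotation(
    brief_action="chop onion",
    detailed_action="person chops an onion on a cutting board",
    actor="person",
    brief_caption=refined,
    detailed_caption=refined,
)
```

The early-exit condition (empty critique) is what makes the procedure "multi-round" rather than fixed-cost: easy segments converge in one round, ambiguous ones consume the full budget.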
Source: arXiv:2601.10592