📄 Abstract - ARC-Chapter: Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries
The proliferation of hour-long videos (e.g., lectures, podcasts, documentaries) has intensified the demand for efficient content structuring. However, existing approaches are constrained by small-scale training sets whose annotations are typically short and coarse, limiting generalization to the nuanced transitions found in long videos. We introduce ARC-Chapter, the first large-scale video chaptering model trained on million-scale long-video chapter annotations that are bilingual, temporally grounded, and hierarchical. To this end, we curated a bilingual English-Chinese chapter dataset via a structured pipeline that unifies ASR transcripts, scene text, and visual captions into multi-level annotations, ranging from short titles to long summaries. We demonstrate clear performance improvements with scaling, both in data volume and in label density. Moreover, we design a new evaluation metric, GRACE, which incorporates many-to-one segment overlaps and semantic similarity, better reflecting the flexibility of real-world chaptering. Extensive experiments show that ARC-Chapter establishes a new state of the art by a significant margin, outperforming the previous best method by 14.0% in F1 score and 11.3% in SODA score. ARC-Chapter also transfers well, improving the state of the art on downstream tasks such as dense video captioning on YouCook2.
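The abstract describes GRACE only at a high level (many-to-one segment overlaps combined with semantic similarity), so the following is a minimal illustrative sketch of that idea rather than the paper's actual metric. The `Chapter` dataclass, the `alpha` weighting, and the token-level Jaccard similarity used in place of an embedding model are all assumptions made for the example.

```python
# Illustrative sketch only: shows the two ingredients the abstract names for GRACE,
# i.e. many-to-one temporal overlap and semantic similarity of chapter titles.
# Function names, the alpha weight, and the Jaccard similarity are assumptions.

from dataclasses import dataclass

@dataclass
class Chapter:
    start: float   # seconds
    end: float     # seconds
    title: str

def temporal_iou(a: Chapter, b: Chapter) -> float:
    """Intersection-over-union of two time intervals."""
    inter = max(0.0, min(a.end, b.end) - max(a.start, b.start))
    union = max(a.end, b.end) - min(a.start, b.start)
    return inter / union if union > 0 else 0.0

def title_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity as a cheap stand-in for an embedding model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def grace_like_score(pred: list[Chapter], gt: list[Chapter], alpha: float = 0.5) -> float:
    """For each predicted chapter, find its best-overlapping ground-truth chapter
    (many-to-one matching is allowed) and blend temporal overlap with title
    similarity; average over predictions. The blend weight alpha is assumed."""
    if not pred or not gt:
        return 0.0
    total = 0.0
    for p in pred:
        best = max(gt, key=lambda g: temporal_iou(p, g))
        total += alpha * temporal_iou(p, best) + (1 - alpha) * title_similarity(p.title, best.title)
    return total / len(pred)

if __name__ == "__main__":
    gt = [Chapter(0, 300, "Introduction and motivation"), Chapter(300, 900, "Method overview")]
    pred = [Chapter(0, 280, "Introduction"), Chapter(280, 920, "Overview of the method")]
    print(f"GRACE-like score: {grace_like_score(pred, gt):.3f}")
```

In this toy version, several predicted chapters may all match the same ground-truth chapter (the many-to-one aspect), and near-synonymous titles still earn partial credit, which is the flexibility the abstract attributes to GRACE.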
📄 Paper Summary
ARC-Chapter: Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries
1️⃣ One-Sentence Summary
This paper presents ARC-Chapter, a video structuring model trained on a million-scale bilingual long-video dataset. It automatically segments hour-long videos (e.g., lectures, documentaries) into navigable chapters and generates hierarchical summaries, significantly outperforming the previous best methods on multiple metrics.