arXiv submission date: 2026-03-12
📄 Abstract - EVATok: Adaptive Length Video Tokenization for Efficient Visual Autoregressive Generation

Autoregressive (AR) video generative models rely on video tokenizers that compress pixels into discrete token sequences. The length of these token sequences is crucial for balancing reconstruction quality against downstream generation computational cost. Traditional video tokenizers apply a uniform token assignment across temporal blocks of different videos, often wasting tokens on simple, static, or repetitive segments while underserving dynamic or complex ones. To address this inefficiency, we introduce $\textbf{EVATok}$, a framework to produce $\textbf{E}$fficient $\textbf{V}$ideo $\textbf{A}$daptive $\textbf{Tok}$enizers. Our framework estimates optimal token assignments for each video to achieve the best quality-cost trade-off, develops lightweight routers for fast prediction of these optimal assignments, and trains adaptive tokenizers that encode videos based on the assignments predicted by routers. We demonstrate that EVATok delivers substantial improvements in efficiency and overall quality for video reconstruction and downstream AR generation. Enhanced by our advanced training recipe that integrates video semantic encoders, EVATok achieves superior reconstruction and state-of-the-art class-to-video generation on UCF-101, with at least 24.4% savings in average token usage compared to the prior state-of-the-art LARP and our fixed-length baseline.
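The abstract's core idea, spending fewer tokens on static segments and more on dynamic ones under a fixed overall budget, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the motion-score proxy for complexity, and the proportional allocation rule are all assumptions chosen for illustration.

```python
# Illustrative sketch (not EVATok's actual router): split a total token
# budget across temporal blocks in proportion to a per-block "complexity"
# score, so simple/static blocks receive only a small floor of tokens.
def allocate_tokens(motion_scores, total_budget, min_tokens=4):
    # motion_scores: one non-negative complexity value per temporal block
    n = len(motion_scores)
    assert total_budget >= n * min_tokens, "budget must cover the floor"
    spare = total_budget - n * min_tokens
    total = sum(motion_scores) or 1.0
    # each block gets the floor plus a proportional share of the spare budget
    alloc = [min_tokens + int(spare * s / total) for s in motion_scores]
    # hand out the integer-rounding remainder, most complex blocks first
    order = sorted(range(n), key=lambda i: -motion_scores[i])
    remainder = total_budget - sum(alloc)
    for k in range(remainder):
        alloc[order[k % n]] += 1
    return alloc

# A static block (score 0.0) keeps only the floor; dynamic blocks get more.
print(allocate_tokens([0.0, 1.0, 3.0], total_budget=64))  # → [4, 17, 43]
```

The allocation always sums exactly to the budget, mirroring the fixed-compute comparison against a fixed-length baseline; the real framework instead trains lightweight routers to predict near-optimal assignments.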

Top-level tags: video generation, model training, AIGC
Detailed tags: video tokenization, autoregressive generation, adaptive compression, computational efficiency, quality-cost trade-off

EVATok: Adaptive Length Video Tokenization for Efficient Visual Autoregressive Generation


1️⃣ One-sentence summary

This paper proposes EVATok, an adaptive video tokenization framework that allocates compression budget according to each video's content complexity. It maintains high-quality video reconstruction and generation while significantly reducing computational overhead, cutting average token usage by more than 24% compared with existing methods.

Source: arXiv 2603.12267