arXiv submission date: 2026-04-14
📄 Abstract - VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization

Visual tokenizers map high-dimensional raw pixels into a compressed representation for downstream modeling. Beyond compression, tokenizers dictate what information is preserved and how it is organized. A de facto standard approach to video tokenization is to represent a video as a spatiotemporal 3D grid of tokens, each capturing the corresponding local information in the original signal. This requires the downstream model that consumes the tokens, e.g., a text-to-video model, to learn to predict all low-level details "pixel-by-pixel" irrespective of the video's inherent complexity, leading to high learning complexity. We present VideoFlexTok, which represents videos with a variable-length sequence of tokens structured in a coarse-to-fine manner -- where the first tokens (emergently) capture abstract information, such as semantics and motion, and later tokens add fine-grained details. The generative flow decoder enables realistic video reconstructions from any token count. This representation structure allows adapting the token count according to downstream needs and encoding videos longer than the baselines with the same budget. We evaluate VideoFlexTok on class- and text-to-video generative tasks and show that it leads to more efficient training compared to 3D grid tokens, e.g., achieving comparable generation quality (gFVD and ViCLIP Score) with a 5x smaller model (1.1B vs 5.2B). Finally, we demonstrate how VideoFlexTok can enable long video generation without prohibitive computational cost by training a text-to-video model on 10-second 81-frame videos with only 672 tokens, 8x fewer than a comparable 3D grid tokenizer.
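The token-budget claim at the end of the abstract can be checked with a quick back-of-the-envelope sketch. The numbers (672 tokens, 8x fewer, 81 frames) come from the abstract; the prefix-truncation helper is a hypothetical illustration of the coarse-to-fine ordering, not the paper's actual API:

```python
# Token budgets quoted in the abstract for a 10-second, 81-frame clip.
flextok_budget = 672               # VideoFlexTok tokens
grid_budget = 8 * flextok_budget   # a comparable 3D grid tokenizer needs 8x more
print(grid_budget)                 # 5376 tokens for the same clip

# Because tokens are ordered coarse-to-fine, a downstream consumer can trade
# quality for compute by decoding only a prefix of the sequence.
def truncate_tokens(tokens, budget):
    """Keep the first `budget` tokens: early tokens carry semantics and motion,
    later ones add fine detail (hypothetical helper, not the paper's code)."""
    return tokens[:budget]

tokens = list(range(flextok_budget))       # stand-in for an encoded video
coarse_only = truncate_tokens(tokens, 64)  # input for a very coarse reconstruction
assert len(coarse_only) == 64
```

This adaptivity is the key contrast with a fixed 3D grid, where every clip of a given resolution and length costs the same number of tokens regardless of its content.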

Top tags: video generation, model training, multi-modal
Detailed tags: video tokenization, coarse-to-fine representation, generative modeling, efficient training, long video generation

VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization


1️⃣ One-Sentence Summary

This paper proposes a new video representation: instead of encoding a video as a fixed 3D grid, as traditional methods do, it encodes the video as a variable-length token sequence in which the earlier "coarse" tokens capture abstract information such as semantics and motion and the later "fine" tokens fill in details. This makes downstream AI models (e.g., text-to-video models) cheaper to train, lets them handle longer videos, and allows the models themselves to be smaller.

Source: arXiv 2604.12887