TV2TV: A Unified Framework for Interleaved Language and Video Generation
1️⃣ One-Sentence Summary
This paper proposes a new framework called TV2TV, which interleaves "thinking in words" steps into the video generation process, improving the quality and controllability of complex generated videos and enabling the model to better understand and follow user instructions.
Video generation models are rapidly advancing, but can still struggle with complex video outputs that require significant semantic branching or repeated high-level reasoning about what should happen next. In this paper, we introduce a new class of omni video-text models that integrate ideas from recent LM reasoning advances to address this challenge. More specifically, we present TV2TV, a unified generative modeling framework which decomposes video generation into an interleaved text and video generation process. TV2TV jointly learns language modeling (next-token prediction) and video flow matching (next-frame prediction) using a Mixture-of-Transformers (MoT) architecture. At inference time, TV2TV decides when to alternate between generating text and video frames, allowing the model to "think in words" about subsequent content before "acting in pixels" to produce frames. This design offloads much of the responsibility for deciding what should happen next to the language modeling tower, enabling improved visual quality and prompt alignment of generated videos. It also enables fine-grained controllability, allowing users to modify the video generation trajectory through text interventions at any point in the process. In controlled experiments on video game data, TV2TV demonstrates substantial improvements in both visual quality and controllability. TV2TV also scales to natural videos, as we show by augmenting sports videos with interleaved natural language action descriptions using vision-language models (VLMs). Training TV2TV on this corpus yields strong visual quality and prompt alignment, showcasing the model's ability to reason about and generate complex real-world action sequences. Together, these results highlight TV2TV as a promising step toward video generation with open-ended textual reasoning and control.
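To make the interleaved decoding idea concrete, below is a minimal, runnable Python sketch of how inference might alternate between a language tower (next-token prediction) and a video tower (flow-matching frame prediction). All names here (`InterleavedState`, `sample_text_token`, `denoise_next_frame`, the `<video>` switch token, the switch-back heuristic) are illustrative assumptions for the sketch, not the paper's actual API; the real model learns when to switch rather than following fixed rules.

```python
# Illustrative sketch of TV2TV-style interleaved text/video inference.
# The towers are stubbed out; a real implementation would run the MoT
# language tower and the flow-matching video tower at these call sites.

from dataclasses import dataclass, field


@dataclass
class InterleavedState:
    """Joint context shared by both towers: reasoning text plus generated frames."""
    text_tokens: list = field(default_factory=list)
    frames: list = field(default_factory=list)


def sample_text_token(state: InterleavedState) -> str:
    """Hypothetical LM-tower call: sample the next text token from the joint context."""
    # Placeholder logic: emit a switch token every 8 tokens to hand off to the video tower.
    return "<video>" if len(state.text_tokens) % 8 == 7 else f"tok{len(state.text_tokens)}"


def denoise_next_frame(state: InterleavedState) -> str:
    """Hypothetical video-tower call: produce the next frame via flow-matching denoising."""
    # Placeholder logic: a real implementation would integrate the learned flow field.
    return f"frame{len(state.frames)}"


def should_switch_to_text(state: InterleavedState) -> bool:
    """Hypothetical predicate: the model decides when to pause and 'think in words' again."""
    return len(state.frames) % 16 == 15


def generate(prompt: str, max_frames: int = 64) -> InterleavedState:
    """Alternate between text and frame generation until enough frames are produced."""
    state = InterleavedState(text_tokens=prompt.split())
    mode = "text"  # start by reasoning about what should happen next
    while len(state.frames) < max_frames:
        if mode == "text":
            tok = sample_text_token(state)
            if tok == "<video>":          # LM tower signals it is done reasoning
                mode = "video"
            else:
                state.text_tokens.append(tok)
        else:
            state.frames.append(denoise_next_frame(state))
            if should_switch_to_text(state):  # hand control back to the LM tower
                mode = "text"
    return state


if __name__ == "__main__":
    out = generate("a goalkeeper dives to save a penalty kick")
    print(f"{len(out.frames)} frames, {len(out.text_tokens)} text tokens")
```

In this view, the text interventions described in the abstract correspond to the user appending their own tokens to the shared text context while the model is in text mode, steering the frames generated afterward.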