Deep Forcing: Training-Free Long Video Generation with Deep Sink and Participative Compression
1️⃣ One-Sentence Summary
This paper proposes a method that, without any additional training, fixes the temporal repetition, quality degradation, and motion slowdown that plague AI-generated long videos by optimizing the model's internal memory (KV cache) management, enabling real-time generation of coherent, high-quality videos more than 12x longer than the training length.
Recent advances in autoregressive video diffusion have enabled real-time frame streaming, yet existing solutions still suffer from temporal repetition, drift, and motion deceleration. We find that naively applying StreamingLLM-style attention sinks to video diffusion leads to fidelity degradation and motion stagnation. To overcome this, we introduce Deep Forcing, which consists of two mechanisms that address these issues without any fine-tuning. Specifically, 1) Deep Sink dedicates half of the sliding window to persistent sink tokens and re-aligns their temporal RoPE phase to the current timeline, stabilizing global context during long rollouts. 2) Participative Compression performs importance-aware KV cache pruning that preserves only tokens actively participating in recent attention while safely discarding redundant and degraded history, minimizing error accumulation under out-of-distribution length generation. Together, these components enable over 12x extrapolation (e.g., from 5s training clips to 60s+ generation) with better imaging quality than LongLive, better aesthetic quality than RollingForcing, largely preserved overall consistency, and substantial gains in dynamic degree, all while maintaining real-time generation. Our results demonstrate that training-free KV-cache management can match or exceed training-based approaches for autoregressively streaming long-video generation.
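The two mechanisms can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the helper names (`realign_rope_phase`, `participative_prune`), the standard RoPE frequency schedule, and the "sum of recent attention mass" participation score are assumptions made for illustration; the paper only states that sink tokens get their temporal RoPE phase re-aligned to the current timeline and that cached tokens are pruned by how actively they participate in recent attention.

```python
import numpy as np

def realign_rope_phase(k_sink, old_pos, new_pos):
    """Deep Sink sketch: rotate cached sink keys so their RoPE phase
    matches the current timeline (assumes standard RoPE with base 10000).
    k_sink: (n_sink, dim) cached keys; old_pos/new_pos: (n_sink,) indices."""
    half = k_sink.shape[-1] // 2
    freqs = 1.0 / (10000.0 ** (np.arange(half) / half))
    # Phase shift needed to move each token from its old to its new position.
    delta = (new_pos - old_pos)[:, None] * freqs[None, :]
    cos, sin = np.cos(delta), np.sin(delta)
    x1, x2 = k_sink[..., :half], k_sink[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def participative_prune(keys, values, attn_weights, keep):
    """Participative Compression sketch: keep only the `keep` cached tokens
    with the highest recent attention participation.
    attn_weights: (n_queries, n_cached) attention from recent queries."""
    participation = attn_weights.sum(axis=0)           # mass each token received
    kept = np.sort(np.argsort(participation)[-keep:])  # top-k, temporal order kept
    return keys[kept], values[kept], kept

# Usage: re-aligning with an unchanged position is a no-op rotation,
# and pruning retains exactly the most-attended tokens.
rng = np.random.default_rng(0)
k = rng.standard_normal((4, 8))
k_same = realign_rope_phase(k, np.arange(4), np.arange(4))
keys, vals = rng.standard_normal((10, 8)), rng.standard_normal((10, 8))
attn = rng.random((3, 10))
k_kept, v_kept, idx = participative_prune(keys, vals, attn, keep=5)
```

The design intuition from the abstract: sink tokens would otherwise carry stale positional phases after long rollouts, and low-participation history is the redundant, degraded content whose removal limits error accumulation at out-of-distribution lengths.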