Steering Video Diffusion Transformers with Massive Activations
1️⃣ One-Sentence Summary
This paper finds that video diffusion transformers contain rare but high-magnitude "massive activation" signals whose magnitude distribution closely tracks the temporal structure of the video frames. Building on this observation, it proposes a training-free steering method with negligible computational overhead that improves both the quality and the temporal coherence of generated videos.
Despite rapid progress in video diffusion transformers, how their internal signals can be leveraged with minimal overhead to enhance video generation quality remains underexplored. In this work, we study the role of Massive Activations (MAs): rare, high-magnitude hidden-state spikes in video diffusion transformers. We observe that MAs emerge consistently across all visual tokens, with a clear magnitude hierarchy: first-frame tokens exhibit the largest MA magnitudes; latent-frame boundary tokens (the head and tail portions of each temporal chunk in the latent space) show elevated but slightly lower MA magnitudes than the first frame; and interior tokens within each latent frame remain elevated but comparatively moderate in magnitude. This structured pattern suggests that the model implicitly prioritizes token positions aligned with the temporal chunking in the latent space. Based on this observation, we propose Structured Activation Steering (STAS), a training-free, self-guidance-like method that steers MA values at first-frame and boundary tokens toward a scaled global-maximum reference magnitude. STAS achieves consistent improvements in video quality and temporal coherence across different text-to-video models, while introducing negligible computational overhead.
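To make the steering idea concrete, here is a minimal sketch of what "pushing massive activations at selected token positions toward a scaled global-maximum reference" could look like. This is a hypothetical illustration, not the paper's code: the function name `steer_massive_activations`, the scale factor `alpha`, and the `top_k` channel selection are all assumptions for the sake of the example.

```python
import numpy as np

def steer_massive_activations(hidden, first_idx, boundary_idx,
                              alpha=0.9, top_k=2):
    """Hypothetical STAS-style steering sketch (not the paper's implementation).

    hidden       : (num_tokens, dim) hidden states of one transformer layer.
    first_idx    : token positions belonging to the first latent frame.
    boundary_idx : head/tail token positions of each temporal chunk.
    alpha        : fraction of the global maximum used as the reference.
    top_k        : channels per token treated as massive activations.
    """
    steered = hidden.copy()
    # Reference magnitude: a scaled version of the single largest activation.
    ref = alpha * np.abs(hidden).max()
    for idx in list(first_idx) + list(boundary_idx):
        token = steered[idx]
        # Massive activations: the few channels with the largest magnitude.
        ma_channels = np.argsort(np.abs(token))[-top_k:]
        # Push their magnitude toward the reference, preserving sign.
        token[ma_channels] = np.sign(token[ma_channels]) * ref
    return steered
```

In practice such a function would be applied inside the transformer (e.g. via a forward hook on selected layers) at inference time only, which is why the overhead is negligible: it is a per-token rescaling of a handful of channels, with no extra forward passes or training.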
Source: arXiv: 2603.17825