Focal Guidance: Unlocking Controllability from Semantic-Weak Layers in Video Diffusion Models
1️⃣ One-Sentence Summary
This work proposes a new method called Focal Guidance, which identifies and strengthens the "Semantic-Weak Layers" of a video generation model — intermediate layers that respond only weakly to text instructions — to effectively improve the accuracy and controllability of text-conditioned video generation.
The task of Image-to-Video (I2V) generation aims to synthesize a video from a reference image and a text prompt. This requires diffusion models to reconcile high-frequency visual constraints with low-frequency textual guidance during the denoising process. However, while existing I2V models prioritize visual consistency, how to effectively couple this dual guidance to ensure strong adherence to the text prompt remains underexplored. In this work, we observe that in Diffusion Transformer (DiT)-based I2V models, certain intermediate layers exhibit weak semantic responses (termed Semantic-Weak Layers), as indicated by a measurable drop in text-visual similarity. We attribute this to a phenomenon called Condition Isolation, where attention to visual features becomes partially detached from text guidance and relies excessively on learned visual priors. To address this, we propose Focal Guidance (FG), which enhances controllability from Semantic-Weak Layers. FG comprises two mechanisms: (1) Fine-grained Semantic Guidance (FSG) leverages CLIP to identify key regions in the reference frame and uses them as anchors to guide Semantic-Weak Layers. (2) Attention Cache transfers attention maps from semantically responsive layers to Semantic-Weak Layers, injecting explicit semantic signals and alleviating their over-reliance on the model's learned visual priors, thereby improving adherence to textual instructions. To further validate our approach and address the lack of evaluation in this direction, we introduce a benchmark for assessing instruction following in I2V models. On this benchmark, Focal Guidance proves its effectiveness and generalizability, raising the total score on Wan2.1-I2V to 0.7250 (+3.97%) and boosting the MMDiT-based HunyuanVideo-I2V to 0.5571 (+7.44%).
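The abstract does not give implementation details, but the two core ideas are easy to picture in code. Below is a minimal sketch, assuming a DiT-style model that exposes per-layer visual token features and attention maps; every function and parameter name here (`find_semantic_weak_layers`, `transfer_attention`, `drop_ratio`, `alpha`) is hypothetical and not taken from the paper. It illustrates (1) flagging Semantic-Weak Layers by a drop in text-visual similarity and (2) the Attention Cache idea of blending an attention map from a semantically responsive layer into a weak one.

```python
import torch
import torch.nn.functional as F

def find_semantic_weak_layers(text_emb, visual_embs, drop_ratio=0.5):
    """Flag layers whose text-visual cosine similarity falls well below
    the cross-layer average (a simple proxy for 'Semantic-Weak').

    text_emb:    (d,) pooled text embedding
    visual_embs: list of (n_tokens, d) per-layer visual token features
    """
    sims = []
    for v in visual_embs:
        # Mean cosine similarity between the text embedding and this
        # layer's visual tokens; broadcasting handles the (1, d) vs (n, d) shapes.
        s = F.cosine_similarity(v, text_emb.unsqueeze(0), dim=-1).mean()
        sims.append(s.item())
    mean_sim = sum(sims) / len(sims)
    # A layer counts as 'weak' if its similarity drops below a fraction
    # of the average; the 0.5 threshold is an illustrative choice.
    return [i for i, s in enumerate(sims) if s < drop_ratio * mean_sim]

def transfer_attention(cached_attn, weak_attn, alpha=0.5):
    """Attention Cache sketch: blend a cached attention map from a
    semantically responsive layer into a Semantic-Weak Layer's map.

    cached_attn, weak_attn: (heads, n_query, n_key) attention probabilities
    """
    blended = alpha * cached_attn + (1.0 - alpha) * weak_attn
    # Renormalize so each query's attention weights still sum to 1.
    return blended / blended.sum(dim=-1, keepdim=True)
```

In a real pipeline, the weak-layer detection would run once offline (or on a calibration set), and the blending would be applied inside those layers' attention modules at denoising time; how the paper schedules and weights this transfer is not specified in the abstract.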
Source: arXiv 2601.07287