arXiv submission date: 2026-01-02
📄 Abstract - Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation

Talking head generation creates lifelike avatars from static portraits for virtual communication and content creation. However, current models do not yet convey the feeling of truly interactive communication, often generating one-way responses that lack emotional engagement. We identify two key challenges toward truly interactive avatars: generating motion in real time under causal constraints and learning expressive, vibrant reactions without additional labeled data. To address these challenges, we propose Avatar Forcing, a new framework for interactive head avatar generation that models real-time user-avatar interactions through diffusion forcing. This design allows the avatar to process real-time multimodal inputs, including the user's audio and motion, with low latency for instant reactions to both verbal and non-verbal cues such as speech, nods, and laughter. Furthermore, we introduce a direct preference optimization method that leverages synthetic losing samples constructed by dropping user conditions, enabling label-free learning of expressive interaction. Experimental results demonstrate that our framework enables real-time interaction with low latency (approximately 500 ms), achieving a 6.8× speedup over the baseline, and produces reactive, expressive avatar motion that is preferred in over 80% of comparisons against the baseline.
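The preference optimization idea is concrete enough to sketch. Below is a minimal PyTorch sketch under stated assumptions: the generator signature `model(avatar_audio, user_audio, user_motion)`, zeroing tensors as the "condition dropping", and precomputed sequence log-probabilities are hypothetical stand-ins, not the paper's actual implementation; the objective itself is the standard DPO loss that the abstract builds on.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_win, policy_logp_lose,
             ref_logp_win, ref_logp_lose, beta=0.1):
    """Standard DPO objective (Rafailov et al., 2023): push the policy
    to prefer the winning sample over the losing one, measured relative
    to a frozen reference model."""
    margin = beta * ((policy_logp_win - ref_logp_win)
                     - (policy_logp_lose - ref_logp_lose))
    return -F.logsigmoid(margin).mean()

def build_preference_pair(model, avatar_audio, user_audio, user_motion):
    """Label-free pair construction in the spirit of the abstract:
    the winning sample sees the full user conditions, the losing sample
    has them dropped (zeroed here, as one plausible choice), so no human
    preference annotation is needed."""
    win = model(avatar_audio, user_audio, user_motion)        # reactive motion
    lose = model(avatar_audio,
                 torch.zeros_like(user_audio),                # drop user audio
                 torch.zeros_like(user_motion))               # drop user motion
    return win, lose
```

Constructing the losing sample by merely ablating the user inputs is what makes the method label-free: the model with user conditions intact is, by construction, the more interactive of the pair, so no annotator has to judge which reaction is better.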

Top-level tags: computer vision, multi-modal, AIGC
Detailed tags: talking head generation, real-time interaction, diffusion models, avatar animation, preference optimization

Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation


1️⃣ One-Sentence Summary

This paper proposes a new method called "Avatar Forcing" that generates expressive, naturally reactive head avatars in real time from the user's speech and motion, markedly improving the interactive realism and responsiveness of virtual communication.
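To make the real-time claim concrete, here is a minimal, hypothetical sketch of causal per-chunk generation in the spirit of diffusion forcing; the `denoise` method, chunk size, few-step schedule, and `motion_dim` attribute are illustrative assumptions, not the paper's API.

```python
import torch

@torch.no_grad()
def stream_avatar(model, mic_chunks, cam_chunks, chunk_frames=12, steps=4):
    """Causal per-chunk generation: each chunk of avatar motion is
    denoised conditioned only on already generated frames plus the
    user's latest audio/motion, so frames are emitted as soon as their
    chunk is ready instead of after the full sequence."""
    history = []  # previously generated avatar motion = the causal context
    for user_audio, user_motion in zip(mic_chunks, cam_chunks):
        # start the new chunk from pure noise (shape is hypothetical)
        x = torch.randn(chunk_frames, model.motion_dim)
        context = torch.cat(history[-4:], dim=0) if history else None
        for t in reversed(range(steps)):  # few-step denoising for real time
            x = model.denoise(x, t, context=context,
                              user_audio=user_audio, user_motion=user_motion)
        history.append(x)
        yield x  # emit immediately: latency is roughly one chunk
```

Because each chunk is emitted as soon as it is denoised, latency is bounded by one chunk of frames plus a few denoising steps rather than by the whole utterance, which is what makes a sub-second (~500 ms) response plausible.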

Source: arXiv:2601.00664