FreeTalk: Emotional Topology-Free 3D Talking Heads
1️⃣ One-sentence summary
This paper proposes FreeTalk, a two-stage framework that generates realistic 3D talking-head animations with emotional dynamics for meshes of arbitrary topology (e.g., unprocessed 3D face scans), driven only by speech audio and a specified emotion, without relying on a predefined mesh template.
Speech-driven 3D facial animation has advanced rapidly, yet most approaches remain tied to registered template meshes, preventing effective deployment on raw 3D scans with arbitrary topology. At the same time, modeling controllable emotional dynamics beyond lip articulation remains challenging, and is often tied to template-based parameterizations. We address these challenges by proposing FreeTalk, a two-stage framework for emotion-conditioned 3D talking-head animation that generalizes to unregistered face meshes with arbitrary vertex count and connectivity. First, Audio-To-Sparse (ATS) predicts a temporally coherent sequence of 3D landmark displacements from speech audio, conditioned on an emotion category and intensity. This sparse representation captures both articulatory and affective motion while remaining independent of mesh topology. Second, Sparse-To-Mesh (STM) transfers the predicted landmark motion to a target mesh by combining intrinsic surface features with landmark-to-vertex conditioning, producing dense per-vertex deformations without template fitting or correspondence supervision at test time. Extensive experiments show that FreeTalk matches specialized baselines when trained in-domain, while providing substantially improved robustness to unseen identities and mesh topologies. Code and pre-trained models will be made publicly available.
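The two-stage pipeline from the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the function names, tensor shapes, the random "network" in ATS, and the Gaussian distance-based landmark-to-vertex weighting in STM are all assumptions. What it does capture is the key interface property: the sparse landmark stage is topology-independent, and the dense stage works for any vertex count.

```python
import numpy as np

def audio_to_sparse(audio_feats, emotion_id, intensity, n_landmarks=68, rng=None):
    """Stage 1 (ATS) stand-in: map per-frame audio features plus an emotion
    condition (category id + intensity) to 3D landmark displacements.
    audio_feats: (T, F) array -> returns (T, n_landmarks, 3).
    The random projection below is a placeholder for the learned network."""
    rng = rng or np.random.default_rng(0)
    T, F = audio_feats.shape
    W = rng.standard_normal((F, n_landmarks * 3)) * 0.01      # toy "weights"
    disp = (audio_feats @ W).reshape(T, n_landmarks, 3)
    # Toy emotion conditioning: scale by intensity, shift by category id.
    return disp * (1.0 + intensity) + 0.001 * emotion_id

def sparse_to_mesh(landmark_disp, landmarks, vertices, sigma=1.0):
    """Stage 2 (STM) stand-in: transfer landmark displacements to dense
    per-vertex deformations via normalized Gaussian distance weights,
    so any vertex count / connectivity works (no template fitting).
    landmark_disp: (T, L, 3); landmarks: (L, 3); vertices: (V, 3)."""
    d2 = ((vertices[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)  # (V, L)
    # Subtract the per-row minimum before exp for numerical stability.
    w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                                   # rows sum to 1
    # Blend landmark motion per vertex: (V, L) x (T, L, 3) -> (T, V, 3)
    return np.einsum('vl,tlc->tvc', w, landmark_disp)
```

Because the weights in `sparse_to_mesh` depend only on vertex positions relative to the landmarks, the same predicted landmark sequence can animate meshes with different vertex counts; the bandwidth `sigma` is an assumed free parameter, whereas the paper learns this transfer from intrinsic surface features.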
Source: arXiv: 2603.15512