Abstract - Bridging Semantic and Kinematic Conditions with Diffusion-based Discrete Motion Tokenizer
Prior motion generation largely follows two paradigms: continuous diffusion models that excel at kinematic control, and discrete token-based generators that are effective for semantic conditioning. To combine their strengths, we propose a three-stage framework comprising condition feature extraction (Perception), discrete token generation (Planning), and diffusion-based motion synthesis (Control). Central to this framework is MoTok, a diffusion-based discrete motion tokenizer that decouples semantic abstraction from fine-grained reconstruction by delegating motion recovery to a diffusion decoder, enabling compact single-layer tokens while preserving motion fidelity. For kinematic conditions, coarse constraints guide token generation during planning, while fine-grained constraints are enforced during control through diffusion-based optimization. This design prevents kinematic details from disrupting semantic token planning. On HumanML3D, our method significantly improves controllability and fidelity over MaskControl while using only one-sixth of the tokens, reducing trajectory error from 0.72 cm to 0.08 cm and FID from 0.083 to 0.029. Unlike prior methods that degrade under stronger kinematic constraints, ours improves fidelity, reducing FID from 0.033 to 0.014.
Bridging Semantic and Kinematic Conditions with Diffusion-based Discrete Motion Tokenizer
1️⃣ One-sentence summary
This paper proposes a three-stage framework built around MoTok, a diffusion-based discrete motion tokenizer. It combines discrete models, which excel at semantic control, with continuous diffusion models, which excel at fine-grained kinematic control. As a result, generated human motion can both follow high-level semantic instructions and precisely satisfy detailed kinematic requirements, substantially improving generation quality and controllability.
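To make the Perception → Planning → Control flow concrete, here is a minimal, purely illustrative Python sketch of the three-stage pipeline. All class names, shapes, codebook sizes, and token counts are assumptions for illustration only, not the paper's actual API or architecture; the real stages would be learned neural networks.

```python
# Hypothetical sketch of the three-stage pipeline (Perception -> Planning -> Control).
# Every name, shape, and number here is an illustrative assumption, not the paper's code.
import random


class Perception:
    """Stage 1: extract condition features from the text prompt."""

    def extract(self, text, coarse_kinematics):
        # Stand-in for a condition encoder: a fixed-size feature vector.
        features = [(hash(text) % 97) / 97.0] * 8
        return features, coarse_kinematics


class Planning:
    """Stage 2: generate a compact single-layer sequence of discrete tokens,
    guided by semantics and coarse kinematic constraints."""

    def __init__(self, codebook_size=512, num_tokens=49):
        self.codebook_size = codebook_size
        self.num_tokens = num_tokens

    def generate_tokens(self, features, coarse_kinematics):
        # Placeholder for a masked/autoregressive token generator.
        rng = random.Random(int(sum(features) * 1000))
        return [rng.randrange(self.codebook_size) for _ in range(self.num_tokens)]


class Control:
    """Stage 3: a diffusion decoder recovers continuous motion from the tokens,
    enforcing fine-grained kinematic constraints during sampling."""

    def synthesize(self, tokens, fine_constraints, frames=196, dims=263):
        # Placeholder: upsample tokens to one pose vector per frame.
        repeat = frames // len(tokens)
        motion = [[t / 512.0] * dims for t in tokens for _ in range(repeat)]
        # A real decoder would apply diffusion-based optimization here to
        # satisfy `fine_constraints` (e.g. exact joint trajectories).
        return motion


def generate_motion(text, coarse_kin, fine_kin):
    features, coarse = Perception().extract(text, coarse_kin)
    tokens = Planning().generate_tokens(features, coarse)
    return Control().synthesize(tokens, fine_kin)


motion = generate_motion("a person walks in a circle", coarse_kin=None, fine_kin=None)
print(len(motion), len(motion[0]))  # frames x pose dimensions
```

The key design point the sketch mirrors is the split of responsibilities: coarse constraints influence token generation in Planning, while fine-grained constraints are deferred to the diffusion decoder in Control, so kinematic detail never disrupts semantic token planning.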