arXiv submission date: 2026-04-09
📄 Abstract - Coordinate-Based Dual-Constrained Autoregressive Motion Generation

Text-to-motion generation has attracted increasing attention in the research community recently, with potential applications in animation, virtual reality, robotics, and human-computer interaction. Diffusion and autoregressive models are two popular and parallel research directions for text-to-motion generation. However, diffusion models often suffer from error amplification during noise prediction, while autoregressive models exhibit mode collapse due to motion discretization. To address these limitations, we propose a flexible, high-fidelity, and semantically faithful text-to-motion framework, named Coordinate-based Dual-constrained Autoregressive Motion Generation (CDAMD). With motion coordinates as input, CDAMD follows the autoregressive paradigm and leverages diffusion-inspired multi-layer perceptrons to enhance the fidelity of predicted motions. Furthermore, a Dual-Constrained Causal Mask is introduced to guide autoregressive generation, where motion tokens act as priors and are concatenated with textual encodings. Since there is limited work on coordinate-based motion synthesis, we establish new benchmarks for both text-to-motion generation and motion editing. Experimental results demonstrate that our approach achieves state-of-the-art performance in terms of both fidelity and semantic consistency on these benchmarks.
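The abstract describes autoregressive generation guided by a Dual-Constrained Causal Mask, with motion tokens acting as priors concatenated after the textual encodings. One plausible reading (a sketch only, not the paper's actual implementation; the function name and the exact masking rule are assumptions) is an attention mask in which the text prefix is visible to every position while motion positions attend causally:

```python
import numpy as np

def dual_constrained_causal_mask(num_text: int, num_motion: int) -> np.ndarray:
    """Hypothetical attention mask (1 = may attend, 0 = blocked).

    Assumption: text encodings occupy the first `num_text` positions and are
    fully visible to all positions, while the `num_motion` motion tokens that
    follow attend only to the text prefix and to earlier motion tokens.
    """
    n = num_text + num_motion
    mask = np.zeros((n, n), dtype=np.int64)
    # Every position may attend to the text prefix (the conditioning signal).
    mask[:, :num_text] = 1
    # Motion positions attend causally among themselves.
    for i in range(num_motion):
        mask[num_text + i, num_text : num_text + i + 1] = 1
    return mask

mask = dual_constrained_causal_mask(num_text=2, num_motion=3)
print(mask)
```

For `num_text=2, num_motion=3` this yields a 5x5 mask whose motion rows grow one extra visible position at a time, i.e. standard causal attention restricted to the motion suffix, conditioned on the full text prefix.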

Top-level tags: natural language processing, multi-modal, model training
Detailed tags: text-to-motion, autoregressive models, motion generation, coordinate-based, motion editing

Coordinate-Based Dual-Constrained Autoregressive Motion Generation


1️⃣ One-sentence summary

This paper proposes a new method named CDAMD that combines the strengths of autoregressive and diffusion models; by using coordinate inputs and a dual-constraint mechanism, it significantly improves the fidelity and semantic accuracy of human motion generated from text descriptions.

Source: arXiv:2604.08088