Abstract - Not Like Transformers: Drop the Beat Representation for Dance Generation with Mamba-Based Diffusion Model
Dance is a form of human motion characterized by emotional expression and communication, playing a role in fields such as music, virtual reality, and content creation. Existing dance generation methods often fail to adequately capture the inherently sequential, rhythmic, and music-synchronized characteristics of dance. In this paper, we propose \emph{MambaDance}, a new dance generation approach that leverages a Mamba-based diffusion model. Mamba, well suited to handling long and autoregressive sequences, is integrated into our two-stage diffusion architecture, substituting for the off-the-shelf Transformer. Additionally, given the critical role of musical beats in dance choreography, we propose a Gaussian-based beat representation to explicitly guide the decoding of dance sequences. Experiments on the AIST++ and FineDance datasets across sequence lengths show that, compared with previous methods, our approach effectively generates plausible dance movements that reflect these essential characteristics, consistently from short to long dances. Additional qualitative results and demo videos are available at \small{this https URL}.
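The abstract mentions a Gaussian-based beat representation that explicitly guides decoding, but does not spell out its form. A minimal sketch of one plausible construction, assuming beat times have already been converted to frame indices and that each beat contributes a Gaussian bump to a per-frame saliency curve (the function name, `sigma` parameter, and clipping are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def gaussian_beat_signal(beat_frames, num_frames, sigma=2.0):
    """Hypothetical Gaussian beat representation: place a Gaussian
    bump (std = sigma frames) at each beat frame and sum them into
    a per-frame curve in [0, 1] that a decoder could condition on."""
    t = np.arange(num_frames, dtype=np.float64)
    signal = np.zeros(num_frames, dtype=np.float64)
    for b in beat_frames:
        # Each beat contributes exp(-(t - b)^2 / (2 * sigma^2))
        signal += np.exp(-0.5 * ((t - b) / sigma) ** 2)
    # Clip overlapping bumps so the curve stays a soft indicator
    return np.clip(signal, 0.0, 1.0)
```

Such a soft curve peaks at 1.0 on beat frames and decays smoothly between them, giving the model a dense, differentiable rhythm signal rather than a sparse binary beat mask.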
Not Like Transformers: Drop the Beat Representation for Dance Generation with Mamba-Based Diffusion Model
1️⃣ One-Sentence Summary
This paper proposes a new method called MambaDance, which replaces the conventional Transformer with Mamba, a model well suited to processing long sequences, and combines it with a Gaussian-based beat representation to guide generation, thereby producing dance movements that stay synchronized with the music's rhythm and remain coherent from short to long sequences.