Paper Title

Music2Dance: DanceNet for Music-driven Dance Generation

Paper Authors

Wenlin Zhuang, Congyi Wang, Siyu Xia, Jinxiang Chai, Yangang Wang

Paper Abstract

Synthesizing human motions from music, i.e., music to dance, is appealing and has attracted a lot of research interest in recent years. It is challenging due to not only the requirement of realistic and complex human motions for dance, but, more importantly, that the synthesized motions should be consistent with the style, rhythm, and melody of the music. In this paper, we propose a novel autoregressive generative model, DanceNet, that takes the style, rhythm, and melody of music as control signals to generate 3D dance motions with high realism and diversity. To boost the performance of our proposed model, we capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset. Experiments demonstrate that the proposed method achieves state-of-the-art results.
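
The abstract only describes the model at a high level. As a rough orientation, the sketch below illustrates the general idea of an autoregressive motion generator conditioned on per-frame music features; it is not the authors' DanceNet, and all class names, dimensions, and feature choices are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's DanceNet): an autoregressive model
# that predicts the next 3D pose from previous poses, conditioned on per-frame
# music features (e.g., style / rhythm / melody descriptors). All names and
# dimensions are assumptions for illustration.

import torch
import torch.nn as nn


class MusicConditionedAutoregressor(nn.Module):
    def __init__(self, pose_dim: int = 72, music_dim: int = 32, hidden_dim: int = 256):
        super().__init__()
        # Fuse the previous pose with the current music features at every step.
        self.gru = nn.GRU(pose_dim + music_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, pose_dim)  # predict the next pose

    def forward(self, poses: torch.Tensor, music: torch.Tensor) -> torch.Tensor:
        # poses: (batch, T, pose_dim)   ground-truth motion (teacher forcing)
        # music: (batch, T, music_dim)  per-frame music control features
        x = torch.cat([poses, music], dim=-1)
        h, _ = self.gru(x)
        return self.head(h)  # predicted pose at t+1 for every frame t

    @torch.no_grad()
    def generate(self, seed_pose: torch.Tensor, music: torch.Tensor) -> torch.Tensor:
        # Autoregressive rollout: feed each predicted pose back in as the next input.
        batch, steps, _ = music.shape
        pose, state, out = seed_pose, None, []
        for t in range(steps):
            x = torch.cat([pose, music[:, t]], dim=-1).unsqueeze(1)
            h, state = self.gru(x, state)
            pose = self.head(h[:, -1])
            out.append(pose)
        return torch.stack(out, dim=1)  # (batch, steps, pose_dim)


if __name__ == "__main__":
    model = MusicConditionedAutoregressor()
    music = torch.randn(1, 120, 32)   # hypothetical per-frame music features
    seed = torch.zeros(1, 72)         # initial pose
    motion = model.generate(seed, music)
    print(motion.shape)               # torch.Size([1, 120, 72])
```

In this kind of setup, the music features act as the control signal at every generation step, which is what lets the synthesized motion track the rhythm and style of the input audio rather than drifting into unconditioned motion.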
