Paper Title

SoMoFormer: Social-Aware Motion Transformer for Multi-Person Motion Prediction

Paper Authors

Xiaogang Peng, Yaodi Shen, Haoran Wang, Binling Nie, Yigang Wang, Zizhao Wu

Paper Abstract


Multi-person motion prediction remains a challenging problem, especially in the joint representation learning of individual motion and social interactions. Most prior methods only involve learning local pose dynamics for individual motion (without global body trajectory) and also struggle to capture complex interaction dependencies for social interactions. In this paper, we propose a novel Social-Aware Motion Transformer (SoMoFormer) to effectively model individual motion and social interactions in a joint manner. Specifically, SoMoFormer extracts motion features from sub-sequences in displacement trajectory space to effectively learn both local and global pose dynamics for each individual. In addition, we devise a novel social-aware motion attention mechanism in SoMoFormer to further optimize dynamics representations and capture interaction dependencies simultaneously via motion similarity calculation across time and social dimensions. On both short- and long-term horizons, we empirically evaluate our framework on multi-person motion datasets and demonstrate that our method greatly outperforms state-of-the-art methods of single- and multi-person motion prediction. Code will be made publicly available upon acceptance.
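The abstract names two core ideas: extracting motion features in displacement trajectory space (so local pose dynamics and global body trajectory are learned together), and computing attention via motion similarity jointly across the time and social (person) dimensions. The paper's code is not yet released, so the following is only a minimal NumPy sketch of those two ideas; the function names, tensor shapes, and the use of plain scaled dot-product attention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def to_displacements(poses):
    """Convert absolute joint positions of shape (persons, frames, joints, 3)
    into per-frame displacements, so that both local pose dynamics and the
    global body trajectory are encoded in one representation
    (illustrative; not the paper's exact formulation)."""
    return np.diff(poses, axis=1)  # (P, T-1, J, 3)

def social_motion_attention(tokens):
    """Toy scaled dot-product self-attention over per-frame motion tokens
    flattened across BOTH the time and person (social) axes, so similarity
    is computed jointly across the two dimensions, as the abstract describes."""
    P, T, D = tokens.shape
    x = tokens.reshape(P * T, D)                     # joint time x social axis
    scores = x @ x.T / np.sqrt(D)                    # motion-similarity logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over all tokens
    return (w @ x).reshape(P, T, D)

# Example: 3 people, 10 frames, 15 joints in 3D (hypothetical sizes).
poses = np.random.randn(3, 10, 15, 3)
disp = to_displacements(poses)                       # (3, 9, 15, 3)
tokens = disp.reshape(3, 9, -1)                      # per-frame motion tokens
attended = social_motion_attention(tokens)           # (3, 9, 45)
```

In a real model, the tokens would be learned embeddings of the displacement sub-sequences and the attention would use projected queries, keys, and values; the sketch only shows where the time and social dimensions are merged before the similarity computation.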
