Paper Title
Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach
Paper Authors
Paper Abstract
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to a person's limbs. The method operates by first detecting 2D poses from the two signals, and then lifting them to 3D space. We present a geometric approach that reinforces the visual features of each pair of joints based on the IMUs. This notably improves 2D pose estimation accuracy, especially when one joint is occluded. We call this approach the Orientation Regularized Network (ORN). We then lift the multi-view 2D poses to 3D space with an Orientation Regularized Pictorial Structure Model (ORPSM), which jointly minimizes the projection error between the 3D and 2D poses and the discrepancy between the 3D pose and the IMU orientations. This simple two-step approach reduces the error of the state of the art by a large margin on a public dataset. Our code will be released at https://github.com/CHUNYUWANG/imu-human-pose-pytorch.
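To make the second step concrete, below is a minimal NumPy sketch of an ORPSM-style energy as described in the abstract: it scores a candidate 3D pose by combining (a) the reprojection error against the per-view 2D detections and (b) the angular discrepancy between limb directions and IMU orientations. This is not the authors' released implementation; the joint indices in LIMBS, the helper names, and the cosine-distance form of the orientation term are illustrative assumptions.

import numpy as np

# Hypothetical joint-index pairs (parent, child) for limbs with IMUs attached.
LIMBS = [(5, 6), (11, 12)]

def project(P, X):
    """Pinhole projection of 3D joints X (J, 3) with a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coordinates
    x = Xh @ P.T                                   # (J, 3) image-plane points
    return x[:, :2] / x[:, 2:3]                    # divide by depth

def orpsm_energy(pose3d, poses2d, cameras, imu_dirs, lam=1.0):
    """pose3d: (J, 3) candidate 3D pose; poses2d: list of (J, 2) detections,
    one per view; cameras: list of 3x4 projection matrices; imu_dirs: dict
    mapping a limb (i, j) to a unit 3-vector measured by its IMU."""
    # (a) reprojection error between the 3D pose and each view's 2D pose
    proj_err = sum(
        np.linalg.norm(project(P, pose3d) - x2d, axis=1).sum()
        for P, x2d in zip(cameras, poses2d)
    )
    # (b) discrepancy between limb directions and the IMU orientations,
    # expressed as a cosine distance (0 when perfectly aligned)
    ori_err = 0.0
    for (i, j) in LIMBS:
        limb = pose3d[j] - pose3d[i]
        limb = limb / (np.linalg.norm(limb) + 1e-8)
        ori_err += 1.0 - float(limb @ imu_dirs[(i, j)])
    return proj_err + lam * ori_err

In a pictorial structure model this kind of energy would be minimized over a discretized grid of candidate 3D joint locations rather than by gradient descent; the sketch only illustrates how the projection and orientation terms combine into one objective.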