Paper Title
From Point to Space: 3D Moving Human Pose Estimation Using Commodity WiFi
Paper Authors
Paper Abstract
In this paper, we present Wi-Mose, the first 3D moving human pose estimation system using commodity WiFi. Previous WiFi-based works have achieved 2D and 3D pose estimation, but these solutions either capture poses from a single perspective or reconstruct the poses of people at a fixed point, preventing their wide adoption in daily scenarios. To reconstruct the 3D poses of people who move throughout the space rather than stay at a fixed point, we fuse amplitude and phase into Channel State Information (CSI) images, which provide both pose and position information. In addition, we design a neural network that extracts features associated only with poses from the CSI images and then converts those features into key-point coordinates. Experimental results show that Wi-Mose localizes key-points with 29.7 mm and 37.8 mm Procrustes analysis Mean Per Joint Position Error (P-MPJPE) in the Line of Sight (LoS) and Non-Line of Sight (NLoS) scenarios, respectively, outperforming the state-of-the-art method. The results indicate that Wi-Mose can capture high-precision 3D human poses throughout the space.
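
The accuracy figures above are reported as P-MPJPE, i.e. the mean per-joint position error computed after a Procrustes (similarity) alignment of the predicted skeleton onto the ground truth. Below is a minimal NumPy sketch of that standard metric for a single pose; the function name p_mpjpe and the (J, 3) joint-array layout are illustrative assumptions, not the paper's actual code.

    import numpy as np

    def p_mpjpe(predicted, target):
        """Procrustes-aligned MPJPE for one pose.

        predicted, target: (J, 3) arrays of 3D key-point coordinates in the
        same unit (e.g. millimetres). Returns the mean per-joint error after
        the optimal similarity (scale + rotation + translation) alignment of
        `predicted` onto `target`.
        """
        mu_p, mu_t = predicted.mean(axis=0), target.mean(axis=0)
        P0, T0 = predicted - mu_p, target - mu_t        # centre both poses
        norm_p, norm_t = np.linalg.norm(P0), np.linalg.norm(T0)
        P0, T0 = P0 / norm_p, T0 / norm_t               # normalise scale

        H = T0.T @ P0                                   # 3x3 cross-covariance
        U, s, Vt = np.linalg.svd(H)
        V = Vt.T
        R = V @ U.T                                     # optimal rotation (applied as P @ R)
        if np.linalg.det(R) < 0:                        # avoid an improper rotation (reflection)
            V[:, -1] *= -1
            s[-1] *= -1
            R = V @ U.T

        scale = s.sum() * norm_t / norm_p               # optimal scale
        t = mu_t - scale * mu_p @ R                     # optimal translation
        aligned = scale * predicted @ R + t
        return np.linalg.norm(aligned - target, axis=1).mean()

    # Example: a prediction that only differs by a rigid translation
    # aligns perfectly, so its P-MPJPE is ~0.
    joints = np.random.rand(17, 3)
    print(p_mpjpe(joints + 5.0, joints))

Because the alignment removes global scale, rotation, and translation, P-MPJPE evaluates the pose shape itself, which is why it is a common companion metric for systems, like Wi-Mose, that also recover the subject's position separately.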