Paper Title
UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality
Paper Authors
Paper Abstract
Tracking body and hand motions in 3D space is essential for social and self-presence in augmented and virtual environments. Unlike the popular 3D pose estimation setting, the problem is often formulated as inside-out tracking based on embodied perception (e.g., egocentric cameras, handheld sensors). In this paper, we propose a new data-driven framework for inside-out body tracking, targeting the challenge of omnipresent occlusions in optimization-based methods (e.g., inverse kinematics solvers). We first collect a large-scale motion capture dataset with both body and finger motions using optical markers and inertial sensors. This dataset focuses on social scenarios and captures ground-truth poses under self-occlusions and body-hand interactions. We then simulate the occlusion patterns of head-mounted camera views on the captured ground truth using a ray casting algorithm and learn a deep neural network to infer the occluded body parts. In the experiments, we show that our method generates high-fidelity embodied poses when applied to the tasks of real-time inside-out body tracking, finger motion synthesis, and 3-point inverse kinematics.
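The occlusion-simulation step described in the abstract can be illustrated with a minimal sketch: cast a ray from the head-mounted camera to each body joint and mark the joint occluded if any intervening body segment (approximated here as spheres) blocks the ray first. This is an illustrative assumption, not the paper's actual implementation; the function names, the sphere approximation, and the default radius are all hypothetical.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-direction ray to its first sphere
    intersection, or None if the ray misses the sphere."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None  # ignore hits behind the origin

def occluded_joints(camera, joints, blockers, radius=0.08):
    """For each joint, return True if some blocker sphere lies
    between the camera and that joint along the line of sight."""
    mask = []
    for joint in joints:
        d = joint - camera
        dist = np.linalg.norm(d)
        d = d / dist  # unit ray direction toward the joint
        hit = any(
            (t := ray_sphere_hit(camera, d, c, radius)) is not None
            and t < dist - radius  # blocker is strictly in front of the joint
            for c in blockers
        )
        mask.append(hit)
    return mask
```

In a full pipeline, such a per-joint visibility mask would be used to drop or corrupt the occluded joints of the ground-truth pose before training the network to infer them.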