Paper Title
HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction
Paper Authors
Paper Abstract
We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze research on category-level human-object interaction. HOI4D consists of 2.4M RGB-D egocentric video frames over 4,000 sequences, collected by 4 participants interacting with 800 different object instances from 16 categories in 610 different indoor rooms. Frame-wise annotations for panoptic segmentation, motion segmentation, 3D hand pose, category-level object pose, and hand action are also provided, together with reconstructed object meshes and scene point clouds. With HOI4D, we establish three benchmarking tasks to promote category-level HOI from 4D visual signals: semantic segmentation of 4D dynamic point cloud sequences, category-level object pose tracking, and egocentric action segmentation with diverse interaction targets. In-depth analysis shows that HOI4D poses great challenges to existing methods and opens up significant research opportunities.