Title
Object Pose Estimation using Mid-level Visual Representations
Authors
Abstract
This work proposes a novel pose estimation model for object categories that can be effectively transferred to previously unseen environments. Deep convolutional network (CNN) models for pose estimation are typically trained and evaluated on datasets specifically curated for object detection, pose estimation, or 3D reconstruction, which requires large amounts of training data. In this work, we propose a model for pose estimation that can be trained with a small amount of data and is built on top of generic mid-level representations \cite{taskonomy2018} (e.g., surface normal estimation and re-shading). These representations are trained on a large dataset without requiring pose or object annotations. The predictions are then refined by a small CNN that exploits object masks and silhouette retrieval. The presented approach achieves superior performance on the Pix3D dataset \cite{pix3d} and shows nearly 35\% improvement over existing models when only 25\% of the training data is available. We show that the approach is favorable when it comes to generalization and transfer to novel environments. Towards this end, we introduce a new pose estimation benchmark for commonly encountered furniture categories on the challenging Active Vision Dataset \cite{Ammirato2017ADF} and evaluate models trained on the Pix3D dataset.
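The two-stage design in the abstract (frozen, generic mid-level representations feeding a small trainable pose head) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all shapes, function names, and the linear regressor head are assumptions standing in for the pretrained mid-level encoders and the small refinement CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def midlevel_features(image):
    """Stand-in for frozen mid-level networks (surface normals, re-shading).
    In the described system these would come from pretrained encoders and
    are NOT updated during pose training; here they are random placeholders."""
    normals = rng.standard_normal((8, 8, 3))   # surface-normal map (placeholder)
    reshade = rng.standard_normal((8, 8, 1))   # re-shading map (placeholder)
    return np.concatenate([normals, reshade], axis=-1)  # (8, 8, 4)

def small_pose_head(features, weights):
    """Small trainable head: pools the mid-level maps and regresses a pose,
    e.g. (azimuth, elevation). The actual refinement network is a small CNN
    that also exploits the object mask and retrieved silhouettes."""
    pooled = features.mean(axis=(0, 1))        # global average pooling -> (4,)
    return pooled @ weights                    # linear pose regressor -> (2,)

image = rng.standard_normal((64, 64, 3))       # dummy RGB input
weights = rng.standard_normal((4, 2))          # only the head weights are trained
pose = small_pose_head(midlevel_features(image), weights)
print(pose.shape)  # (2,)
```

Because only the small head is learned while the mid-level encoders stay frozen, far fewer labeled pose examples are needed, which is the source of the data efficiency claimed in the abstract.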