Paper Title

ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation

Paper Authors

Siyuan Qiao, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen

Paper Abstract

In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task as Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The datasets and the evaluation codes are made publicly available.
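The abstract frames the inverse projection problem as lifting per-pixel depth, semantic class, and instance-id predictions into a labeled 3D point cloud. Below is a minimal, hypothetical sketch of that back-projection under a pinhole camera model; the intrinsics (fx, fy, cx, cy) and the prediction arrays are illustrative placeholders, not the authors' implementation.

    # A minimal sketch (not ViP-DeepLab's code) of the inverse projection described
    # in the abstract: lifting per-pixel depth, semantic class, and instance id
    # predictions into a labeled 3D point cloud.
    import numpy as np

    def back_project(depth, semantic, instance, fx, fy, cx, cy):
        """Lift H x W predictions into an (N, 5) array of [X, Y, Z, class, id] points."""
        h, w = depth.shape
        # Pixel grid in image coordinates.
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points = np.stack([x, y, z, semantic.astype(float), instance.astype(float)], axis=-1)
        # Keep only pixels with valid (positive) depth.
        return points[z > 0]

    # Toy usage with random arrays standing in for network outputs.
    H, W = 4, 5
    depth = np.random.uniform(1.0, 50.0, size=(H, W))
    semantic = np.random.randint(0, 19, size=(H, W))   # e.g. Cityscapes-style classes
    instance = np.random.randint(0, 100, size=(H, W))  # temporally consistent ids
    cloud = back_project(depth, semantic, instance, fx=720.0, fy=720.0, cx=W / 2, cy=H / 2)
    print(cloud.shape)  # (N, 5): 3D location plus semantic class and instance id per point

Running this per frame of a video, with instance ids kept consistent across frames, yields exactly the output the joint task asks for: a spatial location, a semantic class, and a temporally consistent instance label for each 3D point.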
