Paper Title


Decoder Modulation for Indoor Depth Completion

Authors

Dmitry Senushkin, Mikhail Romanov, Ilia Belikov, Anton Konushin, Nikolay Patakin

Abstract


Depth completion recovers a dense depth map from sensor measurements. Current methods are mostly tailored for the very sparse depth measurements produced by LiDARs in outdoor settings, whereas indoor scenes are mostly captured with Time-of-Flight (ToF) or structured light sensors. These sensors provide semi-dense maps, with dense measurements in some regions and almost none in others. We propose a new model that takes into account the statistical difference between such regions. Our main contribution is a new decoder modulation branch added to the encoder-decoder architecture. The encoder extracts features from the concatenated RGB image and raw depth. Given the mask of missing values as input, the proposed modulation branch controls the decoding of a dense depth map from these features differently for different regions. This is implemented by modifying the spatial distribution of output signals inside the decoder via Spatially-Adaptive Denormalization (SPADE) blocks. Our second contribution is a novel training strategy that allows us to train on semi-dense sensor data when no ground truth depth map is available. Our model achieves state-of-the-art results on the indoor Matterport3D dataset. Although designed for semi-dense input depth, our model is still competitive with LiDAR-oriented approaches on the KITTI dataset. Our training strategy significantly improves prediction quality when no dense ground truth is available, as validated on the NYUv2 dataset.
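The core mechanism the abstract describes — spatially-adaptive denormalization conditioned on the mask of missing depth values — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: a real SPADE block predicts the scale and shift maps with small learned convolutions over the mask, whereas here hypothetical per-channel weights `w_gamma` and `w_beta` stand in for those learned layers.

```python
import numpy as np

def spade_modulate(features, mask, w_gamma, w_beta, eps=1e-5):
    """Simplified spatially-adaptive denormalization (SPADE).

    features: (C, H, W) decoder activations
    mask:     (H, W) map of valid sensor depth (1 = measured, 0 = missing)
    w_gamma, w_beta: (C,) per-channel weights standing in for the
                     learned convolutions of a real SPADE block
    """
    # Parameter-free normalization over the spatial dimensions
    mu = features.mean(axis=(1, 2), keepdims=True)
    sigma = features.std(axis=(1, 2), keepdims=True)
    normed = (features - mu) / (sigma + eps)

    # Spatially-varying scale and shift derived from the mask, so that
    # regions with and without sensor measurements are decoded differently
    gamma = w_gamma[:, None, None] * mask[None]   # (C, H, W)
    beta = w_beta[:, None, None] * mask[None]
    return normed * (1.0 + gamma) + beta
```

Where the mask is zero, the block reduces to plain normalization; where measurements exist, the activations are rescaled and shifted, which is how the modulation branch steers the decoder per region.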
