Paper Title
Video super-resolution for single-photon LIDAR
Paper Authors
Paper Abstract
3D Time-of-Flight (ToF) image sensors are used widely in applications such as self-driving cars, Augmented Reality (AR) and robotics. When implemented with Single-Photon Avalanche Diodes (SPADs), compact, array-format sensors can be made that offer accurate depth maps over long distances, without the need for mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with low Signal-to-Noise Ratio (SNR) levels under high ambient illumination, may lead to difficulties in scene interpretation. In this paper, we use synthetic depth sequences to train a 3D Convolutional Neural Network (CNN) for denoising and upscaling (x4) depth data. Experimental results, based on synthetic as well as real ToF data, are used to demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
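To make the general approach concrete, below is a minimal sketch, assuming a PyTorch implementation, of a 3D CNN that jointly denoises and x4-upscales a short sequence of low-resolution depth frames. The class name DepthVideoSR3D, the layer counts, channel widths and the interpolation-based upsampling step are illustrative assumptions, not the authors' architecture; only the general idea (3D convolutions over a space-time depth volume, followed by x4 spatial upscaling) comes from the abstract.

```python
# A minimal sketch (not the paper's exact architecture) of a 3D CNN that
# denoises and x4-upscales a sequence of low-resolution SPAD depth frames.
# Layer counts, channel widths and the upsampling strategy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthVideoSR3D(nn.Module):
    def __init__(self, in_channels: int = 1, features: int = 32, scale: int = 4):
        super().__init__()
        self.scale = scale
        # 3D convolutions mix information across time (T) and space (H, W),
        # letting the network exploit temporal redundancy for denoising.
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Final 3D conv maps features back to a single depth channel.
        self.tail = nn.Conv3d(features, in_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, T, H, W) noisy, low-resolution depth sequence.
        residual = self.tail(self.body(x))
        # Upscale only the spatial dimensions (H, W) by the target factor;
        # a scale factor of 1 along time keeps the number of frames T unchanged.
        up = F.interpolate(
            x + residual,
            scale_factor=(1, self.scale, self.scale),
            mode="trilinear",
            align_corners=False,
        )
        return up  # (N, 1, T, 4H, 4W) denoised, upscaled depth sequence


if __name__ == "__main__":
    model = DepthVideoSR3D()
    lr_depth = torch.randn(1, 1, 8, 32, 32)  # e.g. 8 frames of 32x32 depth data
    sr_depth = model(lr_depth)
    print(sr_depth.shape)                    # torch.Size([1, 1, 8, 128, 128])
```

In practice such a model would be trained on pairs of noisy low-resolution and clean high-resolution synthetic depth sequences, as the abstract describes, and could be exported to a GPU runtime for real-time inference.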