Paper Title

Learning Monocular Dense Depth from Events

Paper Authors

Javier Hidalgo-Carrió, Daniel Gehrig, Davide Scaramuzza

Paper Abstract

Event cameras are novel sensors that output brightness changes in the form of a stream of asynchronous events instead of intensity frames. Compared to conventional image sensors, they offer significant advantages: high temporal resolution, high dynamic range, no motion blur, and much lower bandwidth. Recently, learning-based approaches have been applied to event-based data, thus unlocking their potential and making significant progress in a variety of tasks, such as monocular depth prediction. Most existing approaches use standard feed-forward architectures to generate network predictions, which do not leverage the temporal consistency present in the event stream. We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods. In particular, our method generates dense depth predictions using a monocular setup, which has not been shown previously. We pretrain our model using a new dataset containing events and depth maps recorded in the CARLA simulator. We test our method on the Multi Vehicle Stereo Event Camera Dataset (MVSEC). Quantitative experiments show up to 50% improvement in average depth error with respect to previous event-based methods.
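
To make the recurrent idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of pipeline the abstract describes: events accumulated into a voxel-grid tensor are encoded, passed through a ConvLSTM cell whose state is carried across consecutive event windows, and decoded into a dense depth map. Everything here is an illustrative assumption — the voxel-grid input, the ConvLSTM bottleneck, the normalized depth output, and all names (ConvLSTMCell, RecurrentDepthNet, the 5-bin input) — not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates from one convolution over [input, hidden]."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class RecurrentDepthNet(nn.Module):
    """Hypothetical sketch: conv encoder -> ConvLSTM bottleneck -> deconv decoder,
    mapping a 5-bin event voxel grid to a dense depth map in [0, 1]
    (readable as normalized depth -- an assumption, not the paper's spec)."""
    def __init__(self, bins=5, base=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(bins, base, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(base, 2 * base, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.rnn = ConvLSTMCell(2 * base, 2 * base)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, voxels, state=None):
        feat = self.enc(voxels)
        if state is None:  # zero-initialize hidden and cell state on the first window
            zeros = torch.zeros_like(feat)
            state = (zeros, zeros.clone())
        h, c = self.rnn(feat, state)
        return self.dec(h), (h, c)

# The LSTM state is carried across consecutive event windows, which is what
# lets the model exploit temporal consistency, unlike a feed-forward network.
net = RecurrentDepthNet()
state = None
for _ in range(3):                          # three consecutive event windows
    voxels = torch.randn(1, 5, 128, 128)    # stand-in for a real event voxel grid
    depth, state = net(voxels, state)
print(depth.shape)                          # torch.Size([1, 1, 128, 128])
```

Carrying the (h, c) state from one window to the next is the key difference from the feed-forward baselines the abstract criticizes: each new depth prediction can reuse scene structure inferred from earlier events rather than starting from scratch.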
