Paper Title

Asynchronous Optimisation for Event-based Visual Odometry

Authors

Liu, Daqi, Parra, Alvaro, Latif, Yasir, Chen, Bo, Chin, Tat-Jun, Reid, Ian

Abstract

Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range. On the other hand, developing effective event-based vision algorithms that fully exploit the beneficial properties of event cameras remains work in progress. In this paper, we focus on event-based visual odometry (VO). While existing event-driven VO pipelines have adopted continuous-time representations to asynchronously process event data, they either assume a known map, restrict the camera to planar trajectories, or integrate other sensors into the system. Towards map-free event-only monocular VO in SE(3), we propose an asynchronous structure-from-motion optimisation back-end. Our formulation is underpinned by a principled joint optimisation problem involving non-parametric Gaussian Process motion modelling and incremental maximum a posteriori inference. A high-performance incremental computation engine is employed to reason about the camera trajectory with every incoming event. We demonstrate the robustness of our asynchronous back-end in comparison to frame-based methods which depend on accurate temporal accumulation of measurements.
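As a rough, illustrative sketch (an assumed generic form, not the paper's exact formulation or notation), the kind of joint MAP objective the abstract refers to, combining a Gaussian Process motion prior over the continuous-time trajectory with per-event measurement residuals, can be written as

$$
\{x^{*}(t),\,\ell^{*}\} \;=\; \operatorname*{arg\,min}_{x(t),\,\ell}\; \tfrac{1}{2}\big\| e_{\mathrm{gp}}\big(x(t)\big) \big\|^{2}_{\mathcal{Q}} \;+\; \sum_{k} \tfrac{1}{2}\big\| e_{k}\big(x(t_{k}),\,\ell\big) \big\|^{2}_{R_{k}},
$$

where x(t) denotes the SE(3) camera trajectory, \ell the map points, e_gp the GP motion-prior residual with covariance Q, and e_k the measurement residual of event k obtained by querying the trajectory at the event's timestamp t_k (Q, R_k and the residual definitions here are placeholders, not the paper's symbols). Under this view, asynchronous processing means that each incoming event contributes one residual term and the MAP estimate is updated incrementally, rather than only after events have been accumulated into frames.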
