Paper Title
Visual Language Maps for Robot Navigation
Paper Authors
Paper Abstract
Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions). While this is useful for matching images to natural language descriptions of object goals, it remains disjoint from the process of mapping the environment, and it therefore lacks the spatial precision of classic geometric maps. To address this problem, we propose VLMaps, a spatial map representation that directly fuses pretrained visual-language features with a 3D reconstruction of the physical world. VLMaps can be built autonomously from video feeds on robots using standard exploration approaches and enable natural language indexing of the map without additional labeled data. Specifically, when combined with large language models (LLMs), VLMaps can be used to (i) translate natural language commands into a sequence of open-vocabulary navigation goals (which, beyond prior work, can be spatial by construction, e.g., "in between the sofa and TV" or "three meters to the right of the chair") directly localized in the map, and (ii) be shared among multiple robots with different embodiments to generate new obstacle maps on the fly (by using a list of obstacle categories). Extensive experiments carried out in simulated and real-world environments show that VLMaps enable navigation according to more complex language instructions than existing methods. Videos are available at https://vlmaps.github.io.
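To make the abstract's mechanics concrete, below is a minimal Python sketch of the core idea it describes: a top-down grid map storing one visual-language feature per cell, indexed by open-vocabulary text queries, reused for obstacle maps and spatial goals. Everything here is an illustrative assumption, not the authors' code or API: `embed_text` is a deterministic stand-in for a pretrained model's text encoder (e.g., CLIP/LSeg), and `CELL_SIZE`, the similarity threshold, and the spatial-goal helpers are hypothetical names.

```python
import zlib
import numpy as np

# Minimal sketch (not the authors' implementation) of a VLMap-style query
# pipeline, assuming a top-down grid map `vlmap` of shape (H, W, D) that
# stores one L2-normalized visual-language feature per cell.

def embed_text(query: str, dim: int = 512) -> np.ndarray:
    """Stand-in for a pretrained visual-language text encoder (e.g., CLIP).

    Returns a deterministic random unit vector so the sketch runs without
    model weights; a real system would call the model's text tower here.
    """
    rng = np.random.default_rng(zlib.crc32(query.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def index_map(vlmap: np.ndarray, query: str) -> np.ndarray:
    """Cosine-similarity heatmap (H, W) of every grid cell vs. a text query."""
    text_feat = embed_text(query, dim=vlmap.shape[-1])
    return vlmap @ text_feat

def obstacle_map(vlmap: np.ndarray, categories: list[str],
                 thresh: float = 0.25) -> np.ndarray:
    """Binary obstacle map from a list of obstacle category names.

    Robots with different embodiments can pass different category lists
    (e.g., a drone might omit 'carpet') to derive different obstacle maps
    from the same shared VLMap.
    """
    scores = np.stack([index_map(vlmap, c) for c in categories])
    return scores.max(axis=0) > thresh

def localize(vlmap: np.ndarray, query: str) -> tuple[int, int]:
    """Take the argmax cell of the heatmap as the object's grid location."""
    heat = index_map(vlmap, query)
    r, c = np.unravel_index(np.argmax(heat), heat.shape)
    return int(r), int(c)

CELL_SIZE = 0.05  # assumed map resolution, meters per grid cell

def meters_right_of(vlmap: np.ndarray, obj: str, meters: float) -> tuple[int, int]:
    """Resolve 'N meters to the right of <obj>' to a grid goal.

    Assumes the +column direction is 'right'; in the paper, an LLM turns
    the language command into calls against primitives like this one.
    """
    r, c = localize(vlmap, obj)
    return r, c + round(meters / CELL_SIZE)

def between(vlmap: np.ndarray, a: str, b: str) -> tuple[int, int]:
    """Resolve 'in between <a> and <b>' as the midpoint of the two objects."""
    (ra, ca), (rb, cb) = localize(vlmap, a), localize(vlmap, b)
    return (ra + rb) // 2, (ca + cb) // 2

# Usage on a random 100x100 map with 512-dim features per cell.
rng = np.random.default_rng(0)
vlmap = rng.normal(size=(100, 100, 512))
vlmap /= np.linalg.norm(vlmap, axis=-1, keepdims=True)
print(meters_right_of(vlmap, "chair", 3.0))
print(between(vlmap, "sofa", "TV"))
print(obstacle_map(vlmap, ["table", "chair", "wall"]).sum())
```

The design point the sketch is meant to surface: once language features live in the map's grid cells, open-vocabulary queries reduce to the same argmax and threshold operations used on classic occupancy maps, which is what gives the spatial precision the abstract claims over image-level matching.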