Paper Title

Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation

Paper Authors

Gyan Tatiya, Jonathan Francis, Luca Bondi, Ingrid Navarro, Eric Nyberg, Jivko Sinapov, Jean Oh

Paper Abstract

Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.
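To make the abstract's central idea concrete, the sketch below shows the kind of object-region knowledge graph it describes: nodes for objects and regions, with edges encoding "this object is typically found in this region", so an agent can narrow its search for a novel sounding object to plausible regions. All names, the class structure, and the example relations are illustrative assumptions, not the authors' actual graph or API.

```python
from collections import defaultdict

class SceneKnowledgeGraph:
    """Toy knowledge graph over object-region relations (hypothetical sketch)."""

    def __init__(self):
        # region -> objects typically found there
        self.region_to_objects = defaultdict(set)
        # object -> candidate regions where it may be located
        self.object_to_regions = defaultdict(set)

    def add_relation(self, obj: str, region: str) -> None:
        """Record that `obj` is typically located in `region`."""
        self.region_to_objects[region].add(obj)
        self.object_to_regions[obj].add(region)

    def candidate_regions(self, obj: str) -> set:
        """Regions a navigation agent might prioritise when searching for `obj`."""
        return self.object_to_regions.get(obj, set())


# Example relations (illustrative only)
kg = SceneKnowledgeGraph()
kg.add_relation("piano", "living room")
kg.add_relation("tv", "living room")
kg.add_relation("shower", "bathroom")

print(sorted(kg.candidate_regions("piano")))
```

In the paper's framework, such relational structure is encoded by graph encoder networks and combined with pre-trained background knowledge inside a reinforcement-learning policy; this dictionary-based version only illustrates the object-region priors themselves.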
