Paper Title
Instruction-Following Agents with Multimodal Transformer
Paper Authors
Paper Abstract
Humans are excellent at understanding language and vision to accomplish a wide range of tasks. In contrast, creating general instruction-following embodied agents remains a difficult challenge. Prior work that uses language-only models lacks visual grounding, making it difficult to connect language instructions with visual observations. On the other hand, methods that use pre-trained multimodal models typically come with separate language and visual representations, requiring specialized network architectures to be designed to fuse them together. We propose a simple yet effective model for robots to solve instruction-following tasks in vision-based environments. Our method consists of a multimodal transformer that encodes visual observations and language instructions, and a transformer-based policy that predicts actions based on the encoded representations. The multimodal transformer is pre-trained on millions of image-text pairs and natural-language text, producing generic cross-modal representations of observations and instructions. The transformer-based policy keeps track of the full history of observations and actions, and predicts actions autoregressively. Despite its simplicity, we show that this unified transformer model outperforms all state-of-the-art pre-trained or trained-from-scratch methods in both single-task and multi-task settings. Our model also shows better scalability and generalization ability than prior work.
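The abstract describes the architecture at a high level: a multimodal transformer jointly encodes image patches and instruction tokens into per-step observation embeddings, and a causal transformer policy attends over the full history of observations and past actions to predict the next action autoregressively. The sketch below is a minimal, hypothetical PyTorch illustration of that layout, not the paper's released code; the module names, dimensions, action vocabulary, and the untrained stand-in encoder (in place of the pre-trained multimodal model) are all assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's actual implementation) of the
# described architecture: a multimodal transformer encodes image patches and
# instruction tokens, and a causal transformer policy predicts actions
# autoregressively from the history of observations and previous actions.
# Positional embeddings and other details are omitted for brevity.
import torch
import torch.nn as nn


class MultimodalEncoder(nn.Module):
    """Stand-in for the pre-trained multimodal transformer (trained on
    image-text pairs in the paper); here an untrained transformer over
    concatenated image-patch and instruction-token embeddings."""

    def __init__(self, d_model=256, vocab_size=1000):
        super().__init__()
        self.patch_proj = nn.Linear(3 * 16 * 16, d_model)   # flattened 16x16 RGB patches
        self.text_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, patches, instruction_ids):
        # patches: (B, n_patches, 3*16*16); instruction_ids: (B, L)
        tokens = torch.cat([self.patch_proj(patches),
                            self.text_embed(instruction_ids)], dim=1)
        enc = self.encoder(tokens)          # cross-modal token representations
        return enc.mean(dim=1)              # pooled per-step observation embedding


class TransformerPolicy(nn.Module):
    """Causal transformer over the full history of observation embeddings
    and previous actions; predicts the next action autoregressively."""

    def __init__(self, d_model=256, n_actions=18):
        super().__init__()
        self.action_embed = nn.Embedding(n_actions, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, obs_embeds, prev_actions):
        # obs_embeds: (B, T, d); prev_actions: (B, T), dummy action at t=0
        x = obs_embeds + self.action_embed(prev_actions)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.backbone(x, mask=mask)     # causal attention over the history
        return self.action_head(h)          # (B, T, n_actions) action logits


if __name__ == "__main__":
    B, T = 2, 5
    enc, policy = MultimodalEncoder(), TransformerPolicy()
    patches = torch.randn(B * T, 64, 3 * 16 * 16)
    instr = torch.randint(0, 1000, (B * T, 12))
    obs = enc(patches, instr).view(B, T, -1)       # per-timestep embeddings
    prev_actions = torch.randint(0, 18, (B, T))
    logits = policy(obs, prev_actions)
    print(logits.shape)                            # torch.Size([2, 5, 18])
```

In this sketch the encoder is applied independently at each timestep, and only the policy attends across time, which matches the abstract's split between a cross-modal observation encoder and a history-tracking policy; the specific fusion of observation and action embeddings by addition is an illustrative choice, not taken from the paper.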