Paper Title
SceneTrilogy: On Human Scene-Sketch and its Complementarity with Photo and Text
Paper Authors
Paper Abstract
In this paper, we extend scene understanding to include that of human sketch. The result is a complete trilogy of scene representation from three diverse and complementary modalities -- sketch, photo, and text. Rather than learning a rigid three-way embedding and being done with it, we focus on learning a flexible joint embedding that fully supports the ``optionality'' that this complementarity brings. Our embedding supports optionality on two axes: (i) optionality across modalities -- any combination of modalities can serve as the query for downstream tasks like retrieval; (ii) optionality across tasks -- the same embedding can be used simultaneously for discriminative tasks (e.g., retrieval) and generative tasks (e.g., captioning). This provides flexibility to end-users by exploiting the best of each modality, thereby serving the very purpose behind our proposal of a trilogy in the first place. First, a combination of information-bottleneck and conditional invertible neural networks disentangles the modality-specific component from the modality-agnostic one in sketch, photo, and text. Second, the modality-agnostic instances from sketch, photo, and text are synergised using a modified cross-attention. Once learned, we show our embedding can accommodate multiple facets of scene-related tasks, including those enabled for the first time by the inclusion of sketch, all without any task-specific modifications. Project Page: \url{http://www.pinakinathc.me/scenetrilogy}
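The abstract describes the architecture only at a high level. As a rough illustration of the "optionality across modalities" idea, the NumPy sketch below fuses modality-agnostic token embeddings from any subset of {sketch, photo, text} with scaled dot-product cross-attention, each present modality attending to the others. All function names, dimensions, and the pooling scheme are hypothetical assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query, context, d=64):
    """Scaled dot-product cross-attention (single head, no projections):
    query tokens (Tq, d) attend to context tokens (Tc, d) -> (Tq, d)."""
    scores = query @ context.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context

def fuse(modalities, d=64):
    """Fuse modality-agnostic embeddings from ANY subset of modalities.
    Each modality attends to the concatenation of the others; the
    attended tokens are then mean-pooled into one joint embedding (d,).
    (Hypothetical fusion scheme, not the paper's exact module.)"""
    names = list(modalities)
    fused = []
    for name in names:
        others = [modalities[m] for m in names if m != name]
        # With a single modality, fall back to self-attention.
        context = np.concatenate(others) if others else modalities[name]
        fused.append(cross_attend(modalities[name], context, d))
    return np.concatenate(fused).mean(axis=0)

rng = np.random.default_rng(0)
emb = {"sketch": rng.normal(size=(5, 64)),   # 5 sketch tokens
       "text":   rng.normal(size=(7, 64))}   # 7 text tokens
joint = fuse(emb)  # photo omitted: any combination works as a query
print(joint.shape)  # (64,)
```

The same `joint` vector could then feed either a retrieval head or a captioning decoder, mirroring the abstract's "optionality across tasks": no part of the fusion is specific to one downstream task.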