Paper Title

Cross-modal Semantic Enhanced Interaction for Image-Sentence Retrieval

Authors

Xuri Ge, Fuhai Chen, Songpei Xu, Fuxiang Tao, Joemon M. Jose

Abstract


Image-sentence retrieval has attracted extensive research attention in multimedia and computer vision due to its promising applications. The key issue lies in jointly learning the visual and textual representations to accurately estimate their similarity. To this end, the mainstream scheme adopts object-word based attention to calculate their relevance scores and refine their interactive representations with the attention features, which, however, neglects the context of the object representation on the inter-object relationships that match the predicates in sentences. In this paper, we propose a Cross-modal Semantic Enhanced Interaction method, termed CMSEI, for image-sentence retrieval, which correlates the intra- and inter-modal semantics between objects and words. In particular, we first design intra-modal spatial and semantic graph based reasoning to enhance the semantic representations of objects, guided by the explicit relationships of the objects' spatial positions and their scene graph. Then the visual and textual semantic representations are refined jointly via inter-modal interactive attention and cross-modal alignment. To correlate the context of objects with the textual context, we further refine the visual semantic representation via cross-level object-sentence and word-image based interactive attention. Experimental results on seven standard evaluation metrics show that the proposed CMSEI outperforms the state-of-the-art and alternative approaches on the MS-COCO and Flickr30K benchmarks.
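The pipeline the abstract describes — intra-modal graph reasoning over object features, followed by cross-modal attention between objects and words, followed by a similarity score — can be sketched in miniature. The following is a minimal NumPy illustration of those generic building blocks, not the authors' CMSEI implementation: all function names, feature dimensions, and the temperature value are illustrative assumptions, and the real model learns its parameters end-to-end.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_reasoning(feats, adj, w):
    # One GCN-style step: each object aggregates its neighbors'
    # features under a (spatial or semantic) graph, then projects.
    # feats: (n_objects, d), adj: (n_objects, n_objects), w: (d, d)
    deg = adj.sum(axis=-1, keepdims=True) + 1e-8
    return np.maximum((adj / deg) @ feats @ w, 0.0)  # ReLU

def cross_modal_attention(queries, keys, temperature=9.0):
    # Each query (e.g. a word) attends over the keys (e.g. objects)
    # and is replaced by the attention-weighted mix of the keys.
    sims = queries @ keys.T                 # (n_q, n_k) relevance scores
    attn = softmax(temperature * sims, axis=-1)
    return attn @ keys                      # (n_q, d) attended features

def similarity(img_feats, txt_feats):
    # Pooled cosine similarity between the two modalities.
    v, t = img_feats.mean(axis=0), txt_feats.mean(axis=0)
    return float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t) + 1e-8))

# Toy usage with random features standing in for detector/encoder outputs.
rng = np.random.default_rng(0)
objects = rng.standard_normal((5, 8))           # 5 object regions, dim 8
words = rng.standard_normal((7, 8))             # 7 word embeddings, dim 8
adj = (rng.random((5, 5)) > 0.5).astype(float)  # toy scene-graph adjacency
w = 0.1 * rng.standard_normal((8, 8))

objects_ctx = graph_reasoning(objects, adj, w)        # relation-enhanced objects
words_att = cross_modal_attention(words, objects_ctx) # object-grounded words
score = similarity(objects_ctx, words_att)            # image-sentence relevance
```

In training, a score like this would feed a ranking loss (e.g. a triplet loss over matched and mismatched image-sentence pairs); the sketch only shows the forward computation.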
