Paper Title

MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding

Paper Authors

Qinxin Wang, Hao Tan, Sheng Shen, Michael W. Mahoney, Zhewei Yao

Abstract

Phrase localization is a task that studies the mapping from textual phrases to regions of an image. Given difficulties in annotating phrase-to-object datasets at scale, we develop a Multimodal Alignment Framework (MAF) to leverage more widely-available caption-image datasets, which can then be used as a form of weak supervision. We first present algorithms to model phrase-object relevance by leveraging fine-grained visual representations and visually-aware language representations. By adopting a contrastive objective, our method uses information in caption-image pairs to boost the performance in weakly-supervised scenarios. Experiments conducted on the widely-adopted Flickr30k dataset show a significant improvement over existing weakly-supervised methods. With the help of the visually-aware language representations, we can also improve the previous best unsupervised result by 5.56%. We conduct ablation studies to show that both our novel model and our weakly-supervised strategies significantly contribute to our strong results.
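The contrastive objective described above can be illustrated with a minimal sketch: phrase-object relevance is scored by dot-product similarity, aggregated into a caption-image score, and the matched image is pushed to score higher than the other images in the batch. This is a simplified illustration, not the authors' exact formulation; all function names and the max-then-mean aggregation are assumptions for the example.

```python
import numpy as np

def phrase_object_scores(phrase_feats, object_feats):
    """Dot-product relevance between each phrase and each object region.
    phrase_feats: (P, d); object_feats: (O, d). Returns a (P, O) matrix."""
    return phrase_feats @ object_feats.T

def caption_image_score(phrase_feats, object_feats):
    """Aggregate phrase-object scores into one caption-image score:
    each phrase takes its best-matching region, then scores are averaged
    (one common aggregation choice in weakly-supervised grounding)."""
    sims = phrase_object_scores(phrase_feats, object_feats)
    return sims.max(axis=1).mean()

def contrastive_loss(caption_batch, image_batch):
    """InfoNCE-style loss over a batch of matched caption-image pairs:
    caption i should score its own image higher than the other images."""
    scores = np.array([[caption_image_score(c, o) for o in image_batch]
                       for c in caption_batch])  # (B, B), diagonal = matched pairs
    logits = scores - scores.max(axis=1, keepdims=True)  # stabilize softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # maximize matched-pair probability
```

Because only caption-level pairings are supervised, the phrase-to-region alignment inside `caption_image_score` is learned implicitly, which is what makes the setting weakly supervised.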
