Paper Title

Training Vision-Language Transformers from Captions

Paper Authors

Liangke Gui, Yingshan Chang, Qiuyuan Huang, Subhojit Som, Alex Hauptmann, Jianfeng Gao, Yonatan Bisk

Paper Abstract


Vision-Language Transformers can be learned without low-level human labels (e.g. class labels, bounding boxes, etc). Existing work, whether explicitly utilizing bounding boxes or patches, assumes that the visual backbone must first be trained on ImageNet class prediction before being integrated into a multimodal linguistic pipeline. We show that this is not necessary and introduce a new model Vision-Language from Captions (VLC) built on top of Masked Auto-Encoders that does not require this supervision. In fact, in a head-to-head comparison between ViLT, the current state-of-the-art patch-based vision-language transformer which is pretrained with supervised object classification, and our model, VLC, we find that our approach 1. outperforms ViLT on standard benchmarks, 2. provides more interpretable and intuitive patch visualizations, and 3. is competitive with many larger models that utilize ROIs trained on annotated bounding-boxes.
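For readers who want a concrete picture of the architecture the abstract describes, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a single-stream, patch-based vision-language transformer in the spirit of VLC: image patches and text tokens are embedded and fused in one shared Transformer encoder, and the visual side is initialized from self-supervised Masked Auto-Encoder weights rather than an ImageNet-classification backbone. Class names, dimensions, and the `mae_state_dict` checkpoint are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation) of a
# single-stream, patch-based vision-language transformer: image patches and
# text tokens share one encoder, and the visual patch embedding can be
# initialized from a self-supervised Masked Auto-Encoder checkpoint instead
# of an ImageNet-supervised backbone.
import torch
import torch.nn as nn

class VLCSketch(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=30522, dim=768, depth=12, heads=12,
                 img_size=224, patch_size=16, max_text_len=40):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Linear patch projection, as in ViT/MAE (no CNN backbone, no region proposals).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.patch_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        # Text embeddings (word + position); BERT-style vocabulary size assumed.
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.text_pos = nn.Parameter(torch.zeros(1, max_text_len, dim))
        # Modality-type embeddings distinguish image tokens from text tokens.
        self.type_embed = nn.Embedding(2, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, images, token_ids):
        # images: (B, 3, H, W); token_ids: (B, L)
        patches = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
        patches = patches + self.patch_pos + self.type_embed.weight[0]
        text = (self.token_embed(token_ids)
                + self.text_pos[:, :token_ids.size(1)]
                + self.type_embed.weight[1])
        # Single-stream fusion: concatenate both modalities and encode jointly.
        return self.encoder(torch.cat([patches, text], dim=1))

# The paper's key point, expressed here only schematically: initialize the
# visual side from self-supervised MAE weights rather than an
# ImageNet-classification checkpoint, e.g.
#   model = VLCSketch()
#   model.load_state_dict(mae_state_dict, strict=False)  # hypothetical checkpoint
```

Caption-level objectives such as masked language modeling and image-text matching would then be trained on top of the fused sequence; the sketch above shows only the shared encoder, not those pretraining heads.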
