Paper Title

Visual Commonsense in Pretrained Unimodal and Multimodal Models

Authors

Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, Elias Stengel-Eskin

Abstract


Our commonsense knowledge about objects includes their typical visual attributes; we know that bananas are typically yellow or green, and not purple. Text and image corpora, being subject to reporting bias, represent this world-knowledge to varying degrees of faithfulness. In this paper, we investigate to what degree unimodal (language-only) and multimodal (image and language) models capture a broad range of visually salient attributes. To that end, we create the Visual Commonsense Tests (ViComTe) dataset covering 5 property types (color, shape, material, size, and visual co-occurrence) for over 5000 subjects. We validate this dataset by showing that our grounded color data correlates much better than ungrounded text-only data with crowdsourced color judgments provided by Paik et al. (2021). We then use our dataset to evaluate pretrained unimodal models and multimodal models. Our results indicate that multimodal models better reconstruct attribute distributions, but are still subject to reporting bias. Moreover, increasing model size does not enhance performance, suggesting that the key to visual commonsense lies in the data.
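The abstract mentions validating the grounded color data by how well it correlates with crowdsourced color judgments. As a minimal sketch of that kind of agreement check (not the paper's actual code; the color names and probability values below are hypothetical), one can compute a Spearman rank correlation between a model's predicted color distribution for a subject and human judgments:

```python
# Illustrative sketch: rank correlation between a predicted color
# distribution and (hypothetical) crowdsourced judgments for "banana".
# All numbers here are made up for demonstration.

def ranks(values):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical distributions over colors for the subject "banana"
colors = ["yellow", "green", "brown", "purple"]
model_probs = [0.55, 0.25, 0.15, 0.05]
human_probs = [0.60, 0.30, 0.08, 0.02]

rho = spearman(model_probs, human_probs)  # 1.0: identical rank orderings
```

A higher rho over many subjects indicates that the distribution better matches human judgments; in practice a library routine such as `scipy.stats.spearmanr` would typically be used instead of a hand-rolled implementation.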
