Paper Title

Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News

Authors

Reuben Tan, Bryan A. Plummer, Kate Saenko

Abstract

Large-scale dissemination of disinformation online, intended to mislead or deceive the general population, is a major societal problem. Rapid progress in image, video, and natural language generative models has only exacerbated this situation and intensified our need for effective defense mechanisms. While existing approaches have been proposed to defend against neural fake news, they are generally constrained to the very limited setting where articles have only text and metadata such as the title and authors. In this paper, we introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions. To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of 4 different types of generated articles, and conduct a series of human user studies based on it. In addition to the valuable insights gleaned from our user studies, we provide a relatively effective approach based on detecting visual-semantic inconsistencies, which will serve as an effective first line of defense and a useful reference for future work in defending against machine-generated disinformation.
