Paper Title

Trusted Media Challenge Dataset and User Study

Authors

Chen, Weiling; Chua, Sheng Lun Benjamin; Winkler, Stefan; Ng, See-Kiong

Abstract

The development of powerful deep learning technologies has brought negative effects to both society and individuals. One such issue is the emergence of fake media. To tackle this issue, we organized the Trusted Media Challenge (TMC) to explore how Artificial Intelligence (AI) technologies can be leveraged to combat fake media. To enable further research, we are releasing the dataset that we prepared for the TMC, consisting of 4,380 fake and 2,563 real videos, with various video and/or audio manipulation methods employed to produce different types of fake media. All videos in the TMC dataset are accompanied by audio and have a minimum resolution of 360p. The videos vary in duration, background, and illumination, and may contain perturbations that mimic transmission errors and compression. We have also carried out a user study to demonstrate the quality of the TMC dataset and to compare the performance of humans and AI models. The results show that the TMC dataset can fool human participants in many cases, while the winning AI models of the Trusted Media Challenge outperformed humans. The TMC dataset is available for research purposes upon request via [email protected].
