Paper Title

Improving Adversarial Robustness by Contrastive Guided Diffusion Process

Paper Authors

Yidong Ouyang, Liyan Xie, Guang Cheng

Paper Abstract

Synthetic data generation has become an emerging tool to help improve the adversarial robustness in classification tasks since robust learning requires a significantly larger amount of training samples compared with standard classification tasks. Among various deep generative models, the diffusion model has been shown to produce high-quality synthetic images and has achieved good performance in improving the adversarial robustness. However, diffusion-type methods are typically slow in data generation as compared with other generative models. Although different acceleration techniques have been proposed recently, it is also of great importance to study how to improve the sample efficiency of generated data for the downstream task. In this paper, we first analyze the optimality condition of synthetic distribution for achieving non-trivial robust accuracy. We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness. Thus, we propose the Contrastive-Guided Diffusion Process (Contrastive-DP), which adopts the contrastive loss to guide the diffusion model in data generation. We verify our theoretical results using simulations and demonstrate the good performance of Contrastive-DP on image datasets.
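To make the idea of a contrastive loss guiding a diffusion sampler concrete, here is a minimal sketch that follows the standard classifier-guidance pattern: the DDPM posterior mean is nudged along the negative gradient of an InfoNCE-style energy computed on the current batch of samples, pushing the generated samples apart and thus promoting distinguishability. This is an illustrative assumption, not the paper's exact Contrastive-DP procedure; the names `eps_model`, `embed`, `contrastive_energy`, the schedules `alphas_cumprod`/`betas`, and `guidance_scale` are hypothetical placeholders.

```python
# Hedged sketch: a contrastive-loss-guided reverse-diffusion (DDPM) step.
# Assumptions (not from the paper): `eps_model(x_t, t)` predicts the noise,
# `embed` maps images to feature vectors, and guidance shifts the posterior
# mean by a scaled gradient of a contrastive energy, as in classifier guidance.

import torch
import torch.nn.functional as F

def contrastive_energy(features: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style energy: low when samples in the batch are mutually
    distinguishable (off-diagonal cosine similarities are small)."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                        # pairwise similarities (B, B)
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
    return torch.logsumexp(sim, dim=-1).mean()           # smaller => more spread-out batch

@torch.no_grad()
def guided_ddpm_step(x_t, t, eps_model, embed, alphas_cumprod, betas, guidance_scale=1.0):
    """One reverse-diffusion step (t is an integer timestep) with the mean
    nudged down the gradient of the contrastive energy (hypothetical rule)."""
    a_bar, beta = alphas_cumprod[t], betas[t]
    eps = eps_model(x_t, t)
    # Standard DDPM posterior mean computed from the predicted noise.
    mean = (x_t - beta / torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(1.0 - beta)

    # Contrastive guidance: gradient of the batch energy w.r.t. the current sample.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        energy = contrastive_energy(embed(x_in))
        grad = torch.autograd.grad(energy, x_in)[0]
    mean = mean - guidance_scale * beta * grad            # push generated samples apart

    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta) * noise
```

In such a scheme, the guidance scale would trade off diversity among the generated samples against fidelity to the learned data distribution; the paper's own objective and schedule may differ.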
