Paper Title
SCS-Co: Self-Consistent Style Contrastive Learning for Image Harmonization
Paper Authors
Paper Abstract
Image harmonization aims to achieve visual consistency in composite images by adapting the foreground to make it compatible with the background. However, existing methods typically use only the real image as a positive sample to guide training, and at most introduce the corresponding composite image as a single negative sample for an auxiliary constraint. This yields limited distortion knowledge and leaves an overly large solution space, so the generated harmonized image can remain distorted. Moreover, none of these methods jointly constrains both the foreground self-style and the foreground-background style consistency, which exacerbates the problem. In addition, the recent region-aware adaptive instance normalization achieves great success but considers only the global background feature distribution, which biases the aligned foreground feature distribution. To address these issues, we propose a self-consistent style contrastive learning scheme (SCS-Co). By dynamically generating multiple negative samples, SCS-Co learns richer distortion knowledge and effectively regularizes the generated harmonized image in the style representation space from two aspects, foreground self-style consistency and foreground-background style consistency, leading to more photorealistic visual results. In addition, we propose a background-attentional adaptive instance normalization (BAIN) that computes an attention-weighted background feature distribution according to foreground-background feature similarity. Experiments demonstrate the superiority of our method over other state-of-the-art methods in both quantitative comparison and visual analysis.
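As a rough illustration of the style contrastive idea, below is a minimal PyTorch sketch of an InfoNCE-style loss in a style representation space. It assumes the style of an image is summarized by the channel-wise mean and standard deviation of deep features (e.g., from a pretrained encoder); the names `style_vector` and `style_contrastive_loss`, the temperature value, and the way negatives are passed in are illustrative assumptions, and the paper's dynamic negative-sample generation and exact loss form may differ.

```python
import torch
import torch.nn.functional as F

def style_vector(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Channel-wise mean/std style summary of a feature map (B, C, H, W).

    For harmonization, statistics could instead be restricted to the
    foreground region via a mask; this sketch uses the whole map.
    """
    mu = feat.mean(dim=(2, 3))                   # (B, C)
    sigma = (feat.var(dim=(2, 3)) + eps).sqrt()  # (B, C)
    return torch.cat([mu, sigma], dim=1)         # (B, 2C)

def style_contrastive_loss(harm_feat, real_feat, neg_feats, tau=0.07):
    """InfoNCE-style loss pulling the harmonized image's style toward the
    real image (positive) and away from multiple negatives.

    harm_feat: features of the harmonized image (anchor), (B, C, H, W)
    real_feat: features of the real image (positive), (B, C, H, W)
    neg_feats: list of K feature maps from dynamically generated negatives
    """
    anchor = F.normalize(style_vector(harm_feat), dim=1)      # (B, 2C)
    pos = F.normalize(style_vector(real_feat), dim=1)         # (B, 2C)
    negs = torch.stack([F.normalize(style_vector(n), dim=1)
                        for n in neg_feats], dim=1)           # (B, K, 2C)

    pos_sim = (anchor * pos).sum(dim=1, keepdim=True) / tau   # (B, 1)
    neg_sim = torch.einsum('bc,bkc->bk', anchor, negs) / tau  # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1)             # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)                # positive at index 0
    return F.cross_entropy(logits, labels)
```

Under the scheme described in the abstract, `neg_feats` would be refreshed during training with dynamically generated negatives rather than drawn from a fixed set.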
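Likewise, a hedged sketch of the attention-weighted statistics idea behind BAIN: instead of aligning the foreground to the global background mean and variance, each foreground location attends over background locations and receives its own attention-weighted background statistics. The 1x1 projections, scaled dot-product attention, and mask handling below are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BAIN(nn.Module):
    """Minimal sketch of background-attentional adaptive instance normalization."""

    def __init__(self, channels: int, eps: float = 1e-5):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # query projection (assumed)
        self.k = nn.Conv2d(channels, channels, 1)  # key projection (assumed)
        self.eps = eps

    def forward(self, feat: torch.Tensor, fg_mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); fg_mask: (B, 1, H, W), 1 on the foreground.
        # Assumes every image contains at least one background pixel.
        B, C, H, W = feat.shape
        q = self.q(feat).flatten(2).transpose(1, 2)       # (B, HW, C)
        k = self.k(feat).flatten(2)                       # (B, C, HW)
        v = feat.flatten(2).transpose(1, 2)               # (B, HW, C)

        attn = torch.bmm(q, k) / (C ** 0.5)               # (B, HW, HW)
        bg = (1 - fg_mask).flatten(2)                     # (B, 1, HW)
        attn = attn.masked_fill(bg < 0.5, float('-inf'))  # keep background keys only
        attn = attn.softmax(dim=-1)

        # Attention-weighted background statistics per location.
        mu = torch.bmm(attn, v)                           # (B, HW, C)
        var = (torch.bmm(attn, v ** 2) - mu ** 2).clamp_min(0)
        sigma = (var + self.eps).sqrt()

        # Foreground's own instance statistics over masked pixels.
        m = fg_mask.flatten(2).transpose(1, 2)            # (B, HW, 1)
        n = m.sum(1, keepdim=True).clamp_min(1.0)         # (B, 1, 1)
        fg_mu = (v * m).sum(1, keepdim=True) / n          # (B, 1, C)
        fg_var = ((v - fg_mu) ** 2 * m).sum(1, keepdim=True) / n
        fg_sigma = (fg_var + self.eps).sqrt()

        # Normalize the foreground, rescale with attention-weighted bg stats.
        out = ((v - fg_mu) / fg_sigma) * sigma + mu       # (B, HW, C)
        out = out.transpose(1, 2).reshape(B, C, H, W)
        return feat * (1 - fg_mask) + out * fg_mask       # background unchanged
```

The key difference from region-aware AdaIN, as the abstract describes it, is that the background statistics here vary per foreground location according to feature similarity, rather than being one global pair of moments.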