Paper Title
StyleFlow For Content-Fixed Image to Image Translation
Paper Authors
Abstract
Image-to-image (I2I) translation is a challenging topic in computer vision. We divide this problem into three tasks: strongly constrained translation, normally constrained translation, and weakly constrained translation. The constraint here indicates the extent to which the content or semantic information in the original image is preserved. Although previous approaches have achieved good performance on weakly constrained tasks, they fail to fully preserve the content in both strongly and normally constrained tasks, including photo-realism synthesis, style transfer, and colorization. To achieve content-preserving transfer in strongly and normally constrained tasks, we propose StyleFlow, a new I2I translation model that consists of normalizing flows and a novel Style-Aware Normalization (SAN) module. With its invertible network structure, StyleFlow first projects input images into a deep feature space in the forward pass, while the backward pass utilizes the SAN module to perform content-fixed feature transformation and then projects back to image space. Our model supports both image-guided translation and multi-modal synthesis. We evaluate our model on several I2I translation benchmarks, and the results show that the proposed model has advantages over previous methods in both strongly constrained and normally constrained tasks.
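The pipeline described above (an invertible forward pass into feature space, a style-aware feature transformation, then the inverse pass back to image space) can be sketched generically. The snippet below is a minimal illustration only, not the paper's actual layers: `scale_fn`/`shift_fn` stand in for the flow's learned networks, and `style_aware_norm` is a hypothetical AdaIN-style normalization used as a stand-in for the SAN module, whose exact form is not given in the abstract.

```python
import numpy as np

def coupling_forward(x, scale_fn, shift_fn):
    # Affine coupling layer (as in normalizing flows): split the features,
    # keep the first half, and affinely transform the second half
    # conditioned on the first. This map is invertible by construction.
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(scale_fn(x1)) + shift_fn(x1)
    return np.concatenate([x1, y2])

def coupling_inverse(y, scale_fn, shift_fn):
    # Exact inverse of coupling_forward: undo the shift, then the scale.
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift_fn(y1)) * np.exp(-scale_fn(y1))
    return np.concatenate([y1, x2])

def style_aware_norm(content_feat, style_feat, eps=1e-5):
    # Hypothetical SAN stand-in (AdaIN-like): re-normalize the content
    # feature statistics to match the style feature statistics, which
    # changes appearance while leaving the spatial content arrangement
    # of the (whitened) features intact.
    c_mu, c_std = content_feat.mean(), content_feat.std() + eps
    s_mu, s_std = style_feat.mean(), style_feat.std() + eps
    return (content_feat - c_mu) / c_std * s_std + s_mu

if __name__ == "__main__":
    # Toy "networks" conditioning the coupling on the untouched half.
    scale_fn = lambda h: 0.1 * h
    shift_fn = lambda h: h + 0.5

    content = np.linspace(-1.0, 1.0, 8)   # stand-in for a content image
    style = np.sin(np.linspace(0.0, 3.0, 8))  # stand-in for a style image

    # Forward pass: project both images into the deep feature space.
    z_c = coupling_forward(content, scale_fn, shift_fn)
    z_s = coupling_forward(style, scale_fn, shift_fn)

    # Content-fixed feature transformation, then the backward pass
    # projects the transformed features back to image space.
    z_t = style_aware_norm(z_c, z_s)
    out = coupling_inverse(z_t, scale_fn, shift_fn)
    print(out.shape)
```

Because the coupling layer is exactly invertible, running `coupling_inverse(coupling_forward(x, ...), ...)` recovers `x` up to floating-point error; this is the property that lets the same network serve as both encoder and decoder, as the abstract describes.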