Title
Online Exemplar Fine-Tuning for Image-to-Image Translation
Authors
Abstract
Existing techniques for exemplar-based image-to-image translation with deep convolutional neural networks (CNNs) generally require a training phase to optimize the network parameters on domain-specific and task-specific benchmarks, and thus have limited applicability and generalization ability. In this paper, we propose, for the first time, a novel framework to solve exemplar-based translation through online optimization given an input image pair, called online exemplar fine-tuning (OEFT), in which we fine-tune off-the-shelf, general-purpose networks on the input image pair itself. We design two sub-networks, namely correspondence fine-tuning and multiple GAN inversion, and optimize their network parameters and latent codes, starting from the pre-trained ones, with well-defined loss functions. Our framework does not require an offline training phase, which has been the main challenge of existing methods, but instead adapts pre-trained networks through online optimization. Experimental results show that our framework generalizes well to unseen image pairs and even outperforms state-of-the-art methods that require an intensive training phase.
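The core idea of the abstract — jointly refining pre-trained network parameters and latent codes on a single input pair at test time, rather than training offline on a benchmark — can be sketched with a toy example. This is not the authors' code: the "generator" here is a hypothetical linear map G(z) = Wz, standing in for a pre-trained network, and the loss is a plain L2 reconstruction term standing in for the paper's well-defined loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_fine_tune(W, target, steps=1000, lr=0.01):
    """Online optimization in the spirit of OEFT (toy sketch):
    jointly update the pre-trained parameters W and a latent code z
    by gradient descent on ||W @ z - target||^2, for ONE input only
    -- no offline training set is involved."""
    z = rng.standard_normal(W.shape[1])    # latent code, random init
    for _ in range(steps):
        residual = W @ z - target          # reconstruction error
        grad_z = W.T @ residual            # gradient w.r.t. latent code
        grad_W = np.outer(residual, z)     # gradient w.r.t. parameters
        z -= lr * grad_z
        W -= lr * grad_W
    return W, z

W_pre = rng.standard_normal((8, 4))        # stands in for pre-trained weights
target = rng.standard_normal(8)            # one flattened "image" to match
W_ft, z = online_fine_tune(W_pre.copy(), target)
loss = float(np.sum((W_ft @ z - target) ** 2))
print(f"final reconstruction loss: {loss:.6f}")
```

The point of the sketch is the control flow, not the model: both the parameters and the latent code start from given values and are adapted per input, which is what distinguishes online fine-tuning from methods that freeze the network after an intensive offline training phase.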