Paper Title

CooGAN: A Memory-Efficient Framework for High-Resolution Facial Attribute Editing

Authors

Xuanhong Chen, Bingbing Ni, Naiyuan Liu, Ziang Liu, Yiliu Jiang, Loc Truong, Qi Tian

Abstract

In contrast to the great success of memory-consuming face editing methods at low resolution, manipulating high-resolution (HR) facial images, i.e., typically larger than 768² pixels, with very limited memory remains challenging. This is due to 1) the intractably huge demand for memory and 2) inefficient multi-scale feature fusion. To address these issues, we propose a novel pixel translation framework called Cooperative GAN (CooGAN) for HR facial image editing. This framework features a local path for fine-grained local facial patch generation (i.e., patch-level HR, low memory) and a global path for global low-resolution (LR) facial structure monitoring (i.e., image-level LR, low memory), which largely reduces memory requirements. Both paths work in a cooperative manner under a local-to-global consistency objective (i.e., for smooth stitching). In addition, we propose a lighter selective transfer unit for more efficient multi-scale feature fusion, yielding higher-fidelity facial attribute manipulation. Extensive experiments on CelebA-HQ demonstrate both the memory efficiency and the high image generation quality of the proposed framework.
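To make the two-path idea in the abstract concrete, below is a minimal structural sketch (not the authors' code): a global path edits a downsampled LR copy of the face, a local path edits full-resolution patches tile by tile, and a local-to-global consistency term pushes the downsampled, stitched HR result to agree with the LR output. All names, shapes, and hyperparameters here (TinyTranslator, edit_hr_face, lr_size, patch, the placeholder attribute dimension) are illustrative assumptions; the actual CooGAN generators, selective transfer unit, and adversarial training losses are not shown.

```python
# Minimal sketch of a two-path (local HR patches + global LR image) editor
# with a local-to-global consistency loss, assuming a PyTorch setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyTranslator(nn.Module):
    """Placeholder encoder-decoder standing in for either path's generator."""
    def __init__(self, in_ch=3, attr_dim=13, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + attr_dim, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, attr):
        # Broadcast the target attribute vector over the spatial dimensions.
        a = attr[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        return self.net(torch.cat([x, a], dim=1))


def edit_hr_face(hr_image, attr, lr_size=128, patch=256):
    """Run the global path on an LR copy and the local path patch by patch,
    so peak activation memory stays at patch level, not full-image level."""
    global_path = TinyTranslator()
    local_path = TinyTranslator()

    # Global path: image-level LR structure monitoring.
    lr_input = F.interpolate(hr_image, size=(lr_size, lr_size),
                             mode='bilinear', align_corners=False)
    lr_output = global_path(lr_input, attr)

    # Local path: patch-level HR detail, processed tile by tile.
    _, _, h, w = hr_image.shape
    hr_output = torch.zeros_like(hr_image)
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            tile = hr_image[:, :, top:top + patch, left:left + patch]
            hr_output[:, :, top:top + patch, left:left + patch] = local_path(tile, attr)

    # Local-to-global consistency: the stitched HR result, downsampled,
    # should match the global path's LR output (encourages smooth stitching).
    consistency = F.l1_loss(
        F.interpolate(hr_output, size=(lr_size, lr_size),
                      mode='bilinear', align_corners=False),
        lr_output)
    return hr_output, consistency


if __name__ == "__main__":
    img = torch.rand(1, 3, 768, 768) * 2 - 1   # toy 768x768 face in [-1, 1]
    target_attr = torch.rand(1, 13)            # toy target attribute vector
    out, loss = edit_hr_face(img, target_attr)
    print(out.shape, float(loss))
```

The point of this layout is that the HR image is never pushed through a generator in one piece: only one patch's activations (plus the small LR image) live in memory at a time, which is what the abstract's "patch-level HR, low memory / image-level LR, low memory" split refers to.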
