Paper Title
Adaptively-Realistic Image Generation from Stroke and Sketch with Diffusion Model
Paper Authors
Paper Abstract
Generating images from hand-drawings is a crucial and fundamental task in content creation. The translation is difficult because there exist infinite possibilities and different users usually expect different outcomes. Therefore, we propose a unified framework, based on diffusion models, that supports three-dimensional control over image synthesis from sketches and strokes. Users can decide not only the level of faithfulness to the input strokes and sketches but also the degree of realism, since user inputs are usually not consistent with real images. Qualitative and quantitative experiments demonstrate that our framework achieves state-of-the-art performance while providing the flexibility to generate customized images with control over shape, color, and realism. Moreover, our method enables applications such as editing real images, generation from partial sketches and strokes, and multi-domain multi-modal synthesis.
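To give a feel for how a single knob can trade realism against faithfulness in a diffusion-based pipeline, here is a minimal toy sketch of the standard partial-noising idea (inject noise into the user input up to an intermediate timestep, then run the reverse process). This is an illustration under stated assumptions, not the paper's implementation: `toy_denoiser` is a hypothetical stand-in for a trained denoising network, and the schedule is deliberately simplistic.

```python
import numpy as np

def toy_denoiser(x, t):
    # Hypothetical stand-in for a trained diffusion model's denoising
    # step: gently pulls samples toward a zero-mean "realistic" prior.
    return x * (1.0 - 0.1 * t)

def stroke_to_image(stroke, realism, steps=50, seed=0):
    """Partially noise the user stroke, then denoise it back.

    realism in [0, 1]: 0 stays faithful to the input stroke,
    1 diffuses it the furthest and so departs from it the most.
    """
    rng = np.random.default_rng(seed)
    t0 = int(realism * steps)  # how deep into the forward process to go
    noise = rng.standard_normal(stroke.shape)
    x = np.sqrt(1.0 - realism) * stroke + np.sqrt(realism) * noise
    for t in range(t0, 0, -1):  # iterative reverse (denoising) process
        x = toy_denoiser(x, t / steps)
    return x

stroke = np.ones((4, 4))  # a crude stand-in for a user "stroke" image
faithful = stroke_to_image(stroke, realism=0.0)
realistic = stroke_to_image(stroke, realism=0.9)
# Low realism leaves the input untouched; high realism moves away from it.
print(np.abs(faithful - stroke).mean() < np.abs(realistic - stroke).mean())
```

With `realism=0.0` no noise is injected and the stroke is returned as-is; with `realism=0.9` the input is heavily diffused and the reverse process dominates, so the output deviates from the stroke. This mirrors the abstract's point that users choose where on the faithfulness-realism axis the result should land.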