Paper Title

CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

Paper Authors

Ming Ding, Wendi Zheng, Wenyi Hong, Jie Tang

Paper Abstract

The development of transformer-based text-to-image models is impeded by their slow generation and complexity for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel auto-regressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the Cross-Modal General Language Model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared to the concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images.
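The phrase "local parallel auto-regressive generation" refers to refilling image tokens in parallel within local windows rather than decoding one token at a time. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released implementation; the function name local_parallel_ar_refine, the window/iteration parameters, and the stand-in model interface are all assumptions made here for illustration.

```python
import torch

def local_parallel_ar_refine(tokens, model, grid=64, window=4, iters=4):
    """Hypothetical sketch of local parallel auto-regressive refinement.

    Low-resolution image tokens are assumed to have been upsampled to a
    `grid` x `grid` token map. In each pass, every position sharing the same
    offset (dy, dx) inside its local `window` x `window` block is re-predicted
    in parallel, so the number of forward passes is bounded by window*window
    (here only `iters` passes are run as a toy example) instead of one pass
    per token. `model(tokens)` is assumed to return per-position logits over
    the token codebook.
    """
    tokens = tokens.view(1, grid, grid)
    for step in range(iters):
        # Offset shared by all positions updated in this pass.
        dy, dx = divmod(step % (window * window), window)
        mask = torch.zeros(grid, grid, dtype=torch.bool)
        mask[dy::window, dx::window] = True
        logits = model(tokens.view(1, -1))            # (1, grid*grid, vocab)
        pred = logits.argmax(dim=-1).view(1, grid, grid)
        tokens = torch.where(mask, pred, tokens)      # refill masked slots in parallel
    return tokens.view(1, -1)

# Toy usage with a stand-in "model" that returns random logits.
vocab = 8192
dummy = lambda toks: torch.randn(1, toks.shape[1], vocab)
low_res = torch.randint(0, vocab, (1, 64 * 64))
refined = local_parallel_ar_refine(low_res, dummy)
```

In the paper's setting such parallel refilling is paired with a hierarchical pipeline (a base CogLM generating a low-resolution token map, then super-resolution modules), which is what makes high-resolution generation fast; the sketch above only illustrates the windowed parallel-update step under the stated assumptions.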
