Paper Title
Self-supervised Character-to-Character Distillation for Text Recognition
Paper Authors
Paper Abstract
When handling complicated text images (e.g., irregular structures, low resolution, heavy occlusion, and uneven illumination), existing supervised text recognition methods are data-hungry. Although these methods employ large-scale synthetic text images to reduce the dependence on annotated real images, the domain gap still limits the recognition performance. Therefore, exploring robust text feature representations on unlabeled real images by self-supervised learning is a promising solution. However, existing self-supervised text recognition methods conduct sequence-to-sequence representation learning by roughly splitting the visual features along the horizontal axis, which limits the flexibility of the augmentations, as large geometry-based augmentations may lead to sequence-to-sequence feature inconsistency. Motivated by this, we propose a novel self-supervised Character-to-Character Distillation method, CCD, which enables versatile augmentations to facilitate general text representation learning. Specifically, we delineate the character structures of unlabeled real images by designing a self-supervised character segmentation module. Following this, using the transformation matrix between two augmented views of an image, CCD easily enriches the diversity of local characters while keeping their pairwise alignment under flexible augmentations. Experiments demonstrate that CCD achieves state-of-the-art results, with average performance gains of 1.38% in text recognition, 1.7% in text segmentation, and 0.24 dB (PSNR) / 0.0321 (SSIM) in text super-resolution. Code is available at https://github.com/TongkunGuan/CCD.
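The character-to-character alignment idea described in the abstract can be sketched as follows: the same affine augmentation matrix that warps an image view also warps its character segmentation mask, so per-character features pooled from the two views stay in pairwise correspondence and can be matched with a distillation loss. This is a minimal illustrative numpy sketch under assumed shapes and function names, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def affine_warp_points(points, M):
    """Warp (n, 2) points with a 2x3 affine matrix, as an augmentation
    would; the same M relates character locations across the two views."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return pts @ M.T

def char_pooled_features(feat_map, char_mask):
    """Average-pool a (H, W, C) feature map inside each character region
    of an integer mask (0 = background), giving one vector per character."""
    chars = []
    for cid in np.unique(char_mask):
        if cid == 0:
            continue
        ys, xs = np.nonzero(char_mask == cid)
        chars.append(feat_map[ys, xs].mean(axis=0))
    return np.stack(chars)

def char_to_char_loss(f1, f2):
    """Cosine-distance distillation loss between pairwise-aligned
    character features from the two augmented views."""
    f1 = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2 = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(f1 * f2, axis=1)))
```

Because both views are warped by a known matrix, large geometric augmentations no longer break the correspondence that a horizontal-split, sequence-to-sequence scheme relies on.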