Paper Title
Fully and Weakly Supervised Referring Expression Segmentation with End-to-End Learning
Paper Authors
Paper Abstract
Referring Expression Segmentation (RES), which aims to localize and segment the target according to a given language expression, has drawn increasing attention. Existing methods jointly consider the localization and segmentation steps, relying on fused visual and linguistic features for both. We argue that the conflict between the goals of identifying an object and generating a mask limits RES performance. To solve this problem, we propose a parallel position-kernel-segmentation pipeline that better isolates the localization and segmentation steps and then lets them interact. In our pipeline, linguistic information does not directly contaminate the visual features used for segmentation. Specifically, the localization step localizes the target object in the image based on the referring expression, and the visual kernel obtained from the localization step then guides the segmentation step. This pipeline also enables us to train RES in a weakly-supervised way, where pixel-level segmentation labels are replaced by click annotations on center and corner points. The position head is fully supervised, trained with the click annotations as supervision, while the segmentation head is trained with weakly-supervised segmentation losses. To validate our framework in the weakly-supervised setting, we annotated three RES benchmark datasets (RefCOCO, RefCOCO+ and RefCOCOg) with click annotations. Our method is simple but surprisingly effective, outperforming all previous state-of-the-art RES methods in both fully- and weakly-supervised settings by a large margin. The benchmark code and datasets will be released.
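To make the pipeline concrete, below is a minimal PyTorch sketch of the position-kernel-segmentation idea described in the abstract: a position head sees fused visual and linguistic features and predicts click-supervised heatmaps plus a dynamic kernel, while the segmentation head convolves pure visual features with that kernel, so language influences the mask only through the kernel. The class name, feature dimensions, the three-channel heatmap layout, and the soft center-pooling used to extract the kernel are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionKernelSegmentation(nn.Module):
    """Hypothetical sketch of a parallel position-kernel-segmentation pipeline.

    The position head uses fused visual+linguistic features; the segmentation
    head only ever sees visual features, guided by the predicted kernel.
    """

    def __init__(self, vis_dim=256, lang_dim=256, kernel_dim=256):
        super().__init__()
        # Position head: fuse vision and language, predict heatmaps + kernel.
        self.fuse = nn.Conv2d(vis_dim + lang_dim, vis_dim, kernel_size=1)
        # 3 heatmap channels: one center click + two corner clicks (assumed layout).
        self.heatmap_head = nn.Conv2d(vis_dim, 3, kernel_size=3, padding=1)
        self.kernel_head = nn.Linear(vis_dim, kernel_dim)
        # Segmentation head: projects *pure* visual features, no language input.
        self.seg_proj = nn.Conv2d(vis_dim, kernel_dim, kernel_size=3, padding=1)

    def forward(self, vis_feat, lang_feat):
        # vis_feat: (B, Cv, H, W) visual features from an image backbone.
        # lang_feat: (B, Cl) pooled embedding of the referring expression.
        B, _, H, W = vis_feat.shape
        lang_map = lang_feat[:, :, None, None].expand(-1, -1, H, W)
        fused = F.relu(self.fuse(torch.cat([vis_feat, lang_map], dim=1)))

        # Localization: heatmaps supervised by center/corner click annotations.
        heatmaps = self.heatmap_head(fused)

        # Dynamic kernel: soft-pool the fused feature at the predicted center.
        center_prob = heatmaps[:, 0].flatten(1).softmax(dim=-1)      # (B, H*W)
        ctx = torch.einsum('bn,bcn->bc', center_prob, fused.flatten(2))
        kernel = self.kernel_head(ctx)                               # (B, K)

        # Segmentation: language reaches this branch only through `kernel`.
        seg_feat = self.seg_proj(vis_feat)                           # (B, K, H, W)
        mask_logits = torch.einsum('bc,bchw->bhw', kernel, seg_feat)
        return heatmaps, mask_logits


if __name__ == "__main__":
    model = PositionKernelSegmentation()
    vis = torch.randn(2, 256, 20, 20)
    lang = torch.randn(2, 256)
    heatmaps, mask_logits = model(vis, lang)
    print(heatmaps.shape, mask_logits.shape)  # (2, 3, 20, 20) (2, 20, 20)
```

In this sketch the heatmaps would take a fully-supervised loss against the click points, and the mask logits a weakly-supervised segmentation loss, mirroring the training split the abstract describes.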