Title
ContourRend: A Segmentation Method for Improving Contours by Rendering
Authors
Abstract
A good object segmentation should contain clear contours and complete regions. However, mask-based segmentation cannot handle contour features well on a coarse prediction grid, which causes blurry edges. Contour-based segmentation provides contours directly but misses contour details. To obtain fine contours, we propose a segmentation method named ContourRend, which adopts a contour renderer to refine segmentation contours. We implement our method on a segmentation model based on a graph convolutional network (GCN). For the single-object segmentation task on the Cityscapes dataset, the GCN-based segmentation contour is used to generate a contour of a single object; our contour renderer then focuses on the pixels around the contour and predicts their categories at high resolution. By rendering the contour result, our method reaches 72.41% mean intersection over union (IoU) and surpasses the Polygon-GCN baseline by 1.22%.
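The reported 72.41% score is a mean intersection over union (IoU) across objects. As a reference for how such a score is typically computed, here is a minimal sketch of mean IoU over binary object masks; the function and parameter names are illustrative assumptions, not taken from the paper's codebase.

```python
import numpy as np

def mean_iou(pred_masks, gt_masks):
    """Mean IoU over pairs of binary masks (illustrative sketch only).

    pred_masks, gt_masks: sequences of same-shaped binary arrays,
    one pair per object. These names are assumptions for this example.
    """
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        pred = np.asarray(pred).astype(bool)
        gt = np.asarray(gt).astype(bool)
        intersection = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # Convention: two empty masks count as a perfect match.
        ious.append(intersection / union if union > 0 else 1.0)
    return float(np.mean(ious))

# Example: prediction covers two pixels, ground truth one of them.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(mean_iou([pred], [gt]))  # → 0.5
```

The per-object averaging here matches a single-object evaluation setting, where each predicted mask is compared against its own ground-truth instance.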