Paper Title
Fully Convolutional Networks for Panoptic Segmentation
Paper Authors
Paper Abstract
In this paper, we present a conceptually simple, strong, and efficient framework for panoptic segmentation, called Panoptic FCN. Our approach aims to represent and predict foreground things and background stuff in a unified fully convolutional pipeline. In particular, Panoptic FCN encodes each object instance or stuff category into a specific kernel weight with the proposed kernel generator and produces the prediction by convolving the high-resolution feature directly. With this approach, instance-aware and semantically consistent properties for things and stuff can be respectively satisfied in a simple generate-kernel-then-segment workflow. Without extra boxes for localization or instance separation, the proposed approach outperforms previous box-based and -free models with high efficiency on COCO, Cityscapes, and Mapillary Vistas datasets with single scale input. Our code is made publicly available at https://github.com/Jia-Research-Lab/PanopticFCN.
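To make the "generate-kernel-then-segment" workflow described in the abstract concrete, here is a minimal sketch in PyTorch. It assumes a kernel generator that predicts one weight vector per thing instance or stuff category and then applies each predicted vector as a 1x1 dynamic convolution over a shared high-resolution feature map. The module and function names (`KernelGenerator`, `segment_with_kernels`), the tensor shapes, and the way positions are supplied are illustrative assumptions, not the authors' implementation.

```python
# Sketch of generate-kernel-then-segment: predict per-instance/per-category
# kernel weights, then convolve them with a shared high-resolution feature.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelGenerator(nn.Module):
    """Predicts a kernel weight vector at each selected spatial position."""

    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        # Small conv head mapping every spatial position to a kernel vector.
        self.kernel_head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, embed_dim, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); positions: (N, 3) rows of (batch, y, x) indices
        # marking object centers or stuff regions (assumed already localized).
        kernel_map = self.kernel_head(feat)            # (B, D, H, W)
        b, y, x = positions[:, 0], positions[:, 1], positions[:, 2]
        return kernel_map[b, :, y, x]                  # (N, D), one kernel each


def segment_with_kernels(kernels: torch.Tensor, hires_feat: torch.Tensor) -> torch.Tensor:
    """Convolve each generated kernel with the high-resolution feature.

    kernels: (N, D), one kernel per instance or stuff category.
    hires_feat: (1, D, H, W), shared encoded feature for one image.
    Returns (N, H, W) mask logits, one map per kernel.
    """
    n, d = kernels.shape
    weight = kernels.view(n, d, 1, 1)       # treat each kernel as a 1x1 filter
    return F.conv2d(hires_feat, weight)[0]  # dynamic convolution -> (N, H, W)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 50, 50)        # backbone/FPN feature (toy sizes)
    hires = torch.randn(1, 32, 200, 200)     # high-resolution feature to segment
    gen = KernelGenerator(in_channels=64, embed_dim=32)
    # Pretend two thing instances and one stuff region were localized here.
    pos = torch.tensor([[0, 10, 12], [0, 30, 5], [0, 25, 40]])
    kernels = gen(feat, pos)                 # (3, 32)
    masks = segment_with_kernels(kernels, hires)
    print(masks.shape)                       # torch.Size([3, 200, 200])
```

Because each mask comes from its own generated kernel, thing predictions stay instance-aware, while kernels tied to a stuff category produce a single semantically consistent map; no bounding boxes are needed for localization or instance separation in this sketch.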