Paper Title
One-stage Video Instance Segmentation: From Frame-in Frame-out to Clip-in Clip-out
Paper Authors
Paper Abstract
Many video instance segmentation (VIS) methods partition a video sequence into individual frames to detect and segment objects frame by frame. However, such a frame-in frame-out (FiFo) pipeline is ineffective at exploiting temporal information. Based on the fact that adjacent frames in a short clip are highly coherent in content, we propose to extend the one-stage FiFo framework to a clip-in clip-out (CiCo) one, which performs VIS clip by clip. Specifically, we stack the FPN features of all frames in a short video clip to build a spatio-temporal feature cube, and replace the 2D conv layers in the prediction heads and the mask branch with 3D conv layers, forming clip-level prediction heads (CPH) and clip-level mask heads (CMH). The clip-level masks of an instance can then be generated by feeding its box-level predictions from CPH and clip-level features from CMH into a small fully convolutional network. A clip-level segmentation loss is proposed to ensure that the generated instance masks are temporally coherent within the clip. The proposed CiCo strategy is free of inter-frame alignment and can be easily embedded into existing FiFo based VIS approaches. To validate the generality and effectiveness of our CiCo strategy, we apply it to two representative FiFo methods, Yolact \cite{bolya2019yolact} and CondInst \cite{tian2020conditional}, resulting in two new one-stage VIS models, namely CiCo-Yolact and CiCo-CondInst, which achieve 37.1/37.3\%, 35.2/35.4\% and 17.2/18.0\% mask AP with the ResNet50 backbone, and 41.8/41.4\%, 38.0/38.9\% and 18.0/18.2\% mask AP with the Swin Transformer tiny backbone on the YouTube-VIS 2019, YouTube-VIS 2021 and OVIS validation sets, respectively, setting new state-of-the-art results. Code and video demos of CiCo can be found at \url{https://github.com/MinghanLi/CiCo}.
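The central CiCo operation described above, stacking per-frame FPN features into a spatio-temporal feature cube that 3D conv layers then consume in place of 2D ones, can be sketched at the shape level. The following is an illustrative NumPy sketch, not the authors' implementation; the clip length `T`, channel count `C`, and spatial size `H x W` are hypothetical values.

```python
import numpy as np

# Hypothetical sizes: a clip of T frames, FPN features with C channels at H x W.
T, C, H, W = 3, 256, 48, 80

# Per-frame FPN features, each of shape (C, H, W), as a FiFo pipeline produces
# them frame by frame.
frame_features = [np.random.rand(C, H, W).astype(np.float32) for _ in range(T)]

# CiCo stacks the frames along a new temporal axis into a spatio-temporal
# feature cube of shape (C, T, H, W) -- the layout a 3D conv layer operates on,
# so the clip-level prediction heads (CPH) and mask heads (CMH) can convolve
# jointly over time and space instead of treating each frame independently.
feature_cube = np.stack(frame_features, axis=1)

print(feature_cube.shape)  # (256, 3, 48, 80)
```

Because all frames of the clip pass through the heads as one tensor, no explicit inter-frame alignment step is needed, which is what lets CiCo drop into existing FiFo models.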