Paper Title
VITA: Video Instance Segmentation via Object Token Association
Paper Authors
Paper Abstract
We introduce a novel paradigm for offline Video Instance Segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using the condensed information, VITA achieves state-of-the-art results on VIS benchmarks with a ResNet-50 backbone: 49.8 AP and 45.7 AP on YouTube-VIS 2019 & 2021, and 19.6 AP on OVIS. Moreover, thanks to its object token-based structure that is disjoint from the backbone features, VITA shows several practical advantages that previous offline VIS methods have not explored: handling long and high-resolution videos with a common GPU, and freezing a frame-level detector trained on the image domain. Code is available at https://github.com/sukjunhwang/VITA.
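To make the object-token idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch of associating frame-level object tokens into video-level object embeddings without touching backbone feature maps. It is not the official VITA implementation (see the repository linked above); the class name `ObjectTokenAssociator`, the tensor shapes, and all hyperparameters are assumptions chosen for clarity.

```python
# Illustrative sketch only, NOT the official VITA code (see the linked repository).
# Assumes a frozen image-level detector has already produced per-frame object tokens.
import torch
import torch.nn as nn


class ObjectTokenAssociator(nn.Module):
    """Associates per-frame object tokens into video-level object embeddings."""

    def __init__(self, dim: int = 256, num_video_queries: int = 100,
                 num_layers: int = 3, num_heads: int = 8):
        super().__init__()
        # Encoder: lets object tokens from different frames exchange information.
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                               batch_first=True)
        self.token_encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        # Decoder: learnable video-level queries attend to the encoded object tokens.
        dec_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads,
                                               batch_first=True)
        self.video_decoder = nn.TransformerDecoder(dec_layer, num_layers=num_layers)
        self.video_queries = nn.Embedding(num_video_queries, dim)

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_frames * tokens_per_frame, dim), i.e. object
        # tokens from an image-level detector flattened over the temporal axis.
        # No spatio-temporal backbone feature maps are consumed here.
        encoded = self.token_encoder(frame_tokens)
        queries = self.video_queries.weight.unsqueeze(0).expand(
            frame_tokens.size(0), -1, -1)
        # Each video-level query aggregates evidence for one object across frames.
        return self.video_decoder(queries, encoded)


# Usage: 2 clips, 8 frames, 20 object tokens per frame, 256-dim tokens.
tokens = torch.randn(2, 8 * 20, 256)
video_embeddings = ObjectTokenAssociator()(tokens)  # -> (2, 100, 256)
```

Because only these condensed tokens (rather than dense feature maps) are carried across frames, the memory cost grows with the number of objects per frame instead of the frame resolution, which is what makes long or high-resolution videos tractable on a common GPU.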