Paper Title
Graph CNN for Moving Object Detection in Complex Environments from Unseen Videos
Paper Authors
Paper Abstract
Moving Object Detection (MOD) is a fundamental step for many computer vision applications. MOD becomes very challenging when a video sequence captured from a static or moving camera suffers from challenges such as camouflage, shadows, dynamic backgrounds, and lighting variations, to name a few. Deep learning methods have been successfully applied to MOD with competitive performance. However, to handle the overfitting problem, deep learning methods require a large amount of labeled data, which is laborious to obtain since exhaustive annotations are not always available. Moreover, some MOD deep learning methods show performance degradation on unseen video sequences because training and testing splits of the same sequences are involved during the network learning process. In this work, we pose MOD as a node classification problem using Graph Convolutional Neural Networks (GCNNs). Our algorithm, dubbed GraphMOD-Net, encompasses instance segmentation, background initialization, feature extraction, and graph construction. GraphMOD-Net is tested on unseen videos and outperforms state-of-the-art methods in unsupervised, semi-supervised, and supervised learning on several challenges of the Change Detection 2014 (CDNet2014) and UCSD background subtraction datasets.
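To make the node-classification framing concrete, below is a minimal sketch of a two-layer GCN node classifier built with PyTorch Geometric's GCNConv. It is not the authors' GraphMOD-Net implementation: the node features (standing in for extracted region descriptors), the edge list, the labels, and all dimensions are illustrative assumptions; only the general idea of labeling graph nodes as moving foreground vs. background follows the abstract.

```python
# Minimal sketch, assuming PyTorch and PyTorch Geometric are available.
# Each node represents an image region/segment with a feature descriptor;
# the GCN classifies nodes as background (0) or moving object (1).
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv


class NodeClassifierGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)  # per-node class logits


# Toy graph: 6 nodes with 16-dim descriptors and a handful of edges
# connecting neighboring/similar regions (placeholder values only).
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]], dtype=torch.long)
y = torch.tensor([0, 0, 1, 1, 0, 1])                      # 0 = background, 1 = moving
train_mask = torch.tensor([1, 1, 1, 0, 1, 0], dtype=torch.bool)
data = Data(x=x, edge_index=edge_index, y=y)

model = NodeClassifierGCN(in_dim=16)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Semi-supervised training: the loss uses only the labeled (masked) nodes,
# but message passing propagates information over the whole graph.
model.train()
for _ in range(100):
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)
    loss = F.cross_entropy(logits[train_mask], data.y[train_mask])
    loss.backward()
    optimizer.step()

model.eval()
pred = model(data.x, data.edge_index).argmax(dim=1)        # labels for all nodes
```

In a MOD pipeline along the lines described in the abstract, the placeholder random features would be replaced by descriptors produced by the instance segmentation, background initialization, and feature extraction stages, and the predicted node labels would be mapped back to pixels to form the foreground mask.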