Paper Title
GraphVid: It Only Takes a Few Nodes to Understand a Video
Paper Authors
Paper Abstract
We propose a concise representation of videos that encodes perceptually meaningful features into graphs. With this representation, we aim to exploit the large amount of redundancy in videos and save computation. First, we construct a superpixel-based graph representation of a video by considering superpixels as graph nodes and creating spatial and temporal connections between adjacent superpixels. Then, we leverage Graph Convolutional Networks to process this representation and predict the desired output. As a result, we are able to train models with far fewer parameters, which translates into shorter training periods and reduced computational resource requirements. A comprehensive experimental study on the publicly available datasets Kinetics-400 and Charades shows that the proposed method is highly cost-effective and requires only limited commodity hardware during both training and inference. It reduces computational requirements 10-fold while achieving results comparable to state-of-the-art methods. We believe the proposed approach is a promising direction that could open the door to solving video understanding more efficiently and enable more resource-limited users to thrive in this research field.
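To make the graph-construction step concrete, below is a minimal sketch, not the paper's implementation: it segments each frame into superpixels with scikit-image's SLIC, adds spatial edges between superpixels that touch within a frame, and adds temporal edges between consecutive frames. The mean-RGB node features, the nearest-centroid temporal matching, and the helper name build_superpixel_graph are all illustrative assumptions.

import numpy as np
from skimage.segmentation import slic

def build_superpixel_graph(frames, n_segments=50):
    """frames: list of (H, W, 3) uint8 arrays; returns node features and a COO edge index."""
    nodes, spatial_edges, temporal_edges = [], [], []
    prev_centroids, prev_offset, offset = None, 0, 0
    for frame in frames:
        # Segment the frame into superpixels; labels run 0..n-1 within the frame.
        labels = slic(frame, n_segments=n_segments, start_label=0)
        n = int(labels.max()) + 1
        # Node feature: mean RGB color of each superpixel (a simple illustrative choice).
        feats = np.zeros((n, 3))
        centroids = np.zeros((n, 2))
        for s in range(n):
            mask = labels == s
            feats[s] = frame[mask].mean(axis=0)
            ys, xs = np.nonzero(mask)
            centroids[s] = [ys.mean(), xs.mean()]
        nodes.append(feats)
        # Spatial edges: superpixels that are horizontally or vertically adjacent.
        for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
            if a != b:
                spatial_edges.append((offset + a, offset + b))
        for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
            if a != b:
                spatial_edges.append((offset + a, offset + b))
        # Temporal edges: link each superpixel to the nearest one (by centroid)
        # in the previous frame, approximating "adjacent across time".
        if prev_centroids is not None:
            for s in range(n):
                d = np.linalg.norm(prev_centroids - centroids[s], axis=1)
                temporal_edges.append((prev_offset + int(d.argmin()), offset + s))
        prev_centroids, prev_offset = centroids, offset
        offset += n
    x = np.concatenate(nodes)                 # (num_nodes, 3) node features
    edges = sorted(set(spatial_edges)) + temporal_edges
    return x, np.array(edges).T               # (2, num_edges) edge index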
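The resulting node features and edge index can then be processed by a small graph network, as the abstract describes. The following is a hedged sketch using PyTorch Geometric's GCNConv and global_mean_pool; the two-layer depth, hidden width, mean pooling, and the class name TinyVideoGCN are assumptions made for illustration, with 400 output classes matching Kinetics-400.

import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class TinyVideoGCN(torch.nn.Module):
    def __init__(self, in_dim=3, hidden=64, num_classes=400):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # Two rounds of message passing over the spatio-temporal edges.
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        # Pool all superpixel nodes of a clip into one vector, then classify.
        return self.head(global_mean_pool(x, batch))

# Usage sketch (a single clip, so every node belongs to batch element 0):
# x, edge_index = build_superpixel_graph(frames)
# logits = TinyVideoGCN()(torch.tensor(x, dtype=torch.float),
#                         torch.tensor(edge_index, dtype=torch.long),
#                         torch.zeros(len(x), dtype=torch.long))

Because the number of superpixel nodes per clip is far smaller than the number of pixels, the parameter count and per-clip compute of such a model stay modest, which is the efficiency argument the abstract makes.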