Paper Title
Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection
Paper Authors
Paper Abstract
Active speaker detection (ASD) in videos with multiple speakers is a challenging task, as it requires learning effective audio-visual features and spatial-temporal correlations over long temporal windows. In this paper, we present SPELL, a novel spatial-temporal graph learning framework that can solve complex tasks such as ASD. To this end, each person in a video frame is first encoded as a unique node for that frame. Nodes corresponding to a single person across frames are connected to encode their temporal dynamics. Nodes within a frame are also connected to encode inter-person relationships. SPELL thus reduces ASD to a node classification task. Importantly, SPELL is able to reason over long temporal contexts for all nodes without relying on computationally expensive fully connected graph neural networks. Through extensive experiments on the AVA-ActiveSpeaker dataset, we demonstrate that learning graph-based representations can significantly improve active speaker detection performance owing to their explicit spatial and temporal structure. SPELL outperforms all previous state-of-the-art approaches while requiring significantly less memory and computation. Our code is publicly available at https://github.com/SRA2/SPELL.
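The abstract outlines the graph construction at a high level: one node per person per frame, temporal edges linking the same person across frames, and spatial edges linking co-occurring persons within a frame. The following is a minimal, hypothetical sketch (not the authors' released code, which is available at the GitHub link above) of how such a spatial-temporal graph could be assembled with PyTorch Geometric; the `frames` input format, the `build_asd_graph` name, the bidirectional temporal edges, and the fully connected within-frame edges are illustrative assumptions.

```python
import torch
from torch_geometric.data import Data


def build_asd_graph(frames):
    """Build a spatial-temporal graph from per-frame person detections.

    `frames` is a list of frames; each frame is a list of
    (person_id, feature_vector) pairs, where `feature_vector` is the
    fused audio-visual embedding for that person in that frame.
    (Input format is an assumption for illustration.)
    """
    node_feats = []     # one node per (frame, person) occurrence
    last_node_of = {}   # most recent node index for each person_id
    edges = []

    for persons in frames:
        nodes_this_frame = []
        for pid, feat in persons:
            idx = len(node_feats)
            node_feats.append(torch.as_tensor(feat, dtype=torch.float))
            nodes_this_frame.append(idx)
            # Temporal edges: connect this node to the same person's
            # previous occurrence (both directions, an assumption here).
            if pid in last_node_of:
                edges.append((last_node_of[pid], idx))
                edges.append((idx, last_node_of[pid]))
            last_node_of[pid] = idx
        # Spatial edges: connect all persons within the same frame.
        for i in nodes_this_frame:
            for j in nodes_this_frame:
                if i != j:
                    edges.append((i, j))

    x = torch.stack(node_feats)
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    # Node classification: a GNN over this Data object predicts, per node,
    # whether that person is speaking in that frame.
    return Data(x=x, edge_index=edge_index)
```

Under this construction, the number of edges grows linearly with the number of frames and persons rather than quadratically over the whole clip, which is consistent with the abstract's claim of reasoning over long temporal contexts without a fully connected graph.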