Paper Title
Disentangled Representation Learning for Text-Video Retrieval
Paper Authors
Paper Abstract
Cross-modality interaction is a critical component in Text-Video Retrieval (TVR), yet little attention has been paid to how the factors involved in computing the interaction affect performance. This paper first studies the interaction paradigm in depth, and we find that its computation can be split into two terms: the interaction contents at different granularities and the matching function used to distinguish pairs with the same semantics. We also observe that the single-vector representation and the implicit intensive function substantially hinder the optimization. Based on these findings, we propose a disentangled framework to capture a sequential and hierarchical representation. First, considering the natural sequential structure in both text and video inputs, a Weighted Token-wise Interaction (WTI) module is used to decouple the content and adaptively exploit the pair-wise correlations. This interaction can form a better disentangled manifold for sequential inputs. Second, we introduce a Channel DeCorrelation Regularization (CDCR) to minimize the redundancy between the components of the compared vectors, which facilitates learning a hierarchical representation. We demonstrate the effectiveness of the disentangled representation on various benchmarks, e.g., surpassing CLIP4Clip by large margins of +2.9%, +3.1%, +7.9%, +2.3%, +2.8%, and +6.5% R@1 on MSR-VTT, MSVD, VATEX, LSMDC, ActivityNet, and DiDeMo, respectively.
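To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of what a weighted token-wise interaction and a channel-decorrelation penalty could look like. It is not the authors' implementation: the tensor shapes, the gating layers (`text_gate`, `video_gate`), and the Barlow-Twins-style cross-correlation surrogate for CDCR are assumptions made for illustration, and the sketch presumes pre-extracted CLIP-style token and frame embeddings.

```python
# Hedged sketch: illustrative WTI-style similarity and a channel-decorrelation
# penalty. Names and details are assumptions, not the paper's released code.
import torch
import torch.nn.functional as F


def weighted_token_wise_interaction(text_tokens, video_frames, text_gate, video_gate):
    """Token-wise similarity with learned per-token / per-frame weights.

    text_tokens:  (B, Nt, D) text token embeddings
    video_frames: (B, Nv, D) video frame embeddings
    text_gate / video_gate: linear layers mapping D -> 1 (weight logits)
    Returns a (B, B) text-video similarity matrix.
    """
    t = F.normalize(text_tokens, dim=-1)
    v = F.normalize(video_frames, dim=-1)
    wt = torch.softmax(text_gate(text_tokens).squeeze(-1), dim=-1)    # (B, Nt)
    wv = torch.softmax(video_gate(video_frames).squeeze(-1), dim=-1)  # (B, Nv)

    # Cosine similarity between every text token and every video frame,
    # for every text-video pair in the batch.
    sim = torch.einsum("atd,bvd->abtv", t, v)                         # (B, B, Nt, Nv)
    # Text-to-video: best-matching frame per token, weighted over tokens.
    t2v = (sim.max(dim=-1).values * wt.unsqueeze(1)).sum(dim=-1)      # (B, B)
    # Video-to-text: best-matching token per frame, weighted over frames.
    v2t = (sim.max(dim=-2).values * wv.unsqueeze(0)).sum(dim=-1)      # (B, B)
    return 0.5 * (t2v + v2t)


def channel_decorrelation(x, y, eps=1e-6):
    """Penalize off-diagonal cross-correlation between the channels of two
    batches of compared vectors x, y with shape (B, D)."""
    x = (x - x.mean(0)) / (x.std(0) + eps)
    y = (y - y.mean(0)) / (y.std(0) + eps)
    c = (x.T @ y) / x.shape[0]                                        # (D, D)
    off_diag = c - torch.diag(torch.diagonal(c))
    return (off_diag ** 2).sum()
```

In training, the similarity matrix would typically feed a symmetric contrastive loss over the batch, with the decorrelation term added as a regularizer; the exact weighting between the two is a design choice not specified here.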