Paper Title
Self-Supervised Relational Reasoning for Representation Learning
Paper Authors
Paper Abstract
In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data. The aim is to build useful representations that can be used in downstream tasks, without costly manual annotation. In this work, we propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data. Training a relation head to discriminate how entities relate to themselves (intra-reasoning) and to other entities (inter-reasoning) results in rich and descriptive representations in the underlying neural network backbone, which can be used in downstream tasks such as classification and image retrieval. We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones. Self-supervised relational reasoning outperforms the best competitor in all conditions by an average of 14% in accuracy, and the most recent state-of-the-art model by 3%. We link the effectiveness of the method to the maximization of a Bernoulli log-likelihood, which can be considered a proxy for maximizing the mutual information, resulting in a more efficient objective than commonly used contrastive losses.
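To make the objective concrete, the sketch below illustrates the idea described in the abstract: a relation head scores pairs of backbone representations, labeling two augmentations of the same image as related (intra-reasoning) and augmentations of different images as unrelated (inter-reasoning), and is trained by maximizing a Bernoulli log-likelihood, i.e., minimizing binary cross-entropy. This is a minimal PyTorch sketch under stated assumptions; the function name relational_loss, concatenation as the pair-aggregation function, and the one-position batch roll used to form negatives are illustrative choices, not necessarily the paper's exact design.

import torch
import torch.nn as nn

def relational_loss(backbone, relation_head, views):
    """Self-supervised relational reasoning loss (minimal sketch).

    views: list of K augmented mini-batches, each of shape [B, C, H, W],
    where views[k][i] are different augmentations of the same image i.
    """
    # Encode every augmented view with the shared backbone: K tensors [B, D].
    z = [backbone(v) for v in views]
    K, B = len(z), z[0].shape[0]
    device = z[0].device

    pairs, targets = [], []
    for a in range(K):
        for b in range(a + 1, K):
            # Positive pairs: two views of the same image (intra-reasoning).
            pairs.append(torch.cat([z[a], z[b]], dim=1))
            targets.append(torch.ones(B, device=device))
            # Negative pairs: views of different images (inter-reasoning);
            # rolling the batch pairs image i with image i+1.
            pairs.append(torch.cat([z[a], torch.roll(z[b], shifts=1, dims=0)], dim=1))
            targets.append(torch.zeros(B, device=device))

    # Relation head outputs one logit per pair; BCE on these logits is
    # exactly the (negated) Bernoulli log-likelihood mentioned above.
    scores = relation_head(torch.cat(pairs, dim=0)).squeeze(1)
    targets = torch.cat(targets, dim=0)
    return nn.functional.binary_cross_entropy_with_logits(scores, targets)

# Toy usage with hypothetical shapes (real backbones would be e.g. a ResNet):
D = 64
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, D))
relation_head = nn.Sequential(nn.Linear(2 * D, 256), nn.ReLU(), nn.Linear(256, 1))
views = [torch.randn(8, 3, 32, 32) for _ in range(4)]  # K = 4 augmented views
loss = relational_loss(backbone, relation_head, views)
loss.backward()

Unlike contrastive losses that score each positive against negatives inside a softmax, every pair here contributes an independent Bernoulli term, which is one way to read the abstract's claim of a more efficient objective.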