Title
Equivariant Self-Supervision for Musical Tempo Estimation
Authors
Abstract
Self-supervised methods have emerged as a promising avenue for representation learning in recent years since they alleviate the need for labeled datasets, which are scarce and expensive to acquire. Contrastive methods are a popular choice for self-supervision in the audio domain, and typically provide a learning signal by forcing the model to be invariant to some transformations of the input. These methods, however, require measures such as negative sampling or some form of regularisation to prevent the model from collapsing on trivial solutions. In this work, instead of invariance, we propose to use equivariance as a self-supervision signal to learn audio tempo representations from unlabelled data. We derive a simple loss function that prevents the network from collapsing on a trivial solution during training, without requiring any form of regularisation or negative sampling. Our experiments show that it is possible to learn meaningful representations for tempo estimation by solely relying on equivariant self-supervision, achieving performance comparable with supervised methods on several benchmarks. As an added benefit, our method only requires moderate compute resources and therefore remains accessible to a wide research community.
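As a rough illustration of the idea described in the abstract, the sketch below shows one way an equivariance objective on tempo can avoid collapse. This is a minimal toy formulation under our own assumptions, not the paper's exact loss; the function and variable names (`equivariance_loss`, `z1`, `r1`, etc.) are hypothetical. The premise is that tempo scales linearly with playback rate, so the network's outputs for two time-stretched views of the same audio should stand in the same ratio as the stretch rates.

```python
import math

def equivariance_loss(z1, z2, r1, r2, eps=1e-8):
    """Toy equivariance loss for tempo representations (illustrative only).

    z1, z2: scalar tempo-like outputs of a network for two views of the
    same audio, time-stretched by rates r1 and r2.
    Equivariance here means z1 / z2 should match r1 / r2. Taking logs
    turns the ratio constraint into a difference, which is numerically
    more stable than dividing raw outputs.
    """
    target = math.log(r1 / r2)
    pred = math.log(z1 + eps) - math.log(z2 + eps)
    return (pred - target) ** 2

# A collapsed (constant) network does not minimise this loss: with
# z1 == z2 the prediction is 0 while the target is log(r1/r2) != 0
# whenever r1 != r2, so the trivial solution incurs a penalty.
```

Note how this differs from an invariance objective: a constant output would make an invariance loss exactly zero, whereas here it is explicitly penalised whenever the two views use different stretch rates, which is consistent with the abstract's claim that no regularisation or negative sampling is needed.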