Paper Title
Spectrograms Are Sequences of Patches
Paper Authors
Paper Abstract
Self-supervised pre-training models have been used successfully in several machine learning domains. However, only a small amount of work addresses music. In our work, we treat a music spectrogram as a sequence of patches and design a self-supervised model, Patchifier, that captures the features of these sequential patches and makes good use of self-supervised learning methods from both the NLP and CV domains. We use no labeled data for pre-training, only a subset of the MTAT dataset containing 16k music clips. After pre-training, we apply the model to several downstream tasks. Our model achieves acceptable results compared with other audio representation models. Moreover, our work demonstrates that it makes sense to treat audio as a sequence of patch segments.
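To make the core idea concrete, the sketch below shows one plausible way to turn a (frequency, time) spectrogram into a sequence of patch tokens, in the spirit of ViT-style patchification. The function name `patchify_spectrogram`, the patch size of 16, and the row-major patch ordering are illustrative assumptions; the abstract does not specify the paper's actual patch geometry or implementation.

```python
import numpy as np

def patchify_spectrogram(spec: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split a (freq, time) spectrogram into a sequence of flattened square patches.

    Hypothetical helper for illustration only; the paper does not state its
    exact patch size or token ordering.
    """
    n_freq, n_time = spec.shape
    # Trim both axes so they divide evenly by the patch size.
    n_freq -= n_freq % patch_size
    n_time -= n_time % patch_size
    spec = spec[:n_freq, :n_time]
    # Reshape into a grid of patches, then flatten each patch into a vector.
    patches = (
        spec.reshape(n_freq // patch_size, patch_size,
                     n_time // patch_size, patch_size)
        .transpose(0, 2, 1, 3)
        .reshape(-1, patch_size * patch_size)
    )
    return patches  # shape: (num_patches, patch_size * patch_size)

# Example: a synthetic 128-bin spectrogram with 1024 frames becomes a
# sequence of (128 / 16) * (1024 / 16) = 512 patch tokens of dimension 256.
spec = np.random.rand(128, 1024).astype(np.float32)
tokens = patchify_spectrogram(spec)
print(tokens.shape)  # (512, 256)
```

Each row of the resulting array can then be fed to a sequence model as one token, which is what allows NLP- and CV-style self-supervised objectives to be applied to audio.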