Paper Title

Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation

Paper Authors

Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov

Paper Abstract

Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in the text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
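For readers unfamiliar with how such a language-audio model is trained, the sketch below shows the symmetric contrastive (InfoNCE) objective typically used to align paired audio and text embeddings. It is a minimal illustration, not the authors' released code: the function name clap_contrastive_loss, the fixed temperature of 0.07, and the toy embedding shapes are illustrative assumptions; in the paper, the embeddings come from the chosen audio and text encoders (with feature fusion applied to variable-length audio before encoding).

```python
import torch
import torch.nn.functional as F


def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings.

    audio_emb, text_emb: (batch, dim) tensors, one row per audio-text pair,
    produced by the audio and text encoders respectively (assumed here).
    """
    # L2-normalize so the dot product becomes a cosine similarity.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = audio_emb @ text_emb.t() / temperature

    # Matching audio-text pairs lie on the diagonal.
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)

    # Average the audio-to-text and text-to-audio cross-entropy terms.
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2t + loss_t2a) / 2


if __name__ == "__main__":
    # Toy batch of 8 pairs with 512-dimensional embeddings (placeholder values).
    audio = torch.randn(8, 512)
    text = torch.randn(8, 512)
    print(clap_contrastive_loss(audio, text).item())
```

At inference time, the same similarity matrix supports text-to-audio retrieval (ranking audio by similarity to a text query) and zero-shot classification (comparing an audio embedding against embeddings of class-name prompts).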
