Paper Title

Few-shot Text Classification with Dual Contrastive Consistency

Paper Authors

Liwen Sun, Jiawei Han

Abstract

In this paper, we explore how to leverage a pre-trained language model for few-shot text classification, where only a few annotated examples are given for each class. Since fine-tuning the language model with the traditional cross-entropy loss in this scenario causes severe overfitting and leads to sub-optimal generalization, we adopt supervised contrastive learning on the few labeled examples and consistency regularization on vast unlabeled data. Moreover, we propose a novel contrastive consistency to further boost model performance and refine sentence representations. Through extensive experiments on four datasets, we demonstrate that our model (FTCC) outperforms state-of-the-art methods and is more robust.
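
The abstract names two training signals: a supervised contrastive loss on the few labeled examples and consistency regularization on unlabeled data. As a rough sketch (not the authors' FTCC implementation; the function names, the temperature, and the FixMatch-style confidence threshold are all assumptions), the two losses might look like this in PyTorch:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss (in the style of Khosla et al., 2020):
    same-class sentence embeddings are pulled together, others pushed apart.
    The temperature of 0.1 is an assumed hyperparameter, not from the paper."""
    z = F.normalize(embeddings, dim=1)               # unit-normalize embeddings
    sim = z @ z.T / temperature                      # pairwise scaled cosine similarity
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs from the softmax
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    # Average log-likelihood over each anchor's positives; skip anchors with none.
    valid = pos_mask.sum(dim=1) > 0
    loss = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss[valid].mean()

def consistency_loss(logits_weak, logits_strong, threshold=0.9):
    """Generic pseudo-labeling consistency regularization on unlabeled data:
    a confident prediction on one view of a sentence supervises a perturbed
    view. This thresholded FixMatch-style form is an assumption, not the
    paper's exact formulation."""
    probs = F.softmax(logits_weak.detach(), dim=1)   # stop gradients through targets
    conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold                         # keep only confident pseudo-labels
    if not mask.any():
        return logits_weak.new_zeros(())
    return F.cross_entropy(logits_strong[mask], pseudo[mask])
```

In such a setup, `logits_weak` and `logits_strong` would come from two augmented views of the same unlabeled sentence (e.g., different dropout masks or paraphrases), and the total training objective would combine the two terms with a weighting coefficient.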
