Paper Title
Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions
Paper Authors
Paper Abstract
Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from overfitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach that requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimal labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer, achieving a mean AUC gain of 10.5% over no transfer with a large 22B-parameter PLM. We further show that annotating just a few target-domain samples via active learning can benefit transfer, but the impact diminishes with more annotation effort (a 26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which appears related to the PLM's initial performance on the target-domain task.
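To make the workflow concrete, below is a minimal Python sketch of one active-transfer round in the spirit of the abstract: pre-labeled source-domain examples are placed in a few-shot instruction prompt (no fine-tuning), and an uncertainty heuristic picks the target-domain texts most worth annotating next. The function `plm_score`, the prompt format, and the 0.5-boundary heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the ATF idea. Assumptions: `plm_score` is a hypothetical
# stand-in for querying a pre-trained LM; the prompt format is illustrative.

import random
from typing import List, Tuple

Example = Tuple[str, str]  # (text, label)

def build_prompt(source_examples: List[Example],
                 target_examples: List[Example],
                 query_text: str) -> str:
    """Assemble a few-shot instruction prompt: pre-labeled source-domain
    examples first, then any actively annotated target-domain examples,
    then the unlabeled query. No model weights are updated."""
    lines = ["Label each comment as 'toxic' or 'non-toxic'.", ""]
    for text, label in source_examples + target_examples:
        lines.append(f"Comment: {text}\nLabel: {label}\n")
    lines.append(f"Comment: {query_text}\nLabel:")
    return "\n".join(lines)

def plm_score(prompt: str) -> float:
    """Hypothetical placeholder: would return the PLM's probability of
    the 'toxic' label. Random here purely so the sketch runs."""
    return random.random()

def select_for_annotation(pool: List[str],
                          source_examples: List[Example],
                          target_examples: List[Example],
                          k: int = 5) -> List[str]:
    """Uncertainty-based active learning: pick the k unlabeled target
    texts whose scores sit closest to the 0.5 decision boundary."""
    scored = [
        (abs(plm_score(build_prompt(source_examples, target_examples, t)) - 0.5), t)
        for t in pool
    ]
    scored.sort(key=lambda pair: pair[0])
    return [t for _, t in scored[:k]]

# Usage: one active-transfer round on a small target-domain pool.
source = [("you are all idiots", "toxic"), ("great point, thanks!", "non-toxic")]
target_annotated: List[Example] = []
pool = ["typical post from them...", "interesting read", "nobody asked you"]
print(select_for_annotation(pool, source, target_annotated, k=2))
```

Under this reading, the abstract's diminishing-returns result corresponds to `target_examples` growing from roughly 100 to 2000 items: each round appends newly annotated pairs to the prompt, and the marginal AUC gain per added annotation shrinks.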