Paper Title

POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples

Paper Authors

Le, Duong H., Nguyen, Khoi D., Nguyen, Khoi, Tran, Quoc-Huy, Nguyen, Rang, Hua, Binh-Son

Paper Abstract

In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to the feature extractor, lightweight without any additional pre-training cost, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures.
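As a rough illustration of the pull/push idea stated in the abstract, the sketch below implements a distance-based loss that pulls in-distribution (support/query) features toward their class prototypes while pushing out-of-distribution features away from all prototypes. This is a minimal sketch under assumed conventions (Euclidean distances, a single weighting factor `alpha`); the function and argument names are hypothetical and not the authors' actual implementation.

```python
import torch


def poodle_style_loss(prototypes, in_feats, in_labels, ood_feats, alpha=1.0):
    """Illustrative loss: minimize distance of in-distribution samples to
    their class prototypes, maximize distance of out-of-distribution
    samples to all prototypes.

    prototypes: (C, D) class prototype vectors
    in_feats:   (N_in, D) support/query features
    in_labels:  (N_in,) class indices for in_feats
    ood_feats:  (N_ood, D) out-of-distribution features
    """
    # Squared Euclidean distances to every prototype
    d_in = torch.cdist(in_feats, prototypes) ** 2    # (N_in, C)
    d_ood = torch.cdist(ood_feats, prototypes) ** 2  # (N_ood, C)

    # Pull term: distance of each in-distribution sample to its own prototype
    pull = d_in.gather(1, in_labels.unsqueeze(1)).mean()

    # Push term: negative distance from prototypes to OOD samples
    push = -d_ood.mean()

    return pull + alpha * push
```

In practice the push term would typically be bounded or weighted (e.g., via a margin or a decayed `alpha`) so that it does not dominate the pull term; this sketch omits such details.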
