Paper Title

TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification

Authors

Moshe Lichtenstein, Prasanna Sattigeri, Rogerio Feris, Raja Giryes, Leonid Karlinsky

Abstract

The field of Few-Shot Learning (FSL), or learning from very few (typically $1$ or $5$) examples per novel class (unseen during training), has received a lot of attention and significant performance advances in the recent literature. While a number of techniques have been proposed for FSL, several factors have emerged as most important for FSL performance, awarding SOTA even to the simplest of techniques. These are: the backbone architecture (bigger is better), the type of pre-training on the base classes (meta-training vs. regular multi-class, with regular currently winning), the quantity and diversity of the base classes set (the more the merrier, resulting in richer and better adaptive features), and the use of self-supervised tasks during pre-training (serving as a proxy for increasing the diversity of the base set). In this paper we propose yet another simple technique that is important for few-shot learning performance - a search for a compact feature sub-space that is discriminative for a given few-shot test task. We show that Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost performance in FSL scenarios when some additional unlabeled data accompanies the novel few-shot task, be it either the set of unlabeled queries (transductive FSL) or some additional set of unlabeled data samples (semi-supervised FSL). Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than $5\%$, while increasing the benefit of using unlabeled data in FSL to above a $10\%$ performance gain.
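A minimal sketch of the idea described in the abstract: fit a compact linear sub-space on all features available for a single few-shot task (support samples plus the task's unlabeled data), project into it, and classify with class prototypes. This uses plain PCA on pre-extracted backbone features as the sub-space learner and is only an illustrative approximation of TAFSSL, not the paper's exact procedure; all function names, shapes, and the choice of a nearest-centroid classifier are assumptions.

```python
import numpy as np

def task_adaptive_subspace(support_feats, unlabeled_feats, n_components=5):
    """Fit a compact sub-space (PCA via SVD) on the task's support + unlabeled features."""
    X = np.concatenate([support_feats, unlabeled_feats], axis=0)   # (N, D)
    mean = X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                                 # mean (1, D), basis (k, D)

def project(feats, mean, basis):
    """Project features into the task-adaptive sub-space."""
    return (feats - mean) @ basis.T                                # (N, k)

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Prototype (class-mean) classifier in the learned sub-space."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Example 5-way 1-shot episode with random stand-in backbone features (D = 512).
rng = np.random.default_rng(0)
support, labels = rng.normal(size=(5, 512)), np.arange(5)
queries = rng.normal(size=(75, 512))                               # unlabeled queries (transductive setting)
mean, basis = task_adaptive_subspace(support, queries, n_components=5)
preds = nearest_centroid_predict(project(support, mean, basis), labels,
                                 project(queries, mean, basis))
```

In the transductive setting the unlabeled pool is simply the query set itself, as above; in the semi-supervised setting it would be a separate collection of unlabeled examples attached to the task.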
