Paper Title
The Sample Complexity of Meta Sparse Regression
Paper Authors
Paper Abstract
This paper addresses the meta-learning problem in sparse linear regression with infinitely many tasks. We assume that the learner has access to several similar tasks. The goal of the learner is to transfer knowledge from the prior tasks to a similar but novel task. For $p$ parameters, a support set of size $k$, and $l$ samples per task, we show that $T \in O((k \log p)/l)$ tasks are sufficient to recover the common support of all tasks. With the recovered support, we can greatly reduce the sample complexity of estimating the parameter of the novel task, namely $l \in O(1)$ with respect to $T$ and $p$. We also prove that our rates are minimax optimal. A key difference between meta-learning and classical multi-task learning is that meta-learning focuses only on recovering the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires $l$ to grow with $T$. Instead, our efficient meta-learning estimator allows $l$ to be constant with respect to $T$ (i.e., few-shot learning).
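To make the two-stage approach behind these rates concrete, below is a minimal sketch of one plausible instantiation: pool the $l$ samples from each of the $T$ prior tasks and run an $\ell_1$-regularized regression to recover the common support, then fit ordinary least squares on the novel task restricted to that support. The data-generating helper, the regularization scaling, and the support threshold are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of a two-stage meta sparse regression estimator (assumed instantiation):
# Stage 1 recovers the common support from T prior tasks with l samples each;
# Stage 2 estimates the novel task's parameter restricted to that support.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
p, k, T, l = 200, 5, 400, 4                      # parameters, support size, tasks, samples/task
support = rng.choice(p, size=k, replace=False)   # support shared by all tasks

def sample_task(n):
    """Draw one task: Gaussian design, k-sparse parameter on the shared support."""
    w = np.zeros(p)
    w[support] = rng.normal(1.0, 0.1, size=k)    # task parameters vary, support does not
    X = rng.normal(size=(n, p))
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y

# Stage 1: pool the T * l prior-task samples and run the lasso.
tasks = [sample_task(l) for _ in range(T)]
X_pool = np.vstack([X for X, _ in tasks])
y_pool = np.concatenate([y for _, y in tasks])
lam = np.sqrt(np.log(p) / (T * l))               # assumed lambda ~ sqrt(log(p) / (T l)) scaling
lasso = Lasso(alpha=lam, fit_intercept=False).fit(X_pool, y_pool)
S_hat = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)

# Stage 2: few-shot estimation on the novel task, restricted to S_hat.
# Only O(k) samples are needed here, independent of p.
X_new, y_new = sample_task(3 * k)
ols = LinearRegression(fit_intercept=False).fit(X_new[:, S_hat], y_new)
w_hat = np.zeros(p)
w_hat[S_hat] = ols.coef_

print("recovered support matches:", set(S_hat) == set(support))
```

Note how the sketch mirrors the stated rates: support recovery draws on all $T l$ pooled samples, so each task only needs $l$ constant with respect to $T$, while the novel task's restricted least-squares fit uses a number of samples that does not grow with $p$.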