Paper Title
Meta Learning for High-dimensional Ising Model Selection Using $\ell_1$-regularized Logistic Regression
Paper Authors
Paper Abstract
In this paper, we consider the meta learning problem for estimating the graphs associated with high-dimensional Ising models, using the method of $\ell_1$-regularized logistic regression for neighborhood selection of each node. Our goal is to use the information learned from the auxiliary tasks in the learning of the novel task to reduce its sufficient sample complexity. To this end, we propose a novel generative model as well as an improper estimation method. In our setting, all the tasks are \emph{similar} in their \emph{random} model parameters and supports. By pooling all the samples from the auxiliary tasks to \emph{improperly} estimate a single parameter vector, we can recover the true support union, assumed small in size, with high probability, given a sufficient sample complexity of $\Omega(1)$ per task, for $K = \Omega(d^3 \log p)$ tasks of Ising models with $p$ nodes and a maximum neighborhood size $d$. Then, with the support for the novel task restricted to the estimated support union, we prove that consistent neighborhood selection for the novel task can be obtained with a reduced sufficient sample complexity of $\Omega(d^3 \log d)$.
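
As a concrete illustration of the two-stage procedure described in the abstract, the following is a minimal Python sketch, not the authors' implementation: it uses scikit-learn's L1-penalized logistic regression as the per-node neighborhood-selection routine, and the node count, task/sample sizes, regularization strength, and random +/-1 placeholder spins are all illustrative assumptions rather than values from the paper.

# Minimal sketch of the two-stage meta-learning procedure (assumed setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p = 10               # number of Ising nodes (assumed)
K, n_aux = 50, 20    # number of auxiliary tasks and samples per task (assumed)
n_novel = 30         # samples available for the novel task (assumed)

# Placeholder +/-1 spin samples; in practice these would be drawn from the
# auxiliary and novel Ising models.
X_aux = rng.choice([-1, 1], size=(K * n_aux, p))   # pooled auxiliary samples
X_novel = rng.choice([-1, 1], size=(n_novel, p))

def l1_neighborhood(X, node, C, features=None):
    """Estimate the neighborhood of `node` by regressing its spin on the
    other nodes (restricted to `features` if given) with an L1 penalty."""
    others = [j for j in range(X.shape[1]) if j != node]
    if features is not None:
        others = [j for j in others if j in features]
    if not others:
        return set()
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X[:, others], X[:, node])
    nonzero = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-6)
    return {others[j] for j in nonzero}

# Stage 1: pool all auxiliary samples and "improperly" fit a single parameter
# vector per node; its support estimates the support union across tasks.
support_union = {node: l1_neighborhood(X_aux, node, C=0.1) for node in range(p)}

# Stage 2: for the novel task, restrict each node's regression to the
# estimated support union before selecting its neighborhood.
novel_neighborhoods = {
    node: l1_neighborhood(X_novel, node, C=0.1, features=support_union[node])
    for node in range(p)
}
print(novel_neighborhoods)

Restricting the novel task's regressions to the estimated support union is what the abstract credits for the reduced sufficient sample complexity of the novel task.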