Paper Title
Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably?
Paper Authors
Paper Abstract
Meta learning aims at learning a model that can quickly adapt to unseen tasks. Widely used meta learning methods include model-agnostic meta learning (MAML), implicit MAML, and Bayesian MAML. Thanks to its ability to model uncertainty, Bayesian MAML often has advantageous empirical performance. However, the theoretical understanding of Bayesian MAML is still limited, especially on questions such as whether and when Bayesian MAML provably outperforms MAML. In this paper, we aim to provide theoretical justification for Bayesian MAML's advantageous performance by comparing the meta test risks of MAML and Bayesian MAML. In meta linear regression, under both the distribution-agnostic and linear centroid cases, we establish that Bayesian MAML indeed has provably lower meta test risk than MAML. We verify our theoretical results through experiments.
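As a concrete illustration of the setting the abstract compares, below is a minimal sketch of one first-order MAML training loop on meta linear regression. The task distribution, dimensions, and step sizes here are illustrative assumptions for exposition, not values or algorithms taken from the paper (in particular, full MAML differentiates through the inner step, which is omitted here).

```python
import numpy as np

# Minimal sketch of first-order MAML on meta linear regression.
# All hyperparameters and the task distribution are assumed for illustration.

rng = np.random.default_rng(0)
d, n_tasks, n_shots = 5, 8, 10   # feature dim, tasks per batch, samples per task
alpha, beta = 0.1, 0.05          # inner (adaptation) and outer (meta) step sizes

def task_loss_grad(w, X, y):
    """Gradient of the average squared loss (1/2n) * ||Xw - y||^2."""
    return X.T @ (X @ w - y) / len(y)

def maml_step(w):
    """One meta-update: adapt per task with a single inner gradient step,
    then average the post-adaptation gradients (first-order approximation)."""
    meta_grad = np.zeros(d)
    for _ in range(n_tasks):
        w_star = rng.normal(size=d)                 # task-specific ground truth
        X = rng.normal(size=(n_shots, d))           # support set
        y = X @ w_star
        w_adapted = w - alpha * task_loss_grad(w, X, y)   # inner adaptation
        X_q = rng.normal(size=(n_shots, d))         # query set
        y_q = X_q @ w_star
        meta_grad += task_loss_grad(w_adapted, X_q, y_q)
    return w - beta * meta_grad / n_tasks

w = np.zeros(d)                  # meta-initialization
for _ in range(200):
    w = maml_step(w)
```

Bayesian MAML replaces the point-estimate adaptation above with posterior inference over the task parameters (e.g. via Stein variational gradient descent in the original Bayesian MAML work), which is what allows it to model uncertainty.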