Paper Title
Multi-task Bias-Variance Trade-off Through Functional Constraints
Paper Authors
Paper Abstract
Multi-task learning aims to acquire a set of functions, either regressors or classifiers, that perform well across diverse tasks. At its core, the idea behind multi-task learning is to exploit the intrinsic similarity across data sources to aid the learning process in each individual domain. In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and task-specific functions that ignore dependencies on the other tasks -- to propose a bias-variance trade-off. To control the relationship between the variance (governed by the number of i.i.d. samples) and the bias (arising from data from other tasks), we introduce a constrained learning formulation that enforces domain-specific solutions to be close to a central function. This problem is solved in the dual domain, for which we propose a stochastic primal-dual algorithm. Experimental results for a multi-domain classification problem with real data show that the proposed procedure outperforms both the task-specific and the single classifiers.
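The constrained formulation and its stochastic primal-dual solution can be illustrated with a minimal sketch. This is an assumption-laden toy (linear regression tasks, squared-norm proximity constraints, all variable names and step sizes invented for illustration), not the paper's actual method: each task weight `w_i` is learned under the constraint `||w_i - w_bar||^2 <= eps` toward a central vector `w_bar`, with one dual variable per constraint updated by projected ascent.

```python
import numpy as np

# Hypothetical sketch: multi-task linear regression where each task-specific
# weight vector W[t] must stay close to a central vector w_bar, enforced via
# the constraint ||W[t] - w_bar||^2 <= eps with a dual variable lam[t] per task.
rng = np.random.default_rng(0)
T, d, n = 3, 5, 50                       # tasks, feature dim, samples per task
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(T)]
# each task is a small perturbation of a common underlying regressor
y = [X[t] @ (w_true + 0.1 * rng.normal(size=d)) for t in range(T)]

eps = 0.5                                # proximity budget (assumed value)
eta_p, eta_d = 0.01, 0.1                 # primal / dual step sizes (assumed)
W = np.zeros((T, d))                     # task-specific solutions
w_bar = np.zeros(d)                      # central function
lam = np.zeros(T)                        # dual variables, one per constraint

for _ in range(5000):
    t = rng.integers(T)                  # sample a task (stochastic step)
    i = rng.integers(n)                  # sample one data point of that task
    x, yy = X[t][i], y[t][i]
    # primal descent on the Lagrangian of task t:
    #   0.5*(W[t]@x - yy)^2 + lam[t]*(||W[t]-w_bar||^2 - eps)
    grad_w = (W[t] @ x - yy) * x + 2.0 * lam[t] * (W[t] - w_bar)
    W[t] -= eta_p * grad_w
    # the central function is pulled toward tasks with active constraints
    w_bar += eta_p * 2.0 * lam[t] * (W[t] - w_bar)
    # dual ascent on the constraint slack, projected onto lam >= 0
    lam[t] = max(0.0, lam[t] + eta_d * (np.sum((W[t] - w_bar) ** 2) - eps))

mse = np.mean([np.mean((X[t] @ W[t] - y[t]) ** 2) for t in range(T)])
```

Shrinking `eps` toward zero recovers the single-classifier extreme (all `W[t]` collapse onto `w_bar`), while a large `eps` deactivates the constraints and recovers independent task-specific training, which is the trade-off the abstract describes.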