Title
On Negative Transfer and Structure of Latent Functions in Multi-output Gaussian Processes
Authors
Abstract
The multi-output Gaussian process ($\mathcal{MGP}$) is based on the assumption that outputs share commonalities; however, if this assumption does not hold, negative transfer leads to decreased performance relative to learning the outputs independently or in subsets. In this article, we first define negative transfer in the context of an $\mathcal{MGP}$ and then derive necessary conditions for an $\mathcal{MGP}$ model to avoid it. Specifically, under the convolution construction, we show that avoiding negative transfer mainly depends on having a sufficient number of latent functions $Q$, regardless of the flexibility of the kernel or the inference procedure used. However, even a slight increase in $Q$ leads to a large increase in the number of parameters to be estimated. To this end, we propose two latent structures that scale to arbitrarily large datasets, can avoid negative transfer, and allow any kernel or sparse approximation to be used within them. These structures also admit regularization, which can provide consistent and automatic selection of related outputs.
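The role of the number of latent functions $Q$ can be illustrated with a minimal sketch of a linear model of coregionalization, a special case of the convolution construction in which each output is a weighted sum of $Q$ independent latent GPs. This is an illustrative example, not the paper's proposed model; the function names and mixing weights `W` are assumptions. With $Q=1$, two outputs are forced to be scaled copies of one latent function, so their cross-covariance cannot vanish; with $Q=2$ and one latent function per output, the cross-covariance blocks are exactly zero and independent learning is recovered.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    # Squared-exponential kernel matrix between 1-D input sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def mgp_cov(X, W, lengthscales):
    # Joint covariance of an LMC-style MGP: output d is
    # sum_q W[d, q] * u_q(x) for independent latent GPs u_q,
    # so K = sum_q (w_q w_q^T) kron K_q.
    D, Q = W.shape
    n = len(X)
    K = np.zeros((D * n, D * n))
    for q in range(Q):
        Kq = rbf(X, X, lengthscales[q])
        K += np.kron(np.outer(W[:, q], W[:, q]), Kq)
    return K

X = np.linspace(0.0, 1.0, 5)

# Q = 1: both outputs share a single latent function, so the
# off-diagonal (cross-output) covariance block is nonzero --
# transfer between the outputs is forced by the structure.
K_q1 = mgp_cov(X, np.array([[1.0], [0.5]]), [0.3])
cross_q1 = K_q1[:5, 5:]

# Q = 2 with one latent function per output: the cross-output
# block vanishes, recovering two independent GPs, i.e. the
# model is flexible enough to avoid negative transfer.
K_q2 = mgp_cov(X, np.eye(2), [0.3, 0.3])
cross_q2 = K_q2[:5, 5:]

print(np.abs(cross_q1).max() > 0)   # coupling forced when Q = 1
print(np.allclose(cross_q2, 0.0))   # outputs decouple when Q = 2
```

The point of the sketch is structural, not inferential: no amount of kernel flexibility within a single latent function can make `cross_q1` vanish, which mirrors the abstract's claim that avoiding negative transfer depends on $Q$ rather than on the kernel or inference procedure.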