Paper Title
Overcoming Concept Shift in Domain-Aware Settings through Consolidated Internal Distributions
Paper Authors
Paper Abstract
We develop an algorithm to improve the performance of a pre-trained model under concept shift without retraining it from scratch, when only unannotated samples of the initial concepts are accessible. We model this problem as a domain adaptation problem in which the source domain data is inaccessible during model adaptation. The core idea is to consolidate the intermediate internal distribution, learned to represent the source domain data, so that it remains representative after the model is adapted. We provide a theoretical analysis and conduct extensive experiments to demonstrate that the proposed method is effective.
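To make the abstract's core idea concrete, the sketch below illustrates source-free adaptation via a consolidated internal distribution. All specifics here are assumptions for illustration, not the paper's method: the internal distribution is modeled as a single Gaussian fit to source embeddings, and "adaptation" is reduced to a toy mean-translation of target features. The paper instead learns a parametric internal distribution during pre-training and trains the encoder against it; names like `source_feats` and `pseudo_source` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-domain embeddings, available only before deployment.
source_feats = rng.normal(loc=2.0, scale=0.5, size=(500, 8))

# Consolidate the internal distribution: here, a single Gaussian fit in the
# embedding space (a stand-in for the paper's learned internal distribution).
mu = source_feats.mean(axis=0)
cov = np.cov(source_feats, rowvar=False)

# The source data is now discarded; draw pseudo-source samples from the fit.
pseudo_source = rng.multivariate_normal(mu, cov, size=400)

# Unannotated target embeddings under concept shift (the mean has drifted).
target_feats = rng.normal(loc=3.0, scale=0.5, size=(400, 8))

# Toy alignment step: translate target features toward the pseudo-source mean.
# A real adaptation would instead train the encoder to minimize a distribution
# distance against the pseudo-source samples.
offset = pseudo_source.mean(axis=0) - target_feats.mean(axis=0)
aligned_feats = target_feats + offset

# The gap between target and pseudo-source means shrinks after alignment.
gap_before = np.linalg.norm(target_feats.mean(axis=0) - pseudo_source.mean(axis=0))
gap_after = np.linalg.norm(aligned_feats.mean(axis=0) - pseudo_source.mean(axis=0))
```

The key property this sketch preserves from the abstract is that alignment uses only the consolidated distribution (via `pseudo_source`), never the original source data.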