Paper Title


Efficient Domain Generalization via Common-Specific Low-Rank Decomposition

Paper Authors

Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi

Paper Abstract


Domain generalization refers to the task of training a model which generalizes to new domains that are not seen during training. We present CSD (Common-Specific Decomposition), which for this setting jointly learns a common component (which generalizes to new domains) and a domain-specific component (which overfits on the training domains). The domain-specific components are discarded after training and only the common component is retained. The algorithm is extremely simple and involves only modifying the final linear classification layer of any given neural network architecture. We present a principled analysis to understand existing approaches, provide identifiability results for CSD, and study the effect of low rank on domain generalization. We show that CSD either matches or beats state-of-the-art approaches for domain generalization based on domain erasure, domain-perturbed data augmentation, and meta-learning. Further diagnostics on rotated MNIST, where domains are interpretable, confirm the hypothesis that CSD successfully disentangles common and domain-specific components and hence leads to better domain generalization.
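The abstract's core idea, decomposing the final linear classification layer into a shared common component plus a low-rank combination of domain-specific components, and keeping only the common part at test time, can be sketched as follows. This is a minimal NumPy illustration of that structure only, not the paper's implementation: all names, dimensions, and the random coefficients are hypothetical, and the training loop that would fit these parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

num_domains, feat_dim, num_classes, rank = 3, 8, 4, 2

# Common classifier weights, shared across all training domains
# (this is the only part retained at test time).
w_common = rng.normal(size=(feat_dim, num_classes))

# Low-rank basis of domain-specific weight components
# (discarded after training).
w_specific = rng.normal(size=(rank, feat_dim, num_classes))

# Per-domain mixing coefficients over the low-rank specific basis.
gamma = rng.normal(size=(num_domains, rank))

def domain_classifier(d):
    """Effective classification weights while training on domain d:
    common component plus a gamma-weighted sum of specific components."""
    return w_common + np.einsum("k,kfc->fc", gamma[d], w_specific)

def logits(features, weights):
    return features @ weights

x = rng.normal(size=(5, feat_dim))          # a batch of backbone features
train_logits = logits(x, domain_classifier(0))  # domain-aware, train time
test_logits = logits(x, w_common)               # common only, test time
```

At test time no domain label is needed: the specific components and their coefficients are simply dropped, which is why the method adds no inference cost over a standard linear head.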
