Paper Title
On Certifying and Improving Generalization to Unseen Domains
Paper Authors
Paper Abstract
Domain Generalization (DG) aims to learn models whose performance remains high on unseen domains encountered at test time, by using data from multiple related source domains. Many existing DG algorithms reduce the divergence between source distributions in a representation space so that an unseen domain lying close to the sources may also be aligned with them. This is motivated by analyses that explain generalization to unseen domains in terms of their distributional distance (such as the Wasserstein distance) to the sources. However, due to the openness of the DG objective, it is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets. In particular, we demonstrate that the accuracy of models trained with DG methods varies significantly across unseen domains generated from popular benchmark datasets. This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild. To overcome this roadblock, we propose a universal certification framework based on distributionally robust optimization (DRO) that can efficiently certify the worst-case performance of any DG method. This enables a data-independent evaluation of a DG method, complementary to its empirical evaluation on benchmark datasets. Furthermore, we propose a training algorithm that can be used with any DG method to provably improve its certified performance. Our empirical evaluation demonstrates the effectiveness of our method at significantly improving the worst-case loss (i.e., reducing the risk of failure of these models in the wild) without incurring a significant performance drop on benchmark datasets.
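To make the DRO-based certification concrete, the following is a minimal sketch of a standard Wasserstein-ball worst-case risk bound from the DRO literature; the ball radius \(\rho\), ground cost \(c\), and dual variable \(\gamma\) are illustrative assumptions, and the paper's exact certificate may differ:

\[
\sup_{Q:\, W_c(Q, P) \le \rho} \; \mathbb{E}_{Z \sim Q}\big[\ell(\theta; Z)\big]
\;\le\; \inf_{\gamma \ge 0} \Big\{ \gamma \rho \;+\; \mathbb{E}_{Z_0 \sim P}\Big[ \sup_{z} \big\{ \ell(\theta; z) - \gamma\, c(z, Z_0) \big\} \Big] \Big\}.
\]

Here \(P\) denotes the source distribution (or a mixture of the sources) and \(\ell(\theta; z)\) the per-sample loss of model \(\theta\). The right-hand side can be estimated from source data alone, which is what makes a data-independent certificate of worst-case performance possible; a certified training procedure can then minimize such a surrogate alongside the usual DG objective.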