Paper Title
Frustratingly Simple Domain Generalization via Image Stylization
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) show impressive performance in the standard classification setting, where training and testing data are drawn i.i.d. from a given domain. However, CNNs do not readily generalize to new domains with different statistics, a setting that is simple for humans. In this work, we address the Domain Generalization problem, in which a classifier must generalize to an unknown target domain. Inspired by recent works showing a difference in biases between CNNs and humans, we demonstrate an extremely simple yet effective method: correcting this bias by augmenting the dataset with stylized images. In contrast to existing stylization works, which use external data sources such as art, we further introduce a method that is entirely in-domain, using no such extra sources of data. We provide a detailed analysis of the mechanism by which the method works, verify our claim that it changes the shape/texture bias, and demonstrate results surpassing or comparable to state-of-the-art approaches that use much more complex methods.
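The abstract only sketches the approach, so the snippet below is a loose, minimal illustration of the in-domain variant: it applies AdaIN-style statistic matching directly in pixel space, using a shuffled copy of the same training batch as the style source, so no external art dataset is needed. This is an assumption-laden sketch, not the paper's actual pipeline (which stylizes via a pretrained stylization network); the function names such as stylize_batch_in_domain are hypothetical.

    import torch

    def adain(content, style, eps=1e-5):
        # Match the per-image, per-channel mean/std of the content tensor
        # to those of the style tensor (AdaIN-style statistic transfer).
        c_mean = content.mean(dim=(2, 3), keepdim=True)
        c_std = content.std(dim=(2, 3), keepdim=True) + eps
        s_mean = style.mean(dim=(2, 3), keepdim=True)
        s_std = style.std(dim=(2, 3), keepdim=True) + eps
        return s_std * (content - c_mean) / c_std + s_mean

    def stylize_batch_in_domain(images, p=0.5):
        # In-domain augmentation: other images from the same batch serve
        # as style sources, so no external data is required. Each image is
        # stylized with probability p, otherwise left unchanged.
        perm = torch.randperm(images.size(0))
        stylized = adain(images, images[perm]).clamp(0, 1)
        mask = (torch.rand(images.size(0), 1, 1, 1) < p).float()
        return mask * stylized + (1 - mask) * images

    # Usage: augment each training batch before the forward pass.
    batch = torch.rand(8, 3, 224, 224)  # stand-in for a real data batch
    augmented = stylize_batch_in_domain(batch)

Blending stylized and original images within each batch preserves some texture cues while pushing the classifier toward shape-based features, which is the bias correction the abstract describes.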