Paper Title


OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses

Authors

Robik Shrestha, Kushal Kafle, Christopher Kanan

Abstract


Dataset bias and spurious correlations can significantly impair generalization in deep neural networks. Many prior efforts have addressed this problem using either alternative loss functions or sampling strategies that focus on rare patterns. We propose a new direction: modifying the network architecture to impose inductive biases that make the network robust to dataset bias. Specifically, we propose OccamNets, which are biased to favor simpler solutions by design. OccamNets have two inductive biases. First, they are biased to use as little network depth as needed for an individual example. Second, they are biased toward using fewer image locations for prediction. While OccamNets are biased toward simpler hypotheses, they can learn more complex hypotheses if necessary. In experiments, OccamNets outperform or rival state-of-the-art methods run on architectures that do not incorporate these inductive biases. Furthermore, we demonstrate that when state-of-the-art debiasing methods are combined with OccamNets, results improve further.
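The first inductive bias, using as little network depth as needed per example, can be illustrated with an early-exit cascade: each block has its own classifier head, and an example leaves the network at the first head whose confidence clears a threshold. The sketch below is a minimal, self-contained toy in NumPy with random weights; the names (`block`, `exit_head`, `THRESHOLD`) and the confidence-threshold exit rule are illustrative assumptions, not the authors' OccamNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 3
THRESHOLD = 0.9  # hypothetical confidence needed to exit early


def block(x, w):
    """One feature block (here just a random linear map + ReLU)."""
    return np.maximum(x @ w, 0.0)


def exit_head(x, w):
    """Softmax classifier head attached to a block's output."""
    logits = x @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()


# Three stacked blocks, each with its own exit head.
block_ws = [rng.normal(size=(8, 8)) for _ in range(3)]
exit_ws = [rng.normal(size=(8, NUM_CLASSES)) for _ in range(3)]


def predict_with_early_exit(x):
    """Return (predicted class, depth used) for a single example.

    Easy examples exit at a shallow head, so they use less depth;
    hard examples fall through to the final head.
    """
    for depth, (bw, ew) in enumerate(zip(block_ws, exit_ws), start=1):
        x = block(x, bw)
        probs = exit_head(x, ew)
        if probs.max() >= THRESHOLD or depth == len(block_ws):
            return int(probs.argmax()), depth


pred, depth_used = predict_with_early_exit(rng.normal(size=8))
```

In the paper's framing, training encourages the shallow heads to suffice whenever possible, while the deeper blocks remain available for examples that genuinely need a more complex hypothesis.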
