Paper Title
InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness
Paper Authors
Paper Abstract
Humans rely less on spurious correlations and trivial cues, such as texture, than deep neural networks do, which leads to better generalization and robustness. This can be attributed to the prior knowledge, or the high-level cognitive inductive biases, present in the brain. Therefore, introducing meaningful inductive biases into neural networks can help them learn more generic, high-level representations and alleviate some of these shortcomings. We propose InBiaseD to distill inductive bias and bring shape-awareness to neural networks. Our method includes a bias alignment objective that enforces the networks to learn more generic representations, which are less vulnerable to unintended cues in the data and thus improve generalization performance. InBiaseD is less susceptible to shortcut learning and also exhibits lower texture bias. The better representations further aid robustness to adversarial attacks, so we plug InBiaseD seamlessly into existing adversarial training schemes to show a better trade-off between generalization and robustness.
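The abstract describes a bias alignment objective that trains a network alongside a shape-aware counterpart and pulls their representations together. The exact formulation is defined in the paper; the following is only a minimal NumPy sketch under the assumption that the objective combines supervised losses for both networks with a feature-alignment penalty (the function name `inbiased_loss` and the weight `lam` are hypothetical).

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - np.max(logits)
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[label]

def inbiased_loss(feat_rgb, feat_shape, logits_rgb, logits_shape, label, lam=1.0):
    """Sketch of a bias-alignment objective: supervised losses for the
    RGB network and the shape-aware network, plus an alignment term
    pulling their feature representations together.
    `lam` is a hypothetical trade-off weight, not from the paper."""
    task = cross_entropy(logits_rgb, label) + cross_entropy(logits_shape, label)
    align = np.mean((feat_rgb - feat_shape) ** 2)  # MSE between features
    return task + lam * align

# Usage: identical features incur no alignment penalty,
# so only the two task losses remain.
feat = np.ones(4)
logits = np.array([2.0, 0.0, 0.0])
loss = inbiased_loss(feat, feat, logits, logits, label=0)
```

The alignment term discourages the RGB branch from latching onto texture shortcuts that the shape branch, by construction, cannot see.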