Paper Title
Explaining How Deep Neural Networks Forget by Deep Visualization
Authors
Abstract
Explaining the behaviors of deep neural networks, usually considered as black boxes, is critical, especially now that they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this paper proposes a novel tool called Catastrophic Forgetting Dissector (or CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations of our tool. Experiments on ResNet articulate how catastrophic forgetting happens, particularly showing which components of this famous network are forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, proving the capability of the investigation. Critical Freezing not only attacks catastrophic forgetting but also exposes explainability.
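The abstract does not specify implementation details, but the core idea of freezing "critical" components of a network before training on a new task can be sketched in PyTorch. The block names below (layer1, layer2) and the learning rate are illustrative assumptions, not the paper's actual findings; in the paper, the critical components would be identified by the CFD tool.

```python
import torch
import torchvision

# Hypothetical sketch of critical freezing: once certain ResNet blocks are
# judged critical (e.g., via a visualization tool such as CFD), their
# parameters are frozen before continuing training on the next task.
model = torchvision.models.resnet18(weights=None)

critical_blocks = ["layer1", "layer2"]  # assumed critical components, for illustration only

for name, param in model.named_parameters():
    if any(name.startswith(block) for block in critical_blocks):
        param.requires_grad = False  # freeze this critical block

# Only the unfrozen parameters are optimized on the new task.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```

This sketch only shows the freezing mechanism; selecting which blocks to freeze is the contribution the paper attributes to its visualization-based analysis.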