Paper Title

Fortuitous Forgetting in Connectionist Networks

Paper Authors

Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville

Paper Abstract

Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce "forget-and-relearn" as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements.
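To make the forget-and-relearn cycle concrete, below is a minimal PyTorch sketch of one possible instantiation: training is alternated with a forgetting step that re-initializes a later layer, after which the model relearns on the same data. The model `SmallNet`, the helpers `train`, `forget`, and `forget_and_relearn`, and the choice of which layer to reset are hypothetical illustrations, not the paper's specific algorithms; they are meant only to show the shape of the iterative loop the abstract describes.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset


class SmallNet(nn.Module):
    """Toy classifier split into 'early' feature layers and a 'later' classifier head."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 128),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


def train(model, loader, epochs, lr=0.1):
    """One learning (or relearning) phase of standard supervised training."""
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


def forget(model, layer_names=("classifier",)):
    """Forgetting step (illustrative): re-initialize the named later layers while
    keeping earlier-layer weights, so relearning must rebuild the head on top of
    features that remain useful across cycles."""
    for name, module in model.named_modules():
        if name in layer_names and hasattr(module, "reset_parameters"):
            module.reset_parameters()


def forget_and_relearn(model, loader, cycles=3, epochs_per_cycle=5):
    """Alternate learning with targeted forgetting, then relearning."""
    train(model, loader, epochs_per_cycle)      # initial learning
    for _ in range(cycles):
        forget(model)                           # remove information held in the later layer
        train(model, loader, epochs_per_cycle)  # relearning step
    return model


if __name__ == "__main__":
    # Random tensors stand in for a real image-classification dataset.
    data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
    loader = DataLoader(data, batch_size=64, shuffle=True)
    forget_and_relearn(SmallNet(), loader, cycles=2, epochs_per_cycle=1)
```

In this sketch, making the forgetting operation more targeted would amount to choosing which parameters to reset (and how), which is the kind of design choice the abstract says can be used to improve existing iterative training algorithms.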
