Paper Title
BayesPCN: A Continually Learnable Predictive Coding Associative Memory
Paper Authors
Paper Abstract
Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall ($read$) over memory learning ($write$). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations ($forget$) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand ``timesteps'' ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models.
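BayesPCN's actual read/write/forget operations arise from predictive coding and Bayesian updating over a hierarchical network, and are not reproduced here. Purely as an illustrative sketch of the $read$/$write$/$forget$ interface the abstract describes, the following minimal auto-associative memory stores patterns in an explicit buffer and recalls them with modern-Hopfield-style softmax retrieval; all names (`AssociativeMemory`, `beta`, `steps`) are hypothetical and not from the paper.

```python
import numpy as np


class AssociativeMemory:
    """Minimal auto-associative memory sketch (NOT the BayesPCN algorithm).

    Illustrates the read/write/forget interface from the abstract using
    modern-Hopfield-style softmax retrieval over a buffer of stored patterns.
    """

    def __init__(self, beta: float = 8.0):
        self.beta = beta     # inverse temperature: higher = sharper retrieval
        self.patterns = []   # stored observations, one per write

    def write(self, x: np.ndarray) -> None:
        """One-shot write: store a single observation, no gradient training."""
        self.patterns.append(np.asarray(x, dtype=float))

    def read(self, cue: np.ndarray, steps: int = 3) -> np.ndarray:
        """Recall from a (possibly corrupted) cue via iterated softmax attention."""
        M = np.stack(self.patterns)            # (num_patterns, dim)
        z = np.asarray(cue, dtype=float)
        for _ in range(steps):
            logits = self.beta * (M @ z)       # similarity to each stored pattern
            w = np.exp(logits - logits.max())  # numerically stable softmax
            w /= w.sum()
            z = w @ M                          # convex combination of patterns
        return z

    def forget(self, n_oldest: int = 1) -> None:
        """Free memory by discarding the oldest stored observations."""
        del self.patterns[:n_oldest]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mem = AssociativeMemory()
    data = rng.standard_normal((5, 64))
    for x in data:
        mem.write(x)                                 # continual one-shot writes
    noisy = data[2] + 0.3 * rng.standard_normal(64)  # corrupted query
    recalled = mem.read(noisy)
    print("recall error:", np.linalg.norm(recalled - data[2]))
```

Note that this buffer-based toy differs from the abstract's setting in a key way: BayesPCN is described as a hierarchical (parametric) memory, so writes update network parameters rather than appending to an explicit list of patterns.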