Paper Title
Metaphors We Learn By
Paper Authors
Paper Abstract
Gradient-based learning using error back-propagation (``backprop'') is a well-known contributor to much of the recent progress in AI. A less obvious, but arguably equally important, ingredient is parameter sharing, best known in the context of convolutional networks. In this essay we relate parameter sharing (``weight sharing'') to analogy making and the school of thought of cognitive metaphor. We discuss how recurrent and auto-regressive models can be thought of as extending analogy making from static features to dynamic skills and procedures. We also discuss corollaries of this perspective, for example, how it can challenge the currently entrenched dichotomy between connectionist and ``classic'' rule-based views of computation.
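To make the notion of parameter sharing concrete, the following minimal NumPy sketch (our own illustration of the general idea, not code from the paper) contrasts the three cases the abstract alludes to: a convolution reuses one small kernel at every spatial position, a recurrent step reuses the same weights at every time step, while a dense layer ties independent weights to absolute positions.

```python
# Illustrative sketch: parameter sharing in space (convolution) and in
# time (a recurrent step), contrasted with an unshared dense layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)              # toy 1-D input signal

# Spatial sharing: one 3-tap kernel is reused at every window position.
kernel = rng.standard_normal(3)
conv_out = np.array([kernel @ x[i:i + 3] for i in range(len(x) - 2)])

# No sharing: a dense map needs independent weights for every position.
dense_w = rng.standard_normal((conv_out.size, x.size))

# Temporal sharing: a recurrent model reuses the same (w_h, w_x) at
# every time step, so the learned "skill" transfers across the sequence.
w_h, w_x, h = 0.5, 1.0, 0.0
for x_t in x:
    h = np.tanh(w_h * h + w_x * x_t)     # same two parameters each step

print(kernel.size, dense_w.size)         # 3 shared vs. 224 unshared weights
```

The parameter counts printed at the end (3 versus 224 for this toy input) show why sharing matters: the shared kernel encodes one feature detector applied everywhere, which is the sense in which the same learned pattern is carried, by analogy, from one position to another.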