Paper Title

Learning Differentiable Programs with Admissible Neural Heuristics

Authors

Shah, Ameesh, Zhan, Eric, Sun, Jennifer J., Verma, Abhinav, Yue, Yisong, Chaudhuri, Swarat

Abstract


We study the problem of learning differentiable functions expressed as programs in a domain-specific language. Such programmatic models can offer benefits such as composability and interpretability; however, learning them requires optimizing over a combinatorial space of program "architectures". We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax. Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs, which can then be used to complete any partial program. This relaxed program is differentiable and can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search. We instantiate our approach on top of the A-star algorithm and an iteratively deepened branch-and-bound search, and use these algorithms to learn programmatic classifiers in three sequence classification tasks. Our experiments show that the algorithms outperform state-of-the-art methods for program learning, and that they discover programmatic classifiers that yield natural interpretations and achieve competitive accuracy.
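The core search idea in the abstract — A* over a graph whose nodes are partial program derivations, guided by an admissible heuristic — can be sketched generically. The grammar, edge costs, and heuristic below are hypothetical stand-ins for illustration; in the paper, the heuristic value for a partial program would come from the training loss of its neural relaxation, which approximately underestimates the cost of the best completion.

```python
import heapq

def a_star(start, expand, cost, heuristic, is_goal):
    """Generic A* search.

    expand(node) yields successor nodes (one derivation step);
    cost(node, child) is the edge cost of that step;
    heuristic(node) must never overestimate the remaining cost
    (admissibility) for the returned solution to be optimal.
    """
    # Frontier entries are (f, g, node) where f = g + h(node).
    frontier = [(heuristic(start), 0.0, start)]
    visited = set()
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if is_goal(node):
            return node, g
        for child in expand(node):
            if child not in visited:
                g_child = g + cost(node, child)
                heapq.heappush(
                    frontier, (g_child + heuristic(child), g_child, child)
                )
    return None, float("inf")

# Toy usage: "derive" the target string "abb" one symbol at a time.
# Each appended symbol is one derivation step with unit cost, and
# remaining-length is an admissible heuristic (never overestimates).
target = "abb"
goal, total_cost = a_star(
    start="",
    expand=lambda s: [s + c for c in "ab"] if len(s) < len(target) else [],
    cost=lambda a, b: 1.0,
    heuristic=lambda s: len(target) - len(s),
    is_goal=lambda s: s == target,
)
```

The same skeleton applies unchanged when nodes are partial programs and `expand` applies grammar rules top-down; only the heuristic (here a trivial length gap, in the paper a trained relaxation's loss) changes.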
