Paper Title
Scalable Regularization of Scene Graph Generation Models using Symbolic Theories
Paper Authors
Paper Abstract
Several techniques have recently aimed to improve the performance of deep learning models for Scene Graph Generation (SGG) by incorporating background knowledge. State-of-the-art techniques can be divided into two families: one in which the background knowledge is incorporated into the model in a subsymbolic fashion, and another in which the background knowledge is maintained in symbolic form. Despite promising results, both families of techniques face shortcomings: the first requires ad hoc, more complex neural architectures that increase training or inference costs; the second suffers from limited scalability with respect to the size of the background knowledge. Our work introduces a regularization technique for injecting symbolic background knowledge into neural SGG models that overcomes the limitations of prior art. Our technique is model-agnostic, does not incur any cost at inference time, and scales to previously unmanageable background knowledge sizes. We demonstrate that our technique can improve the accuracy of state-of-the-art SGG models by up to 33%.
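To make the general idea concrete, below is a minimal sketch (not the paper's actual formulation) of how a symbolic constraint could be turned into a training-time regularizer for an SGG model. The object/predicate vocabularies, the `FORBIDDEN` rule set, the `symbolic_regularizer` helper, and the 0.1 weighting are all hypothetical illustrations; the only assumption is a PyTorch-style model that outputs relation logits per subject-object pair.

```python
import torch
import torch.nn.functional as F

# Hypothetical symbolic theory: triples (subject_class, predicate, object_class)
# declared impossible by background knowledge. Vocabularies are made up here.
FORBIDDEN = {
    ("person", "riding", "building"),
    ("dog", "wearing", "car"),
}
OBJ_CLASSES = ["person", "dog", "building", "car"]
PRED_CLASSES = ["riding", "wearing", "on", "near"]
PRED_IDX = {p: i for i, p in enumerate(PRED_CLASSES)}


def symbolic_regularizer(pred_logits, subj_labels, obj_labels):
    """Penalize probability mass placed on predicates that the symbolic
    theory forbids for each subject-object pair.

    pred_logits: (N, P) relation logits for N subject-object pairs
    subj_labels, obj_labels: lists of N object-class names
    """
    probs = F.softmax(pred_logits, dim=-1)
    penalty = pred_logits.new_zeros(())
    for i, (s, o) in enumerate(zip(subj_labels, obj_labels)):
        for p, j in PRED_IDX.items():
            if (s, p, o) in FORBIDDEN:
                # -log(1 - probability of a forbidden predicate)
                penalty = penalty - torch.log1p(-probs[i, j] + 1e-8)
    return penalty / max(len(subj_labels), 1)


# Usage: the regularizer is added to the task loss during training only,
# so nothing changes (and no cost is incurred) at inference time.
logits = torch.randn(2, len(PRED_CLASSES), requires_grad=True)
subjects, objects = ["person", "dog"], ["building", "car"]
task_loss = F.cross_entropy(logits, torch.tensor([2, 3]))
loss = task_loss + 0.1 * symbolic_regularizer(logits, subjects, objects)
loss.backward()
```

Because the penalty depends only on the model's output distribution and the rule set, this style of regularization is model-agnostic and leaves the architecture and inference procedure untouched, which is the property the abstract emphasizes.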