Paper Title
Better Captioning with Sequence-Level Exploration
Paper Authors
Paper Abstract
The sequence-level learning objective has been widely used in captioning tasks, enabling many models to achieve state-of-the-art performance. Under this objective, the model is trained with a reward based on the quality of its generated captions (at the sequence level). In this work, we show the limitations of the current sequence-level learning objective for captioning tasks from both theoretical and empirical perspectives. In theory, we show that the current objective is equivalent to optimizing only the precision side of the caption set generated by the model, and therefore overlooks the recall side. Empirical results show that a model trained with this objective tends to score lower on the recall side. We propose adding a sequence-level exploration term to the current objective to boost recall. It guides the model to explore more plausible captions during training. In this way, the proposed objective takes both the precision and recall sides of generated captions into account. Experiments show the effectiveness of the proposed method on both video and image captioning datasets.
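To make the idea concrete, the sketch below shows a minimal self-critical sequence-level loss with an added exploration term, written in Python/PyTorch. It is only an illustrative sketch under assumptions: the entropy bonus standing in for the exploration term, and all function and argument names, are placeholders and are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def sequence_level_loss(sample_logp, sample_reward, greedy_reward,
                            token_logits, pad_mask, explore_weight=0.01):
        """Illustrative sequence-level loss with an exploration bonus.

        sample_logp:   (B,)      summed log-prob of each sampled caption
        sample_reward: (B,)      sequence-level reward (e.g. CIDEr) of the sample
        greedy_reward: (B,)      reward of the greedy caption, used as baseline
        token_logits:  (B, T, V) decoder logits along the sampled caption
        pad_mask:      (B, T)    1 for real tokens, 0 for padding
        """
        # Standard sequence-level (policy-gradient) term: pushes probability
        # mass toward captions with above-baseline reward (the precision side).
        advantage = (sample_reward - greedy_reward).detach()
        pg_loss = -(advantage * sample_logp).mean()

        # Illustrative exploration term: token-level entropy, which keeps
        # probability mass on other plausible captions (aimed at recall).
        log_p = F.log_softmax(token_logits, dim=-1)
        token_entropy = -(log_p.exp() * log_p).sum(-1)            # (B, T)
        entropy = (token_entropy * pad_mask).sum() / pad_mask.sum()

        # Minimizing this loss maximizes expected reward plus the entropy bonus.
        return pg_loss - explore_weight * entropy

In practice such a loss would be computed on captions sampled from the decoder, with the greedy-decoded caption's reward serving as the baseline; the exact form and weighting of the exploration term used in the paper may differ from this placeholder entropy bonus.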