Title
Deep Learning Approaches on Image Captioning: A Review
Authors
Abstract
Image captioning is a research area of immense importance, aiming to generate natural language descriptions for visual content in the form of still images. The advent of deep learning and, more recently, of vision-language pre-training techniques has revolutionized the field, leading to more sophisticated methods and improved performance. In this survey paper, we provide a structured review of deep learning methods in image captioning by presenting a comprehensive taxonomy and discussing each method category in detail. Additionally, we examine the datasets commonly employed in image captioning research, as well as the evaluation metrics used to assess the performance of different captioning models. We address the challenges faced in this field, emphasizing issues such as object hallucination, missing context, illumination conditions, contextual understanding, and referring expressions. We rank the performance of different deep learning methods according to widely used evaluation metrics, giving insight into the current state of the art. Furthermore, we identify several potential future research directions in this area, including tackling the information misalignment problem between the image and text modalities, mitigating dataset bias, incorporating vision-language pre-training methods to enhance caption generation, and developing improved evaluation tools to accurately measure the quality of image captions.