Paper Title
Hierarchical3D Adapters for Long Video-to-text Summarization
Paper Authors
Abstract
In this paper, we focus on video-to-text summarization and investigate how to best utilize multimodal information for summarizing long inputs (e.g., an hour-long TV show) into long outputs (e.g., a multi-sentence summary). We extend SummScreen (Chen et al., 2021), a dialogue summarization dataset consisting of transcripts of TV episodes with reference summaries, and create a multimodal variant by collecting corresponding full-length videos. We incorporate multimodal information into a pre-trained textual summarizer efficiently using adapter modules augmented with a hierarchical structure while tuning only 3.8% of model parameters. Our experiments demonstrate that multimodal information offers superior performance over more memory-heavy and fully fine-tuned textual summarization methods.
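The abstract mentions tuning only a small fraction of parameters via adapter modules inserted into a frozen pre-trained summarizer. As a rough illustration of the general bottleneck-adapter idea (down-projection, nonlinearity, up-projection, residual connection), here is a minimal numpy sketch; the dimensions, names, and ReLU choice are illustrative assumptions, not the authors' exact hierarchical design:

```python
import numpy as np

def adapter(x, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.

    In adapter tuning, only W_down and W_up are trained; the surrounding
    pre-trained transformer weights stay frozen, which is why only a small
    fraction of total model parameters gets updated.
    """
    h = np.maximum(x @ W_down, 0.0)  # ReLU inside the low-dimensional bottleneck
    return x + h @ W_up              # residual keeps the frozen model's output path intact

# Parameter-count intuition: with hidden size d and bottleneck size b << d,
# each adapter adds only 2*d*b weights, versus d*d-scale matrices in the backbone.
d, b = 768, 64
rng = np.random.default_rng(0)
W_down = rng.standard_normal((d, b)) * 0.01  # illustrative random init
W_up = rng.standard_normal((b, d)) * 0.01
x = rng.standard_normal((1, d))              # one hidden-state vector
y = adapter(x, W_down, W_up)
print(y.shape)  # output keeps the backbone's hidden dimension: (1, 768)
```

Because the output dimension matches the input, such a module can be dropped between existing transformer sublayers without changing the rest of the architecture.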