Paper Title

MaPLe: Multi-modal Prompt Learning

Paper Authors

Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan

Paper Abstract

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
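
To make the coupling idea concrete, below is a minimal PyTorch sketch of the mechanism as the abstract describes it: learnable language prompts are kept at several early stages, and each stage's language prompts are projected into vision prompts through a per-stage linear coupling function. This is an illustrative sketch, not the released implementation; the class and parameter names (MultiModalPrompts, n_prompts, prompt_depth, the embedding dimensions) are assumptions chosen for clarity, and the authors' actual code is available at the repository linked above.

```python
import torch
import torch.nn as nn


class MultiModalPrompts(nn.Module):
    """Per-stage language prompts plus linearly coupled vision prompts (sketch)."""

    def __init__(self, n_prompts=2, prompt_depth=9, text_dim=512, vision_dim=768):
        super().__init__()
        # One set of learnable language prompt tokens per early stage
        # ("deep prompting" across stages for stage-wise context learning).
        self.text_prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(n_prompts, text_dim) * 0.02)
             for _ in range(prompt_depth)]
        )
        # A linear coupling function per stage maps language prompts into the
        # vision branch, so the two sets of prompts are learned jointly rather
        # than as independent uni-modal parameters.
        self.couplers = nn.ModuleList(
            [nn.Linear(text_dim, vision_dim) for _ in range(prompt_depth)]
        )

    def forward(self):
        # Returns per-stage (language, vision) prompt pairs that a CLIP-like
        # encoder would prepend to the token sequences of each branch.
        return [(p, proj(p)) for p, proj in zip(self.text_prompts, self.couplers)]


if __name__ == "__main__":
    prompts = MultiModalPrompts()
    for stage, (t, v) in enumerate(prompts()):
        print(f"stage {stage}: text prompt {tuple(t.shape)}, vision prompt {tuple(v.shape)}")
```

Because the vision prompts are a deterministic function of the language prompts, gradients from both branches flow into a shared set of parameters, which is what enforces the strong coupling and discourages independent uni-modal solutions described in the abstract.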
