Paper Title
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
Paper Authors
Paper Abstract
Adapting large-scale pretrained models to various downstream tasks via fine-tuning is a standard method in machine learning. Recently, parameter-efficient fine-tuning methods have shown promise in adapting a pretrained model to different tasks while training only a few parameters. Despite their success, most existing methods were proposed for Natural Language Processing tasks with language Transformers, and adaptation to Computer Vision tasks with Vision Transformers remains under-explored, especially for dense vision tasks. Further, in multi-task settings, individually fine-tuning and storing separate models for different tasks is inefficient. In this work, we provide an extensive multi-task parameter-efficient benchmark and examine existing parameter-efficient fine-tuning methods from NLP on vision tasks. Our results on four different dense vision tasks show that existing methods cannot be efficiently integrated due to the hierarchical nature of Hierarchical Vision Transformers. To overcome this issue, we propose Polyhistor and Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling Kernels, to share information across different tasks with a few trainable parameters. This leads to favorable performance improvements over existing parameter-efficient methods while using fewer trainable parameters. Specifically, Polyhistor achieves competitive accuracy compared to the state-of-the-art while using only ~10% of their trainable parameters. Furthermore, our methods show larger performance gains when larger networks and more pretraining data are used.
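The abstract only names the two components, so the following is a minimal, hypothetical sketch of the general idea rather than the authors' implementation: a small hypernetwork decomposed into two heads emits low-rank factors of a shared adapter template from a task embedding, and a tiny per-layer scaling kernel expands that template (here via a Kronecker product, an assumption) to the channel width of each hierarchical-transformer stage. All class names, dimensions (`template_dim`, `rank`, `layer_dim`), and the residual-adapter form are illustrative assumptions.

```python
# Hypothetical sketch (illustrative only, not the authors' released code).
import torch
import torch.nn as nn


class DecomposedHyperNetwork(nn.Module):
    """Maps a task embedding to a rank-r factored adapter template."""

    def __init__(self, task_emb_dim: int, template_dim: int, rank: int):
        super().__init__()
        self.template_dim, self.rank = template_dim, rank
        # Two small linear heads replace one large weight-generating head.
        self.head_a = nn.Linear(task_emb_dim, template_dim * rank)
        self.head_b = nn.Linear(task_emb_dim, rank * template_dim)

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        a = self.head_a(task_emb).view(self.template_dim, self.rank)
        b = self.head_b(task_emb).view(self.rank, self.template_dim)
        # Shared adapter template of shape (template_dim, template_dim).
        return a @ b


class LayerwiseScaledAdapter(nn.Module):
    """Adapts one transformer block: the shared template is resized to the
    block's channel width with a tiny learnable scaling kernel."""

    def __init__(self, layer_dim: int, template_dim: int):
        super().__init__()
        assert layer_dim % template_dim == 0
        k = layer_dim // template_dim
        self.scaling_kernel = nn.Parameter(torch.eye(k))  # only k*k params per layer

    def forward(self, features: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        # Kronecker product lifts (template_dim x template_dim) to (layer_dim x layer_dim).
        weight = torch.kron(self.scaling_kernel, template)
        return features + features @ weight  # residual adapter update


if __name__ == "__main__":
    hypernet = DecomposedHyperNetwork(task_emb_dim=64, template_dim=24, rank=4)
    task_emb = torch.randn(64)              # one embedding per task
    template = hypernet(task_emb)           # shared across layers of this task
    adapter = LayerwiseScaledAdapter(layer_dim=96, template_dim=24)
    x = torch.randn(2, 196, 96)             # tokens from a 96-channel stage
    print(adapter(x, template).shape)       # torch.Size([2, 196, 96])
```

The point of the sketch is the parameter accounting: per task only a task embedding is stored, per layer only a small scaling kernel, and the hypernetwork heads are shared, which is how the described design keeps trainable parameters low across tasks and across stages of differing width.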