Paper Title
Multi-source Domain Adaptation for Visual Sentiment Classification
Paper Authors
Paper Abstract
Existing domain adaptation methods for visual sentiment classification are typically investigated under the single-source scenario, where knowledge learned from a source domain with sufficient labeled data is transferred to a target domain with loosely labeled or unlabeled data. In practice, however, data from a single source domain usually have limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, it learns to find a unified sentiment latent space in which data from both the source and target domains share a similar distribution. This is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
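The abstract combines a cycle-consistency constraint with adversarial alignment across multiple source domains. As a minimal sketch of how such an objective is typically assembled (the loss weights, function names, and the simple weighted-sum structure below are illustrative assumptions, not details given in the abstract):

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    # L1 reconstruction penalty: an image mapped into the shared
    # sentiment latent space and back should match the original.
    return float(np.mean(np.abs(x - x_reconstructed)))

def total_objective(loss_cls, loss_adv, loss_cyc, lam_adv=1.0, lam_cyc=10.0):
    # Hypothetical overall objective: classification loss on labeled
    # source data, plus adversarial alignment and cycle-consistency
    # terms. The trade-off weights here are placeholder values.
    return loss_cls + lam_adv * loss_adv + lam_cyc * loss_cyc

# Toy example: a batch of images and an imperfect reconstruction.
x = np.ones((2, 3))
x_rec = np.full((2, 3), 0.9)
l_cyc = cycle_consistency_loss(x, x_rec)
total = total_objective(loss_cls=0.5, loss_adv=0.2, loss_cyc=l_cyc)
```

In the actual method the adversarial term would come from domain discriminators trained against the shared encoder; this sketch only shows how the pieces of the objective combine.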