Paper Title

Uncorrelated Semi-paired Subspace Learning

Paper Authors

Li Wang, Lei-Hong Zhang, Chungen Shen, Ren-Cang Li

Paper Abstract

Multi-view datasets are increasingly collected in many real-world applications, and existing multi-view learning methods have achieved better learning performance than conventional single-view learning methods applied to each view individually. Yet most of these multi-view learning methods are built on the assumption that no view is missing at any instance and that all data points across all views are perfectly paired. Hence, they cannot handle unpaired data and simply exclude them from the learning process. In reality, however, unpaired data can be far more abundant than paired data, and discarding all of them incurs a tremendous waste of resources. In this paper, we focus on learning uncorrelated features by semi-paired subspace learning, motivated by many existing works that demonstrate the great success of learning uncorrelated features. Specifically, we propose a generalized uncorrelated multi-view subspace learning framework that can naturally integrate many proven learning criteria on semi-paired data. To showcase the flexibility of the framework, we instantiate five new semi-paired models for both unsupervised and semi-supervised learning. We also design a successive alternating approximation (SAA) method to solve the resulting optimization problem, and the method can be combined with the powerful Krylov subspace projection technique if needed. Extensive experimental results on multi-view feature extraction and multi-modality classification show that our proposed models perform competitively with or better than the baselines.
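The abstract does not spell out the optimization models themselves, but a common ingredient of semi-paired subspace learning in this line of work is to estimate the cross-view covariance from the paired samples only, while estimating each view's within-view scatter from all samples (paired and unpaired), subject to an uncorrelatedness constraint on the projected features. The following is a minimal, hypothetical sketch of that idea in the style of semi-paired CCA; it is not the authors' SAA method or any of their five models, and the function name and parameters are invented for illustration.

import numpy as np
from scipy.linalg import eigh

def semi_paired_cca(Xp, Yp, Xu, Yu, dim=2, reg=1e-6):
    """Toy semi-paired subspace learning (illustrative sketch only).

    Cross-covariance Cxy uses only the paired samples (Xp, Yp);
    within-view covariances Cxx, Cyy use ALL samples, so the
    unpaired data (Xu, Yu) still shape the learned subspaces.
    """
    X_all = np.vstack([Xp, Xu])
    Y_all = np.vstack([Yp, Yu])
    # Center each view by the mean over all of its samples.
    Xp_c, X_c = Xp - X_all.mean(0), X_all - X_all.mean(0)
    Yp_c, Y_c = Yp - Y_all.mean(0), Y_all - Y_all.mean(0)
    dx, dy = Xp.shape[1], Yp.shape[1]
    Cxy = Xp_c.T @ Yp_c / len(Xp)                      # paired data only
    Cxx = X_c.T @ X_c / len(X_all) + reg * np.eye(dx)  # all data
    Cyy = Y_c.T @ Y_c / len(Y_all) + reg * np.eye(dy)  # all data
    # Generalized eigenproblem A w = lambda B w, as in classical CCA.
    A = np.zeros((dx + dy, dx + dy))
    A[:dx, dx:], A[dx:, :dx] = Cxy, Cxy.T
    B = np.zeros_like(A)
    B[:dx, :dx], B[dx:, dx:] = Cxx, Cyy
    vals, vecs = eigh(A, B)  # eigenvectors come out B-orthonormal
    W = vecs[:, np.argsort(vals)[::-1][:dim]]
    # B-orthonormality yields a joint uncorrelatedness constraint,
    # Wx' Cxx Wx + Wy' Cyy Wy = I, on the projected features.
    return W[:dx], W[dx:]

In this sketch the uncorrelatedness constraint is enforced for free by the B-orthonormality of the generalized eigenvectors; the paper's framework instead handles such constraints within a general optimization problem solved by the SAA iteration, which this closed-form eigen-solver does not reproduce.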
