Title


Kernelized Support Tensor Train Machines

Authors

Cong Chen, Kim Batselier, Wenjian Yu, Ngai Wong

Abstract


Tensor, a multi-dimensional data structure, has been exploited recently in the machine learning community. Traditional machine learning approaches are vector- or matrix-based, and cannot handle tensorial data directly. In this paper, we propose a tensor train (TT)-based kernel technique for the first time, and apply it to the conventional support vector machine (SVM) for image classification. Specifically, we propose a kernelized support tensor train machine that accepts tensorial input and preserves the intrinsic kernel property. The main contributions are threefold. First, we propose a TT-based feature mapping procedure that maintains the TT structure in the feature space. Second, we demonstrate two ways to construct the TT-based kernel function while considering consistency with the TT inner product and preservation of information. Third, we show that it is possible to apply different kernel functions on different data modes. In principle, our method tensorizes the standard SVM on its input structure and kernel mapping scheme. Extensive experiments are performed on real-world tensor data, which demonstrate the superiority of the proposed scheme on few-sample, high-dimensional inputs.
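To make the central objects concrete, the sketch below shows a standard tensor train (TT) decomposition via successive SVDs and the TT inner product that the paper's kernel construction is designed to be consistent with. This is a minimal illustrative sketch in numpy, not the paper's implementation; the function names `tt_svd` and `tt_inner` are my own.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into TT cores G_k of shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1, using successive truncated SVDs."""
    dims = tensor.shape
    cores, r = [], 1
    rest = np.asarray(tensor, dtype=float)
    for n in dims[:-1]:
        rest = rest.reshape(r * n, -1)
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        r_new = min(max_rank, S.size)
        cores.append(U[:, :r_new].reshape(r, n, r_new))
        rest = S[:r_new, None] * Vt[:r_new]   # carry the remainder forward
        r = r_new
    cores.append(rest.reshape(r, dims[-1], 1))
    return cores

def tt_inner(cores_a, cores_b):
    """Inner product <A, B> computed directly from the TT cores of A and B,
    without ever reconstructing the full tensors."""
    M = np.ones((1, 1))
    for Ga, Gb in zip(cores_a, cores_b):
        # Contract the mode index n and the previous rank pair (a, b):
        # M_{ab} * Ga_{a n c} * Gb_{b n d} -> M_{cd}
        M = np.einsum('ab,anc,bnd->cd', M, Ga, Gb)
    return M[0, 0]

# With a large enough max_rank the decomposition is exact, so the
# core-wise inner product matches the plain elementwise one.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
Y = rng.standard_normal((3, 4, 5))
cx = tt_svd(X, max_rank=20)
cy = tt_svd(Y, max_rank=20)
print(abs(tt_inner(cx, cy) - np.sum(X * Y)) < 1e-8)
```

A linear TT kernel is then just `tt_inner` on the two inputs' cores; the abstract's point is that nonlinear (e.g. mode-wise) kernel maps can be applied while keeping this core-wise structure, so the kernel is never evaluated on the full, exponentially large tensors.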
