Paper Title

Large-Margin Representation Learning for Texture Classification

Authors

Jonathan de Matos, Luiz Eduardo Soares de Oliveira, Alceu de Souza Britto Jr., Alessandro Lameiras Koerich

Abstract

This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification. The core of such an approach is a loss function that computes the distances between instances of interest and support vectors. The objective is to update the weights of CLs iteratively to learn a representation with a large margin between classes. Each iteration results in a large-margin discriminant model represented by support vectors based on such a representation. The advantage of the proposed approach w.r.t. convolutional neural networks (CNNs) is two-fold. First, it allows representation learning with a small amount of data due to the reduced number of parameters compared to an equivalent CNN. Second, it has a low training cost since the backpropagation considers only support vectors. The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
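The abstract only outlines the training loop, so the snippet below is a minimal illustrative reading of it rather than the authors' implementation: convolutional features are extracted, a linear SVM is refitted on the detached features at each iteration, and a generic pull/push hinge loss over distances between batch instances and the SVM's support vectors is backpropagated into the convolutional layers. FeatureCNN, margin_loss, and train_step are assumed names, and the exact form of the loss function is an assumption.

```python
# Illustrative sketch only; the pull/push hinge loss is an assumption, not the paper's exact loss.
import torch
import torch.nn as nn
from sklearn.svm import SVC


class FeatureCNN(nn.Module):
    """Small stack of convolutional layers (CLs) producing a feature vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


def margin_loss(feats, labels, sv_feats, sv_labels, margin=1.0):
    """Pull instances toward same-class support vectors and push them at least
    `margin` away from other-class support vectors (assumes a multi-class batch)."""
    d = torch.cdist(feats, sv_feats)                      # (batch, n_sv) distances
    same = labels.unsqueeze(1) == sv_labels.unsqueeze(0)  # same-class mask
    pull = (d * same).sum() / same.sum().clamp(min=1)
    push = torch.relu(margin - d[~same]).mean()
    return pull + push


def train_step(model, optimizer, images, labels):
    feats = model(images)
    # Fit a linear SVM on detached features; only its support vectors enter the loss,
    # so the large-margin discriminant model is re-estimated at every iteration.
    svm = SVC(kernel="linear").fit(feats.detach().cpu().numpy(), labels.cpu().numpy())
    sv_idx = torch.as_tensor(svm.support_, dtype=torch.long, device=feats.device)
    loss = margin_loss(feats, labels, feats[sv_idx], labels[sv_idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Fitting the SVM on detached features keeps scikit-learn outside the autograd graph; gradients reach the convolutional layers only through the distance terms involving the current support vectors, which reflects the abstract's claim of a reduced backpropagation cost.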
