Paper Title

Subset Sampling For Progressive Neural Network Learning

Paper Authors

Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis

Paper Abstract

Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data. While this approach exempts the users from the manual task of designing and validating multiple network topologies, it often requires an enormous number of computations. In this paper, we propose to speed up this process by exploiting subsets of training data at each incremental training step. Three different sampling strategies for selecting the training samples according to different criteria are proposed and evaluated. We also propose to perform online hyperparameter selection during the network progression, which further reduces the overall training time. Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably while operating on par with the baseline approach exploiting the entire training set throughout the training process.
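
To make the core idea concrete, below is a minimal, hypothetical sketch in Python/NumPy: a network grown block by block, where each progression step fits the output weights on a sampled subset of the training data instead of the full set. Everything in it is an illustrative assumption, not the paper's method — it uses random-feature hidden blocks, plain uniform subset sampling (the paper evaluates three specific sampling criteria), a fixed ridge term, and it omits the paper's online hyperparameter selection.

```python
# Hypothetical sketch: progressive network growth with per-step subset sampling.
# Illustrates the general scheme only; not the architecture or samplers from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 samples, 10 features, 3 classes (one-hot targets).
X = rng.standard_normal((500, 10))
y = rng.integers(0, 3, size=500)
Y = np.eye(3)[y]

W_hidden = np.empty((10, 0))   # hidden weights, grown block by block
subset_ratio = 0.3             # fraction of the data used per progression step
best_acc, W_out = -np.inf, None

for step in range(20):
    # Grow the topology: append a new block of random hidden units.
    W_hidden = np.hstack([W_hidden, rng.standard_normal((10, 8))])

    # Subset sampling: this step trains on a random fraction of the data.
    idx = rng.choice(len(X), size=int(subset_ratio * len(X)), replace=False)
    H = np.tanh(X[idx] @ W_hidden)

    # Fit output weights on the subset only (regularized least squares).
    W_step = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ Y[idx])

    # Evaluate on all data; stop progressing when accuracy stops improving.
    acc = np.mean(np.argmax(np.tanh(X @ W_hidden) @ W_step, axis=1) == y)
    if acc <= best_acc:
        W_hidden = W_hidden[:, :-8]   # roll back the unhelpful block
        break
    best_acc, W_out = acc, W_step

print(f"blocks kept: {W_hidden.shape[1] // 8}, accuracy: {best_acc:.3f}")
```

Note that the least-squares fit in each step touches only the sampled rows, so the per-step training cost scales with the subset size rather than the full dataset — which is the source of the speed-up the abstract describes.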
