Paper Title
A New Algorithm for Tessellated Kernel Learning
Paper Authors
Abstract
The accuracy and complexity of machine learning algorithms based on kernel optimization are limited by the set of kernels over which they are able to optimize. An ideal set of kernels should: admit a linear parameterization (for tractability); be dense in the set of all kernels (for robustness); and be universal (for accuracy). The recently proposed class of Tessellated Kernels (TKs) is currently the only known class that meets all three criteria. However, previous algorithms for optimizing TKs were limited to classification and relied on Semidefinite Programming (SDP), restricting them to relatively small datasets. By contrast, the two-step algorithm proposed here scales to 10,000 data points and extends to the regression problem. Furthermore, when applied to benchmark data, the algorithm demonstrates significant improvement in performance over Neural Nets and SimpleMKL with similar computation time.