Paper Title

Model Agnostic Combination for Ensemble Learning

Authors

Ohad Silbert, Yitzhak Peleg, Evi Kopelowitz

Abstract

An ensemble of models is well known to improve on single-model performance. We present a novel ensembling technique, coined MAC, designed to find the optimal function for combining models while remaining invariant to the number of sub-models involved in the combination. Being agnostic to the number of sub-models enables adding and replacing sub-models in the combination even after deployment, unlike many current ensembling methods such as stacking, boosting, mixture of experts, and super learners, which lock in the models used for combination during training and therefore require retraining whenever a new model is introduced into the ensemble. We show that on the Kaggle RSNA Intracranial Hemorrhage Detection challenge, MAC outperforms classical averaging methods, demonstrates results competitive with boosting via XGBoost for a fixed number of sub-models, and outperforms it when sub-models are added to the combination without retraining.
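The abstract contrasts MAC with classical averaging, whose key property is that it works for any number of sub-models. The sketch below illustrates that property only: a plain averaging baseline and a hypothetical combiner built from permutation-invariant statistics of the sub-model outputs, so sub-models can be added or swapped without retraining. The function names and the choice of statistics are illustrative assumptions; the paper's actual MAC function is not specified in this abstract.

```python
import numpy as np

def average_combine(probs):
    """Classical baseline: average the per-model predicted probabilities.
    probs: array of shape (n_models, n_samples); works for any n_models."""
    return np.mean(probs, axis=0)

def invariant_combine(probs, f=None):
    """Hypothetical number-agnostic combiner (NOT the paper's MAC):
    compute permutation-invariant statistics (mean and std) over the
    sub-model axis, then apply a function f to them. Because the
    statistics are invariant to the number and order of sub-models,
    models can be added or replaced without retraining f."""
    stats = np.stack([probs.mean(axis=0), probs.std(axis=0)], axis=-1)
    if f is None:
        # Default falls back to the plain average (the mean statistic).
        f = lambda s: s[..., 0]
    return f(stats)
```

For example, stacking a fourth model's predictions onto a `(3, n_samples)` array changes nothing in the combiner itself, which is the deployment flexibility the abstract emphasizes.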
