Paper Title
Ensembling over Classifiers: a Bias-Variance Perspective
Paper Authors
Paper Abstract
Ensembles are a straightforward, remarkably effective method for improving the accuracy, calibration, and robustness of models on classification tasks; yet, the reasons that underlie their success remain an active area of research. We build upon the extension to the bias-variance decomposition by Pfau (2013) in order to gain crucial insights into the behavior of ensembles of classifiers. Introducing a dual reparameterization of the bias-variance tradeoff, we first derive generalized laws of total expectation and variance for nonsymmetric losses typical of classification tasks. Comparing conditional and bootstrap bias/variance estimates, we then show that conditional estimates necessarily incur an irreducible error. Next, we show that ensembling in dual space reduces the variance and leaves the bias unchanged, whereas standard ensembling can arbitrarily affect the bias. Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction. We conclude with an empirical analysis of recent deep learning methods that ensemble over hyperparameters, revealing that these techniques indeed favor bias reduction. This suggests that, contrary to classical wisdom, targeting bias reduction may be a promising direction for classifier ensembles.
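
For reference, the decomposition the abstract builds on has the following general shape. This is a sketch of the Bregman-divergence bias-variance decomposition in the style of Pfau (2013); the notation (generator F, prediction ŷ) is ours, and the paper's exact statement for nonsymmetric losses may differ in its details.

For a Bregman divergence $D_F(u, v) = F(u) - F(v) - \langle \nabla F(v),\, u - v \rangle$, a label $Y$, and a random prediction $\hat{y}$ independent of $Y$,
\[
\mathbb{E}_{Y,\hat{y}}\!\left[ D_F(Y, \hat{y}) \right]
= \underbrace{\mathbb{E}_{Y}\!\left[ D_F(Y, y^{\ast}) \right]}_{\text{noise}}
+ \underbrace{D_F(y^{\ast}, \bar{y})}_{\text{bias}}
+ \underbrace{\mathbb{E}_{\hat{y}}\!\left[ D_F(\bar{y}, \hat{y}) \right]}_{\text{variance}},
\]
where $y^{\ast} = \mathbb{E}[Y]$ and the central prediction $\bar{y}$ is the dual mean, defined by $\nabla F(\bar{y}) = \mathbb{E}_{\hat{y}}[\nabla F(\hat{y})]$, i.e., an average taken in the dual coordinates $\nabla F$ rather than in prediction space.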
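To make the contrast between the two ensembling modes concrete, here is a minimal NumPy sketch. The functions standard_ensemble and dual_ensemble and the toy data are ours for illustration; identifying "dual space" with log-probability space (the Legendre dual coordinates of negative entropy, matching the KL/log-loss geometry) is an assumption based on standard Bregman duality, not a detail stated in the abstract.

```python
import numpy as np

def standard_ensemble(probs):
    """Standard ensembling: arithmetic mean of predicted probability vectors.

    probs: array of shape (n_members, n_classes), rows summing to 1.
    Per the abstract, averaging here can arbitrarily affect the bias term.
    """
    return probs.mean(axis=0)

def dual_ensemble(probs, eps=1e-12):
    """Dual-space ensembling under the log-loss / KL geometry (our reading).

    Averaging log-probabilities yields a normalized geometric mean of the
    members' predictions; per the abstract, combining in dual space reduces
    variance while leaving the bias unchanged.
    """
    log_mean = np.log(probs + eps).mean(axis=0)  # average in dual coordinates
    p = np.exp(log_mean - log_mean.max())        # map back toward the simplex
    return p / p.sum()                           # renormalize

# Toy usage: three ensemble members predicting over four classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("standard:", standard_ensemble(probs))
print("dual:    ", dual_ensemble(probs))
```

Note the design difference: the arithmetic mean stays in prediction (probability) space, while the dual average is taken after applying the loss's dual map (here, the logarithm) and then mapped back, which is what pins the bias in place in the decomposition above.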