Paper Title
Re-purposing Heterogeneous Generative Ensembles with Evolutionary Computation
Paper Authors

Paper Abstract
Generative Adversarial Networks (GANs) are popular tools for generative modeling. The dynamics of their adversarial learning give rise to convergence pathologies during training, such as mode and discriminator collapse. In machine learning, ensembles of predictors demonstrate better results than a single predictor on many tasks. In this study, we apply two evolutionary algorithms (EAs) to create ensembles that re-purpose generative models, i.e., given a set of heterogeneous generators that were optimized for one objective (e.g., minimizing Fréchet Inception Distance), we create ensembles of them that optimize a different objective (e.g., maximizing the diversity of the generated samples). The first method fixes the exact size of the ensemble, while the second only bounds the ensemble size from above. Experimental analysis on the MNIST image benchmark demonstrates that both EA ensemble creation methods can re-purpose the models without reducing their original functionality. The EA-based methods demonstrate significantly better performance than other heuristic-based methods. When comparing the two evolutionary methods, the one with only an upper bound on the ensemble size performs best.
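The following is a minimal sketch, not the authors' implementation, of the two EA variants the abstract describes: evolving a subset of pre-trained generators into an ensemble that optimizes a new objective. The function names, parameters, and the toy fitness function are assumptions for illustration only; in the paper, the objective would be a quantity such as sample diversity or FID computed from the selected generators' outputs.

```python
# Sketch of the two EA variants: candidates are subsets of generator indices.
# The fitness function is a placeholder, not the paper's metric.
import random

def evolve_ensemble(num_generators, fitness, ensemble_size=None, max_size=None,
                    pop_size=20, generations=50, seed=0):
    """Evolve a subset of generators.

    ensemble_size: if given, every candidate uses exactly this many generators
                   (the first, fixed-size EA variant).
    max_size:      if given, candidates may use at most this many generators
                   (the second, upper-bound variant).
    """
    rng = random.Random(seed)

    def random_candidate():
        if ensemble_size is not None:
            chosen = rng.sample(range(num_generators), ensemble_size)
        else:
            chosen = rng.sample(range(num_generators), rng.randint(1, max_size))
        return frozenset(chosen)

    def mutate(cand):
        cand = set(cand)
        if ensemble_size is not None:
            # swap one member for a non-member to preserve the exact size
            out = rng.choice(sorted(cand))
            inn = rng.choice([g for g in range(num_generators) if g not in cand])
            cand.remove(out)
            cand.add(inn)
        else:
            # flip one generator in or out, respecting the upper bound
            g = rng.randrange(num_generators)
            if g in cand and len(cand) > 1:
                cand.remove(g)
            elif g not in cand and len(cand) < max_size:
                cand.add(g)
        return frozenset(cand)

    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        # (mu + lambda)-style step: keep the better half, refill via mutation
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = parents + [mutate(rng.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

# Toy usage: pretend each generator has a scalar "diversity" score and an
# ensemble scores the mean of its members (purely illustrative).
scores = [0.1, 0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.6]
fit = lambda ens: sum(scores[g] for g in ens) / len(ens)
print(evolve_ensemble(len(scores), fit, max_size=3))
```

The key design difference between the two variants shows up only in the mutation operator: the fixed-size variant must swap members to keep the ensemble size constant, whereas the upper-bound variant may grow or shrink the ensemble as long as it stays within the bound.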