Paper Title


But that's not why: Inference adjustment by interactive prototype revision

Authors

Michael Gerstenberger, Sebastian Lapuschkin, Peter Eisert, Sebastian Bosse

Abstract


Despite significant advances in machine learning, decision-making of artificial agents is still not perfect and often requires post-hoc human intervention. If the prediction of a model relies on unreasonable factors, it is desirable to remove their effect. Deep interactive prototype adjustment enables the user to give hints and correct the model's reasoning. In this paper, we demonstrate that prototypical-part models are well suited for this task, as their predictions are based on prototypical image patches that the user can interpret semantically. We show that even correct classifications can rely on unreasonable prototypes that result from confounding variables in a dataset. Hence, we propose simple yet effective interaction schemes for inference adjustment: the user is consulted interactively to identify faulty prototypes. Non-object prototypes can be removed by prototype masking or a custom mode of deselection training. Interactive prototype rejection allows machine-learning-naïve users to adjust the logic of reasoning without compromising accuracy.
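The prototype-masking idea mentioned above can be illustrated with a minimal sketch. In a ProtoPNet-style head (an assumption about the architecture, not the authors' exact code), class logits are a linear combination of prototype similarity scores, so zeroing the last-layer weights of a user-rejected prototype removes its contribution to every class logit. The function and variable names below are hypothetical:

```python
import numpy as np

def mask_prototypes(class_weights, rejected):
    """Zero out the columns of the last-layer weight matrix that
    correspond to user-rejected prototypes, so those prototypes no
    longer contribute to any class logit."""
    w = class_weights.copy()  # shape: (n_classes, n_prototypes)
    w[:, rejected] = 0.0
    return w

# Toy example: 2 classes, 4 prototypes.
weights = np.array([[1.0, 0.5, 0.0, 0.2],
                    [0.0, 0.1, 0.9, 0.3]])
similarities = np.array([0.8, 0.6, 0.4, 0.9])  # prototype activations

logits_before = weights @ similarities
logits_after = mask_prototypes(weights, rejected=[3]) @ similarities
# After masking, prototype 3 influences neither class logit.
```

Deselection training, by contrast, would fine-tune the model so that the rejected prototype's activation itself is suppressed; masking is the cheaper, training-free alternative.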
