Paper Title
Learning Classifier Systems for Self-Explaining Socio-Technical-Systems
Paper Authors
Paper Abstract
In socio-technical settings, operators are increasingly assisted by decision support systems. By employing these, important properties of socio-technical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this paper, we propose the usage of Learning Classifier Systems, a family of rule-based machine learning methods, to facilitate transparent decision making, and highlight some techniques to improve it. We then present a template of seven questions to assess application-specific explainability needs and demonstrate their usage in an interview-based case study for a manufacturing scenario. We find that the answers received did yield useful insights for a well-designed LCS model and requirements for stakeholders to engage actively with an intelligent agent.