Paper Title
Explainable AI for Software Engineering
Paper Authors
Paper Abstract
Artificial Intelligence/Machine Learning techniques have been widely used in software engineering to improve developer productivity, the quality of software systems, and decision-making. However, such AI/ML models for software engineering are still impractical, not explainable, and not actionable. These concerns often hinder the adoption of AI/ML models in software engineering practices. In this article, we first highlight the need for explainable AI in software engineering. Then, we summarize three successful case studies on how explainable AI techniques can be used to address the aforementioned challenges by making software defect prediction models more practical, explainable, and actionable.
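The case studies center on software defect prediction, where explainable AI techniques produce instance-level explanations of why a model flags a particular file or commit as defect-prone. The snippet below is a minimal, hypothetical sketch of that idea only: it assumes synthetic data, made-up metric names, and LIME as one example of a model-agnostic explainer, and is not the authors' actual pipeline, tooling, or datasets.

```python
# Hypothetical sketch: train a defect prediction model on synthetic data and
# explain one prediction with a model-agnostic explainer (LIME).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Made-up file-level metrics used only for illustration.
feature_names = ["lines_of_code", "churn", "num_authors"]
X = rng.normal(size=(500, 3))
# Synthetic labels: larger, more heavily changed files are more often "defective".
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# A global defect prediction model (binary classifier: clean vs. defective).
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain a single prediction: which metrics pushed this file toward "defective"?
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["clean", "defective"]
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # e.g. [("churn > 0.61", 0.21), ...]
```

The per-instance weights returned by the explainer are what make such a model more explainable (developers see which metrics drove the prediction) and, in turn, more actionable (they suggest which metric values would need to change to lower the predicted risk).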