Paper Title

How human judgment impairs automated deception detection performance

Paper Authors

Kleinberg, Bennett, Verschuere, Bruno

Abstract

Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still implies considerable error rates. Findings from other domains suggest that hybrid human-machine integrations could offer a viable path in deception detection tasks. Method: We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful and deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition). Results: The data suggest that human judgment did not add a meaningful contribution in either of the hybrid conditions. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. Humans' decision-making strategies suggest that the truth bias - the tendency to assume that the other is telling the truth - could explain the detrimental effect. Conclusion: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system.
