Paper Title
Trust in AI and Its Role in the Acceptance of AI Technologies
Authors
Abstract
As AI-enhanced technologies become common in a variety of domains, there is an increasing need to define and examine the trust that users place in such technologies. Given the progress in the development of AI, a correspondingly sophisticated understanding of trust in the technology is required. This paper addresses this need by explaining the role of trust in the intention to use AI technologies. Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students. A path analysis confirmed that trust had a significant effect on the intention to use AI, which operated through perceived usefulness and participants' attitude toward voice assistants. In Study 2, using data from a representative sample of the U.S. population, different dimensions of trust were examined using exploratory factor analysis, which yielded two dimensions: human-like trust and functionality trust. The results of the path analysis from Study 1 were replicated in Study 2, confirming the indirect effect of trust and the effects of perceived usefulness, ease of use, and attitude on intention to use. Further, both dimensions of trust shared a similar pattern of effects within the model, with functionality-related trust exhibiting a greater total impact on usage intention than human-like trust. Overall, the role of trust in the acceptance of AI technologies was significant across both studies. This research contributes to the advancement and application of the TAM in AI-related contexts and offers a multidimensional measure of trust that can be used in future research on trustworthy AI.
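For illustration, the following is a minimal sketch of how a TAM-style path model extended with trust, of the kind summarized in the abstract, could be specified with the Python semopy package. The column names (trust, usefulness, ease_of_use, attitude, intention), the data file, and the exact path structure are assumptions for demonstration, not the authors' analysis code.

```python
# Minimal sketch of a TAM path model extended with trust, using semopy.
# Construct column names and the CSV file are hypothetical stand-ins for
# the survey measures described in the abstract.
import pandas as pd
from semopy import Model

# Hypothetical survey data: one column of composite scores per construct.
data = pd.read_csv("survey_scores.csv")

# Assumed path structure: trust and ease of use predict perceived usefulness;
# trust, usefulness, and ease of use shape attitude; attitude and usefulness
# predict intention to use the AI technology.
desc = """
usefulness ~ trust + ease_of_use
attitude ~ trust + usefulness + ease_of_use
intention ~ attitude + usefulness
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```

Indirect effects of trust (e.g., trust -> usefulness -> attitude -> intention) can then be computed from the estimated path coefficients or tested with bootstrapping.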