Social identity in trusting artificial intelligence agents: Evidence from lab and online experiments

Yanqi Sun, Cheng Xu*, Hao Xu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

This paper explores human trust in artificial intelligence (AI), focusing on the effects of social categorization (ingroup vs. outgroup) and AI human-likeness, through two pre-registered studies of 160 participants each. The first study, a lab experiment in China, and the second, an online experiment with a sample representative of the United States, both used a trust game to assess trust across four conditions: ingroup-humanoid AI, ingroup-non-humanoid AI, outgroup-humanoid AI, and outgroup-non-humanoid AI. Mixed-design ANOVA revealed significant main effects and interactions: participants placed significantly more trust in ingroup and humanoid AIs. The second study further identified emotional connection as a mediator of trust, suggesting design implications for AI in trust-critical sectors such as healthcare and autonomous transportation.

Original language: English
Pages (from-to): 5899-5916
Number of pages: 18
Journal: Managerial and Decision Economics
Volume: 45
Issue number: 8
DOIs
Publication status: Published - Dec 2024
