TY - JOUR
T1 - When customers know it’s AI: Experimental comparison of human and LLM-based communication in service recovery
AU - Hao, Xinyue
AU - Dong, Dapeng
AU - Zhang, Yuxing
AU - Demir, Emrah
PY - 2025/07/29
Y1 - 2025/07/29
N2 - As generative AI (GAI) becomes increasingly integrated into customer service platforms, its ability to simulate human language raises new relational expectations, particularly in emotionally sensitive interactions. This study investigates how emotional intensity and identity disclosure shape user perceptions of GAI-authored service recovery messages. In a controlled experiment within the online food delivery context, participants evaluated identical service responses across two emotional conditions (routine vs. emotionally charged) and two identity conditions (AI vs. human, disclosed vs. undisclosed). Results reveal that while GAI is perceived as competent in low-emotion scenarios, its human-like language triggers negative reactions under high-emotion conditions, especially after its identity is disclosed. Users interpret simulated empathy as inauthentic, leading to what we term identity-contingent trust violations. Furthermore, participants with higher GAI familiarity were more critical, demonstrating a pattern of critical familiarity, where technical literacy heightens relational expectations. This study advances theories of human–AI interaction by integrating emotional context and identity perception into models of trust calibration. Practically, it highlights the need for role-appropriate GAI deployment and emotionally aware interaction design, where AI systems are matched to context-sensitive tasks and clearly framed as assistants, not surrogates, in situations requiring genuine emotional care.
AB - As generative AI (GAI) becomes increasingly integrated into customer service platforms, its ability to simulate human language raises new relational expectations, particularly in emotionally sensitive interactions. This study investigates how emotional intensity and identity disclosure shape user perceptions of GAI-authored service recovery messages. In a controlled experiment within the online food delivery context, participants evaluated identical service responses across two emotional conditions (routine vs. emotionally charged) and two identity conditions (AI vs. human, disclosed vs. undisclosed). Results reveal that while GAI is perceived as competent in low-emotion scenarios, its human-like language triggers negative reactions under high-emotion conditions, especially after its identity is disclosed. Users interpret simulated empathy as inauthentic, leading to what we term identity-contingent trust violations. Furthermore, participants with higher GAI familiarity were more critical, demonstrating a pattern of critical familiarity, where technical literacy heightens relational expectations. This study advances theories of human–AI interaction by integrating emotional context and identity perception into models of trust calibration. Practically, it highlights the need for role-appropriate GAI deployment and emotionally aware interaction design, where AI systems are matched to context-sensitive tasks and clearly framed as assistants, not surrogates, in situations requiring genuine emotional care.
U2 - 10.1080/13527266.2025.2540376
DO - 10.1080/13527266.2025.2540376
M3 - Article
SN - 1352-7266
SP - 1
EP - 28
JO - Journal of Marketing Communications
JF - Journal of Marketing Communications
ER -