To trust or not to trust a human(-like) AI - A scoping review and conjoint analyses on factors influencing anthropomorphism and trust
AI systems are becoming increasingly complex and human-like, and we interact with them more and more frequently. How does perceived human-likeness affect trust in AI systems? And what makes AI systems appear human in the first place? In a scoping review, we first examined the relationship between anthropomorphism and trust and found that the operationalisation of anthropomorphism was highly inconsistent across studies. To address this gap, we conducted two online conjoint analyses focusing on four anthropomorphic characteristics identified in the review: name, appearance, voice, and communication style. Both voice and communication style significantly influenced perceptions of human-likeness, with voice having a slightly stronger effect on trustworthiness. Across all attributes, more human-like systems were perceived as more trustworthy overall.
Practical Relevance: The findings highlight the need for a comprehensive, integrated approach to AI design that considers how design elements shape user perceptions and trust. Importantly, the context in which AI is used, particularly in the workplace, must always be considered.
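As a rough illustration only, and not the authors' analysis, a rating-based conjoint study of this kind can be thought of as a regression of trust ratings on dummy-coded attribute levels, where each coefficient approximates the part-worth of a level relative to its reference level. All attribute levels, variable names, and data in the sketch below are hypothetical assumptions.

```python
# Hypothetical sketch of a rating-based conjoint analysis with
# dummy-coded attribute levels; data are purely synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # simulated respondent-profile combinations

# Randomly assembled profiles over the four attributes from the review
profiles = pd.DataFrame({
    "name":       rng.choice(["human_name", "technical_label"], n),
    "appearance": rng.choice(["human_like", "machine_like"], n),
    "voice":      rng.choice(["human_voice", "synthetic_voice"], n),
    "style":      rng.choice(["natural_language", "formal_commands"], n),
})

# Simulated 1-7 trustworthiness ratings (illustrative only)
profiles["trust"] = rng.integers(1, 8, n)

# OLS with categorical (dummy) coding: each coefficient estimates the
# effect of an attribute level relative to its reference level.
model = smf.ols(
    "trust ~ C(name) + C(appearance) + C(voice) + C(style)",
    data=profiles,
).fit()
print(model.summary())
```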
This article is published in the journal Zeitschrift für Arbeitswissenschaft (2025).
Bibliographic information
Title: To trust or not to trust a human(-like) AI - A scoping review and conjoint analyses on factors influencing anthropomorphism and trust.
In: Zeitschrift für Arbeitswissenschaft, 2025, pp. 1-31. DOI: 10.1007/s41449-025-00481-6