Who is AI? The Impact of Identity Labels in Multi-Round Trust Games

Liu Jiahui, Gu Ruolei, Wu Tingting, Luo Yi

Journal of Psychological Science ›› 2025, Vol. 48 ›› Issue (6): 1294-1313. DOI: 10.16719/j.cnki.1671-6981.20250602
Computational Modeling and Artificial Intelligence

Abstract

With the rapid development of artificial intelligence technology, intelligent agents are becoming increasingly prevalent in human life. It is therefore crucial to understand how identity labels influence trust-based decision-making. Previous studies have revealed a label effect on human trust: even when all participants are in fact playing with robots, individuals cooperate less when told that their partner is a robot than when told that their partner is human (Ishowo-Oloko et al., 2019). In addition, the labels assigned to participants themselves may be another important factor influencing their trust.
The present study investigated how the identity labels (human vs. AI) of investors and trustees influence human trust in a multi-round trust game (MRT), analyzed with computational modeling. The behavioral experiment involved 84 investors (25 males, mean age 21.5 ± 2.5 years) and 84 trustees (23 males, mean age 21.0 ± 2.0 years) in a 2 (Investor Label: Human vs. AI) × 2 (Trustee Label: Human vs. AI) mixed design. Each investor was paired with a fixed human trustee; the pair played 40 rounds of the MRT in two blocks, each block under a different label condition. In each round, the investor (a real participant) received ¥20 and decided how much to send to the trustee (another real participant). The amount sent (a, ranging from 0 to 20) was tripled and delivered to the trustee, who returned b (ranging from 0 to 3a) to the investor. The investor thus ended the round with 20 − a + b, and the trustee with 3a − b.
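For concreteness, the round-level payoff rule described above can be written as a small function. This is a minimal sketch: the ¥20 endowment and the tripling factor come from the design, while the function and variable names are illustrative.

```python
def trust_game_payoffs(a, b, endowment=20, multiplier=3):
    """Payoffs for one MRT round.

    a: amount the investor sends (0 <= a <= endowment)
    b: amount the trustee returns (0 <= b <= multiplier * a)
    """
    assert 0 <= a <= endowment, "investment out of range"
    assert 0 <= b <= multiplier * a, "repayment out of range"
    investor_payoff = endowment - a + b   # keeps the remainder plus the repayment
    trustee_payoff = multiplier * a - b   # keeps the tripled transfer minus the repayment
    return investor_payoff, trustee_payoff

# Example: the investor sends 10; the trustee returns 15 of the 30 received
print(trust_game_payoffs(10, 15))  # -> (25, 15)
```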
The results indicated that the investor's investment increased with the trustee's return in the previous round, and that investment gradually increased as the interaction proceeded. In addition, AI-labeled investors invested less, particularly when interacting with human-labeled trustees. This suggests that the AI label reduces sensitivity to fairness norms and weakens impression-management motives, as reflected in lower social-preference parameters (envy and guilt) in the Fehr-Schmidt inequality-aversion model (FS model). While investors generally invested more after receiving higher returns in the previous round, AI-labeled investors showed more random decision patterns (evidenced by elevated inverse-temperature parameters in the Rescorla-Wagner model), specifically when interacting with human-labeled trustees. Learning rates did not differ across identity labels, indicating that the differences stem from decision-making strategies rather than learning efficiency. These results suggest that the AI label affects investors' trust through two paths. One path weakens the motivation for impression management and the sensitivity to static social norms, reducing the active pursuit of fair outcomes. The other path lowers the use of dynamic learning strategies: when facing a human partner, the AI label reduces the investor's adjustments based on historical feedback, and behavior becomes more random.
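As a reading aid for these modeling results, the sketch below combines a Fehr-Schmidt style utility with envy and guilt terms, a Rescorla-Wagner update of the expected return, and a softmax choice rule governed by an inverse-temperature parameter. It does not reproduce the paper's exact parameterization; the parameter values, the assumed expected return rate, and the discretization of investment options are illustrative assumptions.

```python
import numpy as np

def fs_utility(own, other, envy, guilt):
    """Fehr-Schmidt (1999) inequality-aversion utility: own payoff minus
    envy-weighted disadvantageous inequity minus guilt-weighted advantageous inequity."""
    return own - envy * max(other - own, 0) - guilt * max(own - other, 0)

def rw_update(expectation, outcome, learning_rate):
    """Rescorla-Wagner update of an expectation from the prediction error."""
    return expectation + learning_rate * (outcome - expectation)

def softmax_choice_probs(values, inverse_temperature):
    """Softmax over option values; in this standard form, a larger inverse
    temperature yields more deterministic (less random) choices."""
    z = inverse_temperature * np.asarray(values, dtype=float)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Illustrative use: value each possible investment by the FS utility of its expected outcome
endowment, multiplier = 20, 3
expected_return_rate = 0.5            # learned fraction of the tripled amount returned (assumed)
envy, guilt, beta = 0.5, 0.3, 0.2     # hypothetical parameter values

values = []
for a in range(endowment + 1):
    expected_b = expected_return_rate * multiplier * a
    own = endowment - a + expected_b
    other = multiplier * a - expected_b
    values.append(fs_utility(own, other, envy, guilt))

probs = softmax_choice_probs(values, beta)
print("Most likely investment:", int(np.argmax(probs)))

# After observing the actual return rate in a round, the expectation is updated:
expected_return_rate = rw_update(expected_return_rate, outcome=0.4, learning_rate=0.3)
```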
For trustees, repayment increased with the investor's investment and decreased as the interaction proceeded. Trustees returned more money to human-labeled investors than to AI-labeled investors. AI-labeled trustees exhibited higher repayment and stronger guilt, suggesting that they internalized a "service obligation" stereotype. This trend became stronger when interacting with AI-labeled investors, indicating that human-labeled trustees held lower trust expectations of AI-labeled investors and were less concerned about social evaluation by them. Meanwhile, trustees' dynamic behavior was primarily driven by the current investment amount, with minimal impact from cumulative experience. This pattern was more pronounced for AI-labeled trustees, suggesting that the AI label further reinforces such myopic, immediate decision-making.
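The contrast between reacting to the current investment and drawing on accumulated experience can be illustrated with a toy trustee rule, in which repayment mixes a share of the current tripled transfer with a share based on the partner's investment history. This is not the paper's model; the weights, the share parameter, and the history estimate are purely illustrative.

```python
def trustee_repayment(current_investment, history_estimate,
                      current_weight=0.9, share=0.5, multiplier=3):
    """Toy trustee rule: with current_weight near 1, repayment tracks the
    current (tripled) investment and is nearly insensitive to past experience."""
    received_now = multiplier * current_investment
    received_hist = multiplier * history_estimate
    return share * (current_weight * received_now + (1 - current_weight) * received_hist)

# With current_weight = 0.9, a generous vs. a stingy investment history
# shifts the repayment only slightly for the same current investment of 10:
print(trustee_repayment(10, history_estimate=18))  # -> ~16.2
print(trustee_repayment(10, history_estimate=2))   # -> ~13.8
```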
These findings suggest that identity labels have a significant impact on individuals' trust decisions, and that this impact is role-specific and context-specific. AI labels change decision-making patterns by weakening social norms (for investors) or strengthening service obligations (for trustees), indicating that the AI label influences trust through the dual paths of impression management (altering expectations of social supervision) and stereotype activation (internalizing the behavioral characteristics attributed to AI). This mechanism reveals the dynamic construction of identity cognition, moves beyond the traditional linear understanding of AI labels, and demonstrates their complex cognitive role in social interaction: the AI label is not only a passive intermediary of trust but also a dynamic cognitive switch that can actively reshape the boundaries of trust and cooperation, providing a new theoretical perspective on how artificial intelligence becomes embedded in social systems.

Key words

trust / label effect / social norm / impression management / stereotype

Cite this article

Liu Jiahui, Gu Ruolei, Wu Tingting, Luo Yi. Who is AI? The Impact of Identity Labels in Multi-Round Trust Games[J]. Journal of Psychological Science, 2025, 48(6): 1294-1313. https://doi.org/10.16719/j.cnki.1671-6981.20250602

References

[1] Abraham M., Grimm V., Neeß C., & Seebauer M. (2016). Reputation formation in economic transactions. Journal of Economic Behavior and Organization, 121, 1-14.
[2] Bai, S., & Zhang, X. (2025). My coworker is a robot: The impact of collaboration with AI on employees' impression management concerns and organizational citizenship behavior. International Journal of Hospitality Management, 128, 104179.
[3] Baronchelli, A. (2024). Shaping new norms for AI. Philosophical Transactions of the Royal Society B, 379(1897), 20230028.
[4] Bellucci, G. (2022). A model of trust. Games, 13(3), 39.
[5] Berg J., Dickhaut J., & McCabe K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122-142.
[6] Bolino M. C., Kacmar K. M., Turnley W. H., & Gilstrap J. B. (2008). A multi-level review of impression management motives and behaviors. Journal of Management, 34(6), 1080-1109.
[7] Bonnefon J. F., Rahwan I., & Shariff A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75(1), 653-675.
[8] Borau S., Otterbring T., Laporte S., & Fosso Wamba S. (2021). The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology and Marketing, 38(7), 1052-1068.
[9] Broadbent, E. (2017). Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology, 68(1), 627-652.
[10] Camerer, C. F. (2003). Behavioural studies of strategic thinking in games. Trends in Cognitive Sciences, 7(5), 225-231.
[11] De Freitas J., Agarwal S., Schmitt B., & Haslam N. (2023). Psychological factors underlying attitudes toward AI tools. Nature Human Behaviour, 7(11), 1845-1854.
[12] de Visser E. J., Monfort S. S., McKendrick R., Smith M. A., McKnight P. E., Krueger F., & Parasuraman R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331-349.
[13] Earp B. D., Mann S. P., Aboy M., Awad E., Betzler M., Botes M., & Clark M. S. (2025). Relational norms for human-AI cooperation. ArXiv.
[14] Epley N., Waytz A., & Cacioppo J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886.
[15] Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415(6868), 137-140.
[16] Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817-868.
[17] Gambino A., Fox J., & Ratan R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71-85.
[18] Hancock P. A., Kessler T. T., Kaplan A. D., Stowers K., Brill J. C., Billings D. R., & Szalma J. L. (2023). How and why humans trust: A meta-analysis and elaborated model. Frontiers in Psychology, 14, 1081086.
[19] Henrich, J., & Muthukrishna, M. (2021). The origins and psychology of human cooperation. Annual Review of Psychology, 72(1), 207-240.
[20] Hula A., Moutoussis M., Will G. J., Kokorikou D., Reiter A. M., Ziegler G., & NSPN Consortium. (2021). Multi-round trust game quantifies inter-individual differences in social exchange from adolescence to adulthood. Computational Psychiatry, 5(1), 102.
[21] Ishowo-Oloko F., Bonnefon J. F., Soroye Z., Crandall J., Rahwan I., & Rahwan T. (2019). Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence, 1(11), 517-521.
[22] Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865-889.
[23] Jussim L., Nelson T. E., Manis M., & Soffin S. (1995). Prejudice, stereotypes, and labeling effects: Sources of bias in person perception. Journal of Personality and Social Psychology, 68(2), 228-246.
[24] Kessler T., Stowers K., Brill J. C., & Hancock P. A. (2017). Comparisons of human-human trust with other forms of human-technology trust. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 1303-1307.
[25] King-Casas B., Tomlin D., Anen C., Camerer C. F., Quartz S. R., & Montague P. R. (2005). Getting to know you: Reputation and trust in a two-person economic exchange. Science, 308(5718), 78-83.
[26] Krämer N. C., Von Der Pütten A., & Eimler S. (2012). Human-agent and human-robot interaction theory: Similarities to and differences from human-human interaction. In M. Ba Zacarias & J. V. de Oliveira (Eds.), Human-Computer Interaction: The Agency Perspective. Springer.
[27] Leary, M. R., & Kowalski, R. M. (1990). Impression management: A literature review and two-component model. Psychological Bulletin, 107(1), 34-47.
[28] Lippmann, W. (1965). Public opinion. http://infomotions.com/etexts/gutenberg/dirs/etext04/pbpnn10.htm.
[29] Makovi K., Sargsyan A., Li W., Bonnefon J. F., & Rahwan T. (2023). Trust within human-machine collectives depends on the perceived consensus about cooperative norms. Nature Communications, 14(1), 3108.
[30] McKee K. R., Bai X., & Fiske S. T. (2024). Warmth and competence in human-agent cooperation. Autonomous Agents and Multi-Agent Systems, 38(1), 23.
[31] Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
[32] Nass C., Steuer J., & Tauber E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston.
[33] Oliveira M., Brands J., Mashudi J., Liefooghe B., & Hortensius R. (2024). Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of chatbots. Cognitive Research: Principles and Implications, 9(1), 47.
[34] Pandit, R. (2017). Social perception and impression management in relation to Attribution Theory and individual decision making from development perspectives. International Journal of Science and Research, 6(9), 1955-1963.
[35] Rao A., Schmidt S. M., & Murray L. H. (1995). Upward impression management: Goals, influence strategies, and consequences. Human Relations, 48(2), 147-167.
[36] Rescorla, R. A. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. Classical Conditioning, Current Research and Theory, 2, 64-69.
[37] Rheu M., Dai Y., Meng J., & Peng W. (2024). When a chatbot disappoints you: Expectancy violation in human-chatbot interaction in a social support context. Communication Research, 51(7), 782-814.
[38] Rieger T., Kugler L., Manzey D., & Roesler E. (2024). The (Im)perfect automation schema: Who is trusted more, automated or human decision support? Human Factors, 66(8), 1995-2007.
[39] Robson S. E., Repetto L., Gountouna V. E., & Nicodemus K. K. (2020). A review of neuroeconomic gameplay in psychiatric disorders. Molecular Psychiatry, 25(1), 67-81.
[40] Steinmetz J., Sezer O., & Sedikides C. (2017). Impression mismanagement: People as inept self-presenters. Social and Personality Psychology Compass, 11(6), e12321.
[41] Suen, H. Y., & Hung, K. E. (2024). Revealing the influence of AI and its interfaces on job candidates' honest and deceptive impression management in asynchronous video interviews. Technological Forecasting and Social Change, 198, 123011.
[42] Tsai W. S., Lun D., Carcioppolo N., & Chuan C. H. (2021). Human versus chatbot: Understanding the role of emotion in health marketing communication for vaccines. Psychology and Marketing, 38(12), 2377-2392.
[43] Tschopp M., Gieselmann M., & Sassenberg K. (2023). Servant by default? How humans perceive their relationship with conversational AI. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 17(3), 1-11.
[44] Williams, G. Y., & Lim, S. (2024). Psychology of AI: How AI impacts the way people feel, think, and behave. Current Opinion in Psychology, 58, 101835.
[45] Yam K. C., Eng A., & Gray K. (2025). Machine replacement: A mind-role fit perspective. Annual Review of Organizational Psychology and Organizational Behavior, 12(1), 239-267.
[46] Yin Y., Jia N., & Wakslak C. J. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences of the United States of America, 121(14), e2319112121.
[47] Zhang G., Chong L., Kotovsky K., & Cagan J. (2023). Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior, 139, 107536.
[48] Zimmerman A., Janhonen J., & Beer E. (2024). Human/AI relationships: Challenges, downsides, and impacts on human/human relationships. AI and Ethics, 4(4), 1555-1567.