Journal of Psychological Science ›› 2025, Vol. 48 ›› Issue (6): 1282-1293. DOI: 10.16719/j.cnki.1671-6981.20250601
Computational Modeling and Artificial Intelligence

How AI Reshapes Human Morality: Behavioral, Psychological, and Neural Perspectives*

  • Guo Xueli1, Su Song1, Liu Chao2,3,4, Tang Honghong**1

Abstract

With the rapid development of artificial intelligence (AI) and its widespread application across many domains, its influence on human morality has prompted serious reflection among researchers. The existing literature indicates that AI applications can affect human morality through psychological and cognitive-neural mechanisms involving moral cognition, moral emotion, and social evaluation and response. Building on this, the present review summarizes and compares the positive and negative effects of AI on human moral behavior across four application contexts: AI as delegator, AI as advisor, human-AI collaboration, and AI as moral model. Future research should compare in depth the mechanisms underlying AI's differing effects across these contexts, investigate AI's long-term influence on human morality, and explore how AI design can intervene in and improve this influence, thereby providing theoretical support and reference for the design, development, and application of AI.

The rapid development and widespread use of Artificial Intelligence (AI) have given rise to significant ethical concerns, with a key question being how AI influences human morality. This paper reviews relevant literature to examine the multifaceted impacts of AI on human morality.
First, we propose potential underlying mechanisms to illustrate how AI applications influence human morality. Studies show that people typically attribute less “mind” to AI than to humans. This perception fundamentally shapes people’s cognitive judgments and emotional responses in human-AI interactions relative to human-human interactions, producing distinct patterns of moral cognition, moral emotion, and social evaluation. The most notable influence on moral cognition may lie in how people change their attribution patterns and judgments of responsibility for morally relevant behavior in human-AI interactions. Previous studies indicate that when AI systems cause harm, people tend to blame the AI more and to attribute a greater share of responsibility to the AI than to human agents. In many situations, people treat AI as a “scapegoat” for unethical acts, which facilitates their own immoral behavior. Meanwhile, human-AI interactions can strengthen people’s tendency toward utilitarian moral judgment, which may reduce prosocial behavior. Furthermore, human-AI interactions typically dampen emotional responses to morally relevant stimuli and behaviors, a tendency that may increase immoral behavior and decrease prosocial behavior. Specifically, when prosocial actions are initiated by AI rather than by humans, people’s moral emotional responses are weaker, which reduces their prosocial tendencies. Additionally, human-AI interactions can undermine people’s concern for their social image: when interacting with AI, individuals adhere less to moral principles and become less sensitive to social evaluation. This reduced social concern may in turn increase unethical behavior or decrease prosocial behavior.
Then, we discuss the neural mechanisms underlying how human-AI interactions influence human morality. When people interact with AI rather than with other humans, brain regions involved in social perception, emotion, and cognition respond differently. Three networks show such differences: the interoception and human-stereotype network, the social cognition network, and the valuation network. The interoception and human-stereotype network primarily includes the insula and the midcingulate cortex (MCC); these regions are less active during interactions with AI than with humans. The social cognition network involves the temporoparietal junction (TPJ), superior temporal sulcus (STS), temporal pole (TP), superior parietal lobule (SPL), posterior cingulate cortex (PCC), and middle temporal gyrus (MTG); activity in these regions also decreases when people interact with AI. The valuation network centers on the medial prefrontal cortex (MPFC), including its ventromedial (VMPFC) and dorsomedial (DMPFC) subdivisions; both show reduced activity during decision-making in human-AI interactions compared with human-human interactions, suggesting diminished processing and evaluation of decision-related information.
Finally, we discuss the positive and negative effects of AI on human morality across four forms of human-AI interaction: AI as delegator, AI as advisor, AI as collaborator, and AI as moral model. When AI serves as a delegator, people may use it as a proxy for carrying out immoral actions, or delegate to it in ways that promote cooperation and fairness while still benefiting themselves. When AI serves as an advisor, its immoral suggestions can promote deceptive behavior, yet its moral advice does not necessarily encourage honesty. When AI functions as a collaborator, it has a mixed impact on human moral behavior, with both positive and negative effects. When AI acts as a moral role model, it shows the potential to positively shape moral cognition and behavior; its impact on children’s moral development is particularly pronounced, exceeding that observed in adults.
Future research should focus more on exploring how different AI applications impact human morality, particularly their long-term effects. These efforts will provide valuable theoretical support and guidance for the design, development, and application of AI.

Key words

artificial intelligence / human-AI interaction / morality / social brain networks

Cite this article

Guo Xueli, Su Song, Liu Chao, Tang Honghong. How AI Reshapes Human Morality: Behavioral, Psychological, and Neural Perspectives[J]. Journal of Psychological Science, 2025, 48(6): 1282-1293. https://doi.org/10.16719/j.cnki.1671-6981.20250601

Funding

*This research was supported by the National Natural Science Foundation of China (71872016, 32441109, 32271092, 32130045) and the Fundamental Research Funds for the Central Universities (1233200014).
