The Impact of Differential Interpersonal Relationships on People's Expectations for Moral Decision-Making of Intelligent Machines

Wu Mingzheng, Yan Mengyao, Lin Ming, Xiong Tianhang, Li Yang, Sun Xiaoling

Journal of Psychological Science ›› 2023, Vol. 46 ›› Issue (5): 1204-1211. DOI: 10.16719/j.cnki.1671-6981.20230522

Social, Personality & Organizational Psychology


Abstract

Given the growing intelligence and autonomy of machines, it will soon be a reality that intelligent machines complete decision-making tasks independently. How to ensure that the decisions made by intelligent machines conform to ethical norms has therefore become a focus for researchers in China and abroad. To date, most studies have been conducted in Western cultural contexts, debating whether intelligent machines should adopt utilitarianism or deontology in moral dilemmas. This paper explores participants' expectations and evaluations of intelligent machines' moral decision-making in the context of the Differential Mode of Association, a characteristic feature of the Chinese ethical system. Two studies tested whether differential interpersonal relationships influence participants' decision-making expectations for intelligent machines in moral dilemmas, and whether mind perception mediates this effect.
In Study 1, 185 participants read an adapted trolley dilemma in which an autonomous vehicle (AV) carrying five passengers was heading into a landslide. The AV had to choose between going straight, which would harm its five passengers but spare a stranger standing on the other side of the road, and swerving to the other side, which would harm the stranger but save its passengers. To manipulate differential interpersonal relationships, one of the five passengers was the AV's owner in Condition 1 and a friend of the owner in Condition 2, whereas in Condition 3 all five passengers were strangers. We then measured participants' expectations for the AV's moral decision and their moral evaluations of the AV when it chose to save its passengers, including its owner or the owner's friend. In Study 2, to test the robustness of the effect of differential interpersonal relationships on moral evaluations of intelligent machines' favoritism behavior and to probe the mechanism behind it, 188 participants considered more realistic moral decision-making situations. According to the nature of the behavior involved, these situations were divided into distributive favoritism situations and protective favoritism situations. Participants then completed the Mind Perception Questionnaire.
The results of Study 1 showed that the number of participants who expected the AV to save its passengers was largest in Condition 1, intermediate in Condition 2, and smallest in Condition 3. In addition, the AV's favoritism decisions received higher moral evaluations than its fair decisions. These results indicate that differential interpersonal relationships may affect participants' moral expectations of, and moral evaluations for, intelligent machines. Study 2 showed that only in distributive favoritism situations were participants' moral evaluations of intelligent machines affected by differential interpersonal relationships, with favoritism decisions again receiving higher moral evaluations. Moreover, participants' perception of the machines' experience, rather than of their agency, mediated the relationship between differential interpersonal relationships and moral evaluations, as sketched below.
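For readers less familiar with mediation analysis, the design described above implies a standard single-mediator model; the following equations are a conventional sketch only (the abstract reports no coefficients), with X the closeness of the interpersonal relationship, M the perceived mental capacity of the machine, and Y the moral evaluation:

M = aX + e_M
Y = c'X + bM + e_Y

The indirect effect is the product ab. The reported finding corresponds to a significant ab when M is experience perception and a non-significant ab when M is agency perception.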
Together, the two studies provide partial evidence that Chinese participants expect intelligent machines to take the closeness of interpersonal relationships into account when they encounter moral dilemmas. Moreover, the closer the relationship between an intelligent machine and the people involved in its moral decision, the stronger the emotion-perceiving ability participants assumed the machine to have, and the more likely participants were to expect the machine to make a favoritism decision. These findings go beyond the framework of utilitarianism versus deontology, providing new insight into the ethical design of intelligent machines' algorithms.

Key words

intelligent machines / differential interpersonal relationships / moral decision-making / mind perception

Cite this article

Wu Mingzheng, Yan Mengyao, Lin Ming, Xiong Tianhang, Li Yang, Sun Xiaoling. The Impact of Differential Interpersonal Relationships on People's Expectations for Moral Decision-Making of Intelligent Machines[J]. Journal of Psychological Science, 2023, 46(5): 1204-1211. https://doi.org/10.16719/j.cnki.1671-6981.20230522
