Focusing on the differential mode of association that characterizes Chinese culture, this research introduces differential interpersonal relationships into moral machine research and uses two experiments to examine how such relationships shape people's expectations for the moral decision-making of intelligent machines, as well as the underlying psychological mechanism. Study 1 found that the closer the differential interpersonal relationship between an intelligent machine and the people affected by its decision, the more strongly people expected it to make favoritism decisions and the more favorably they evaluated such favoritism. Study 2 found that this effect emerged mainly in distributive favoritism situations and that perceived experience of the intelligent machine mediated the effect.
Abstract
With the growing intelligence and autonomy of machines, it is becoming a reality that intelligent machines can complete decision-making tasks independently. How to ensure that the decisions made by intelligent machines conform to ethical norms has become a focus for researchers at home and abroad. To date, most studies have been conducted in Western cultural contexts, debating whether intelligent machines should follow utilitarianism or deontology in moral dilemmas. This paper explores participants' expectations for, and evaluations of, the moral decision-making of intelligent machines in the context of the Differential Mode of Association, a typical feature of the Chinese ethical system. Two studies tested whether differential interpersonal relationships influence participants' decision-making expectations for intelligent machines in moral dilemmas, and whether mind perception mediates this effect.
In Study 1, 185 participants read an adapted trolley dilemma in which an autonomous vehicle (AV) carrying five passengers was about to run into a landslide. The AV had to choose between going straight, which would harm its five passengers while sparing a stranger standing off to one side, and swerving toward the stranger, which would harm the stranger but save its passengers. To manipulate differential interpersonal relationships, one of the five passengers was the AV's owner in Condition 1 and a friend of the owner in Condition 2, whereas all five were strangers in Condition 3. We then measured participants' expectations for the AV's moral decision and their moral evaluations when the AV chose to save its passengers, including its owner or the owner's friend. In Study 2, to test the robustness of the effect of differential interpersonal relationships on moral evaluations of intelligent machines' favoritism behavior and to examine the underlying mechanism, 188 participants considered more realistic moral decision-making situations. According to the nature of the behavior, these situations were divided into distributive favoritism situations and protective favoritism situations. Participants then completed the Mind Perception Questionnaire.
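To make the Study 1 design concrete, the sketch below shows one plausible way to compare the binary expectation responses across the three relationship conditions with a chi-square test of independence. This is an illustration only, not the authors' reported analysis; the data file and column names are hypothetical.

```python
# Minimal sketch (assumption, not the authors' analysis script):
# compare expected AV decisions across the three relationship conditions.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("study1_expectations.csv")   # hypothetical data file
# rows: condition (owner / owner's friend / strangers)
# cols: expected decision (save passengers / save the stranger)
table = pd.crosstab(df["condition"], df["expected_decision"])

chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```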
Study 1 showed that the proportion of participants who expected the AV to save its passengers was highest in Condition 1, intermediate in Condition 2, and lowest in Condition 3. In addition, intelligent machines' favoritism decisions received higher moral evaluations than impartial decisions. These results indicate that differential interpersonal relationships may affect participants' moral expectations for, and evaluations of, intelligent machines. Study 2 showed that participants' moral evaluations of intelligent machines were affected by differential interpersonal relationships only in distributive favoritism situations, where participants gave higher moral evaluations to the machines' favoritism decisions. Moreover, participants' perception of the machines' experience, rather than of their agency, mediated the relationship between differential interpersonal relationships and moral evaluations.
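The mediation result can be illustrated with a percentile-bootstrap estimate of the indirect effect, in the spirit of the Hayes (2013) approach listed in the references. This is a minimal sketch under assumptions: the data file, column names, and the coding of relationship closeness are hypothetical, and the authors' actual analysis may differ.

```python
# Minimal sketch of a bootstrap test of the indirect effect
# relationship closeness -> perceived experience -> moral evaluation.
import numpy as np
import pandas as pd

df = pd.read_csv("study2_distributive.csv")   # hypothetical data file
x = df["closeness"].to_numpy(float)           # differential relationship (hypothetical coding)
m = df["experience"].to_numpy(float)          # perceived experience of the machine
y = df["moral_eval"].to_numpy(float)          # moral evaluation of the favoritism decision

def slope(pred, covariates, outcome):
    """OLS slope of `pred` when regressing `outcome` on an intercept, `pred`, and covariates."""
    X = np.column_stack([np.ones_like(pred), pred] + covariates)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1]

rng = np.random.default_rng(0)
indirect = []
for _ in range(5000):                          # percentile bootstrap of the a*b indirect effect
    idx = rng.integers(0, len(x), len(x))
    a = slope(x[idx], [], m[idx])              # path a: closeness -> perceived experience
    b = slope(m[idx], [x[idx]], y[idx])        # path b: experience -> evaluation, controlling for closeness
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```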
Together, these studies provide partial evidence that Chinese participants expect intelligent machines to take the closeness of interpersonal relationships into account when facing moral dilemmas. Moreover, the closer the relationship between an intelligent machine and the people involved in its moral decision, the stronger the capacity for experience participants attributed to the machine, and the more likely they were to expect it to make favoritism decisions. These findings go beyond the framework of utilitarianism versus deontology and provide new insight into the ethical design of intelligent machines' algorithms.
Key words: intelligent machines / differential interpersonal relationships / moral decision-making / mind perception
References
[1] Asimov, I. (2005). Robots and empire (Chinese ed.). Tiandi Press.
[2] Fei, X. (2008). From the soil: The foundations of Chinese society (in Chinese). People's Publishing House.
[3] Liu, J., Xie, C., Min, C., Gu, L., & Xiang, R. (2018). Confucian robot ethics (in Chinese). Thought and Culture, 1, 18-40.
[4] Liu, S. (2018). The influence of differential interpersonal relationships on moral evaluations of favoritism behavior (in Chinese) (Master's thesis). Zhejiang University, Hangzhou.
[5] Wang, J. (2015). The influence of differential interpersonal relationships on moral sensitivity (in Chinese) (Master's thesis). Hunan Normal University, Changsha.
[6] Yan, E. (2017). Reflections on two moral orientations (in Chinese). Modern Business Trade Industry, 38(5), 140-141.
[7] Yan, Z. (2014). Kinship bias in moral judgment: Evidence from behavioral and electrophysiological studies (in Chinese) (Doctoral dissertation). Hunan Normal University, Changsha.
[8] Yuan, Z. (2019). An exploratory study of people's expectations for the moral decision-making of autonomous machines (in Chinese) (Master's thesis). Zhejiang University, Hangzhou.
[9] Zhu, X. (1987). The Great Learning, the Doctrine of the Mean, and the Analects (in Chinese). Shanghai Classics Publishing House.
[10] Awad E., Dsouza S., Kim R., Schulz J., Henrich J., Shariff A., Bonnefon J. F., & Rahwan I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
[11] Bergmann L. T., Schlicht L., Meixner C., König P., Pipa G., Boshammer S., & Stephan A. (2018). Autonomous vehicles require socio-political acceptance-an empirical and philosophical perspective on the problem of moral decision making. Frontiers in Behavioral Neuroscience, 12, Article 31.
[12] Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34.
[13] Bonnefon J. F., Shariff A., & Rahwan I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
[14] Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5-15.
[15] Fowler, J. H., & Kam, C. D. (2007). Beyond the self: Social identity, altruism, and political participation. The Journal of Politics, 69(3), 813-827.
[16] Frank D. A., Chrysochou P., Mitkidis P., & Ariely D. (2019). Human decision-making biases in the moral dilemmas of autonomous vehicles. Scientific Reports, 9(1), Article 13080.
[17] Gray H. M., Gray K., & Wegner D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619-619.
[18] Gray K., Young L., & Waytz A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101-124.
[19] Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.
[20] Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55-66.
[21] Haidt, J., & Joseph, C. (2007). The moral mind: How five sets of innate moral intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind (pp. 367-391). Oxford University Press.
[22] Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press.
[23] Iyer R., Koleva S., Graham J., Ditto P., & Haidt J. (2012). Understanding libertarian morality: The psychological dispositions of self-identified libertarians. PLoS ONE, 7(8), Article e42366.
[24] Lotto L., Manfrinati A., & Sarlo M. (2014). A new set of moral dilemmas: Norms for moral acceptability, decision times, and emotional salience. Journal of Behavioral Decision Making, 27(1), 57-65.
[25] Malle B. F., Scheutz M., Arnold T., Voiklis J., & Cusimano C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, Portland, Oregon, USA.
[26] Misselhorn, C. (2020). Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Artificial Intelligence, 278, Article 103179.
[27] Moreira J. F. G., Tashjian S. M., Galván A., & Silvers J. A. (2020). Is social decision making for close others consistent across domains and within individuals? Journal of Experimental Psychology: General, 149(8), 1509-1526.
[28] Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
[29] Nass C., Moon Y., & Carney P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093-1109.
[30] Nass C., Steuer J., & Tauber E. R. (1994). Computers are social actors. Proceedings of the SIGCHI conference on human factors in computing systems, Boston, MA, USA.
[31] Peterson, M., & Spahn, A. (2011). Can technological artefacts be moral agents? Science and Engineering Ethics, 17(3), 411-424.
[32] Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
[33] Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401-411.
[34] Wallach W., Allen C., & Smit I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI and Society, 22(4), 565-582.
[35] Waytz A., Gray K., Epley N., & Wegner D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383-388.
Funding
*This research was supported by the Humanities and Social Sciences Planning Fund of the Ministry of Education (19YJA190007), the Anhui Provincial Social Science Innovation and Development Research Project (2018CXF166), and the Soft Science Research Program of the Zhejiang Provincial Department of Science and Technology (2020C35080).