[1] 阿西莫夫. (2005). 机器人与帝国. 天地出版社.. [2] 费孝通. (2008). 乡土中国. 人民出版社.. [3] 刘纪璐, 谢晨云, 闵超琴, 谷龙, 项锐. (2018). 儒家机器人伦理. 思想与文化, 1, 18-40. [4] 刘淑威. (2018). 人际关系差序性对偏私行为道德评价的影响 (硕士学位论文). 浙江大学, 杭州. [5] 王娟. (2015). 人际关系差序性对道德敏感性的影响研究 (硕士学位论文). 湖南师范大学, 长沙. [6] 闫恩双. (2017). 两种道德取向思考. 现代商贸工业, 38(5), 140-141. [7] 颜志雄. (2014). 道德判断中的亲属偏见——来自行为学和电生理学的证据 (博士学位论文). 湖南师范大学, 长沙. [8] 远征南. (2019). 人们对自主机器道德决策期望的探索性研究 (硕士学位论文). 浙江大学, 杭州. [9] 朱熹. (1987). 大学·中庸·论语. 上海古籍出版社.. [10] Awad E., Dsouza S., Kim R., Schulz J., Henrich J., Shariff A., Bonnefon J. F., & Rahwan I. (2018). The moral machine experiment. Nature, 563(7729), 59-64. [11] Bergmann L. T., Schlicht L., Meixner C., König P., Pipa G., Boshammer S., & Stephan A. (2018). Autonomous vehicles require socio-political acceptance-an empirical and philosophical perspective on the problem of moral decision making. Frontiers in Behavioral Neuroscience, 12, Article 31. [12] Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. [13] Bonnefon J. F., Shariff A., & Rahwan I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576. [14] Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5-15. [15] Fowler, J. H., & Kam, C. D. (2007). Beyond the self: Social identity, altruism, and political participation. The Journal of Politics, 69(3), 813-827. [16] Frank D. A., Chrysochou P., Mitkidis P., & Ariely D. (2019). Human decision-making biases in the moral dilemmas of autonomous vehicles. Scientific Reports, 9(1), Article 13080. [17] Gray H. M., Gray K., & Wegner D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619-619. [18] Gray K., Young L., & Waytz A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101-124. [19] Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834. [20] Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55-66. [21] Haidt J.,& Joseph, C. (2007). The moral mind: How 5 sets of innate moral intuitions guide the development of many culture-specific virtues, and perhaps even modules In P Carruthers, S Laurence, & S Stich (Eds), The innate mind (pp 367-391) Oxford University Press How 5 sets of innate moral intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind (pp. 367-391). Oxford University Press. [22] Hayes A. F.(2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press. [23] Iyer R., Koleva S., Graham J., Ditto P., & Haidt J. (2012). Understanding libertarian morality: The psychological dispositions of self-identified libertarians. PLoS ONE, 7(8), Article e42366. [24] Lotto L., Manfrinati A., & Sarlo M. (2014). A new set of moral dilemmas: Norms for moral acceptability, decision times, and emotional salience. Journal of Behavioral Decision Making, 27(1), 57-65. [25] Malle B. F., Scheutz M., Arnold T., Voiklis J., & Cusimano C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, Portland, Oregon, USA. [26] Misselhorn, C. (2020). Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Artificial Intelligence, 278, Article 103179. [27] Moreira J. F. G., Tashjian S. M., Galván A., & Silvers J. A. (2020). Is social decision making for close others consistent across domains and within individuals? Journal of Experimental Psychology: General, 149(8), 1509-1526. [28] Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. [29] Nass C., Moon Y., & Carney P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093-1109. [30] Nass C., Steuer J., & Tauber E. R. (1994). Computers are social actors. Proceedings of the SIGCHI conference on human factors in computing systems, Boston, MA, USA. [31] Peterson, M., & Spahn, A. (2011). Can technological artefacts be moral agents? Science and Engineering Ethics, 17(3), 411-424. [32] Reeves B.,& Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places Cambridge University Press How people treat computers, television, and new media like real people and places. Cambridge University Press. [33] Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401-411. [34] Wallach W., Allen C., & Smit I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI and Society, 22(4), 565-582. [35] Waytz A., Gray K., Epley N., & Wegner D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383-388. |