Who Listens to AI? The Role of Attachment Styles in Adopting Artificial Intelligence Advice
Shu Cong, Jia Yongqi, He Lingnan
Journal of Psychological Science ›› 2026, Vol. 49 ›› Issue (3) : 514-523.
In the intelligent era, artificial intelligence (AI) and humans have become two important sources of advice. Attachment style, a classic construct in social psychology, reflects an individual's stable mental representations of others and the associated positive or negative evaluations. It thus provides an effective theoretical framework for explaining individuals' inherent preferences for advice from humans or from AI. Based on Bowlby's attachment theory, the current research explores how attachment styles and the source of advice (AI or human) jointly influence individuals' advice adoption.
Two experiments examined the interaction between attachment styles and advice source on advice-taking behavior.

Experiment 1 used a 2 (attachment style: attachment anxiety, attachment avoidance) × 2 (advice source: human, AI) between-subjects design with 196 participants recruited through an online survey platform. The primary dependent variable was the weight of advice (WOA), measured with the Judge-Advisor System (JAS) paradigm, which quantifies how much participants adjust their initial judgments in response to the advice received. Attachment styles were assessed with the Chinese version of the Experiences in Close Relationships Scale (ECR); gender, age, education level, and self-reported AI knowledge were included as control variables. Participants first estimated the weight of a person in a photograph (initial estimate), then received either human or AI-generated advice on the weight, and finally provided a revised estimate (final estimate). The shift from the initial to the final estimate, relative to the advice received, yielded the WOA.

Experiment 2 used an attachment priming paradigm to replicate and extend Experiment 1: 248 participants were randomly assigned to one of three priming conditions (secure attachment, attachment anxiety, attachment avoidance) and then completed the same advice-taking task as in Experiment 1.
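In the JAS literature, the weight of advice is typically computed as the shift toward the advice divided by the distance between the advice and the initial estimate; the paper does not spell out its exact formula, so the sketch below assumes this standard definition (function and variable names are illustrative):

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of advice (WOA) in the Judge-Advisor System paradigm.

    WOA = (final - initial) / (advice - initial)
    0 means the advice was ignored; 1 means it was fully adopted.
    """
    if advice == initial:
        # The ratio is undefined when the advice matches the initial estimate;
        # such trials are usually excluded from analysis.
        raise ValueError("WOA undefined: advice equals initial estimate")
    return (final - initial) / (advice - initial)

# Example: initial guess 70 kg, advisor says 80 kg, revised estimate 76 kg
print(weight_of_advice(70, 80, 76))  # -> 0.6
```

Values between 0 and 1 indicate partial adoption; values can fall outside that range if the final estimate overshoots the advice or moves away from it.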
The research reveals that (1) individuals with high attachment anxiety are more likely to adopt advice from AI than from humans: as attachment anxiety increases, adoption of AI advice rises significantly, while adoption of human advice does not change significantly; (2) individuals with high attachment avoidance show no significant preference for either AI or human advice. Individuals with high attachment anxiety may find AI advice more appealing because of its perceived objectivity and lack of social threat, which fits their desire for support without the fear of social rejection. In contrast, attachment avoidance does not drive a clear preference for either source, suggesting that the mechanisms underlying advice acceptance may differ between attachment anxiety and attachment avoidance. Future research should explore additional mediators and moderators that may explain these differential effects and examine the impact of attachment styles on advice-taking in more complex and ecologically valid decision-making scenarios.
In terms of theoretical contributions, this study introduces attachment theory into research on advice interactions between humans and artificial intelligence. It enriches theoretical work on human-machine interaction and offers a novel framework for understanding the seemingly contradictory phenomena of algorithm appreciation and algorithm aversion in the intelligent era. In terms of practical implications, the findings inform the design of AI-based decision support systems, especially in contexts where user trust and acceptance are crucial for effective human-machine collaboration. For individual advice adoption, the study offers a reference for optimizing advice adoption and decision-making in the intelligent era: those who can rationally assess the capability boundaries of AI and humans, and comprehensively evaluate advice from different sources, are more likely to make effective decisions. Individuals with high attachment anxiety may exhibit irrational algorithm appreciation and thereby adopt erroneous suggestions, which may in turn distort their cognition; the potential risk of over-reliance therefore warrants attention.
advice taking / attachment styles / human-machine collaboration / algorithm appreciation / algorithm aversion