Who Listens to AI? The Role of Attachment Styles in Adopting Artificial Intelligence Advice

Shu Cong, Jia Yongqi, He Lingnan

Journal of Psychological Science ›› 2026, Vol. 49 ›› Issue (3) : 514-523.

DOI: 10.16719/j.cnki.1671-6981.20260301
Computational Modeling and Artificial Intelligence



Abstract

In the intelligent era, artificial intelligence (AI) and humans have become two important sources of advice. Attachment style, a classic social trait in psychology, reflects an individual's stable mental representation of an advisor and his or her positive or negative evaluation of that advisor. It provides an effective theoretical framework for explaining individuals' inherent preferences for advice from humans or from AI. Based on Bowlby's attachment theory, the current research explores how attachment styles and the source of advice (AI or human) jointly influence individuals' advice adoption.

Two experiments examined the interaction between attachment styles and advice source on advice-taking behavior. Experiment 1 used a 2 (attachment style: attachment anxiety, attachment avoidance) × 2 (advice source: human, AI) between-subjects design. A total of 196 participants were recruited through an online survey platform. The primary dependent variable was the weight of advice (WOA), measured with the Judge-Advisor System (JAS) paradigm, which quantifies the extent to which participants adjust their initial judgments toward the advice they receive. Attachment styles were assessed with the Chinese version of the Experiences in Close Relationships Scale (ECR). Control variables included gender, age, education level, and self-reported AI knowledge. Participants first estimated the weight of a person in a photograph (initial estimate), then received advice on the weight attributed to either a human or an AI advisor, and finally provided a revised weight estimate (final estimate). The WOA was calculated as the shift from the initial to the final estimate relative to the distance between the initial estimate and the advice. Experiment 2 employed an attachment priming paradigm to replicate and extend the findings of Experiment 1: 248 participants were randomly assigned to one of three attachment priming conditions (secure attachment, attachment anxiety, attachment avoidance) and then completed the same advice-taking task as in Experiment 1.
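The WOA described above has a conventional definition in the JAS literature: the fraction of the distance between the judge's initial estimate and the advice that the judge moves in the final estimate. A minimal sketch (not the authors' analysis code; the clipping convention is one common choice, not reported in the abstract):

```python
def weight_of_advice(initial, advice, final, clip=True):
    """Weight of Advice (WOA) in the Judge-Advisor System paradigm:
    the fraction of the distance toward the advice that the judge
    moved between the initial and final estimates."""
    if advice == initial:
        # Advice identical to the judge's own estimate: WOA is undefined
        return None
    woa = (final - initial) / (advice - initial)
    if clip:
        # Common convention: bound WOA to [0, 1] so that overshooting
        # or moving away from the advice does not distort averages
        woa = max(0.0, min(1.0, woa))
    return woa

# Example: initial guess 70 kg, advisor says 80 kg, final answer 76 kg
print(weight_of_advice(70, 80, 76))  # → 0.6 (moved 60% of the way)
```

A WOA of 0 means the advice was ignored; 1 means the judge adopted it fully.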

The research reveals that (1) individuals with high attachment anxiety are more likely to adopt advice from AI than from humans: as attachment anxiety increases, adoption of AI advice rises significantly, while adoption of human advice does not change significantly; and (2) individuals with high attachment avoidance show no significant preference for either AI or human advice. AI advice may appeal to highly attachment-anxious individuals because of its perceived objectivity and lack of social threat, which fits their desire for support free of the fear of social rejection. In contrast, attachment avoidance does not drive a clear preference for either source, suggesting that the mechanisms underlying advice acceptance may differ between attachment anxiety and avoidance. Future research should explore additional mediators and moderators that may explain these differential effects and examine the impact of attachment styles on advice taking in more complex and ecologically valid decision-making scenarios.
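The reported pattern — AI-advice adoption rising with attachment anxiety while human-advice adoption stays flat — is an anxiety × source interaction with differing simple slopes. A hypothetical illustration of how such a moderation test can be set up with ordinary least squares (simulated data and effect sizes, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
anxiety = rng.normal(0, 1, n)        # standardized attachment anxiety score
ai = rng.integers(0, 2, n)           # advice source: 1 = AI, 0 = human
# Simulated WOA: anxiety raises adoption only for AI advice (hypothetical sizes)
woa = 0.4 + 0.15 * anxiety * ai + rng.normal(0, 0.1, n)

# Design matrix: intercept, anxiety, source, anxiety × source interaction
X = np.column_stack([np.ones(n), anxiety, ai, anxiety * ai])
beta, *_ = np.linalg.lstsq(X, woa, rcond=None)

# Simple slopes: effect of anxiety on WOA within each advice source
slope_human = beta[1]
slope_ai = beta[1] + beta[3]
print(f"anxiety slope (human advice): {slope_human:.3f}")
print(f"anxiety slope (AI advice):    {slope_ai:.3f}")
```

A significant interaction coefficient (`beta[3]`) with a near-zero human-advice slope is the signature of the pattern described above.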

In terms of theoretical contributions, this study introduces attachment theory into research on advice interaction between humans and artificial intelligence. It enriches theoretical work on human-machine interaction and provides a novel framework for understanding the seemingly contradictory phenomena of algorithm appreciation and algorithm aversion in the intelligent era. In terms of practical implications, the findings offer insights for the design of AI-based decision support systems, especially in contexts where user trust and acceptance are crucial for effective human-machine collaboration. For individual advice adoption, the study offers a reference for optimizing advice-taking and decision-making patterns in the intelligent era: those who can rationally assess the respective capability boundaries of AI and humans, and comprehensively evaluate advice from different sources, are more likely to make effective decisions. Individuals with high attachment anxiety may exhibit irrational algorithm appreciation and adopt erroneous suggestions, which can in turn distort their cognition, so the potential risk of over-reliance deserves attention.

Key words

advice taking / attachment styles / human-machine collaboration / algorithm appreciation / algorithm aversion

Cite this article

Shu Cong, Jia Yongqi, He Lingnan. Who Listens to AI? The Role of Attachment Styles in Adopting Artificial Intelligence Advice[J]. Journal of Psychological Science, 2026, 49(3): 514-523. https://doi.org/10.16719/j.cnki.1671-6981.20260301

References

[1]
Du, X. F., Hu, W. H., Wang, J., & Li, F. (2022). The influence of situational publicity and impression management motivation on advice taking. Journal of Psychological Science, 45(5), 1198-1205. (in Chinese)
[2]
Duan, J. Y., Gu, X. H., & Sun, L. Y. (2016). The effects of explicit self-esteem, implicit self-esteem, and their discrepancy on advice taking. Acta Psychologica Sinica, 48(4), 371-384. (in Chinese)
[3]
Li, C. N., Sun, Y., Tuo, R., & Liu, J. (2016). The influence of secure attachment on interpersonal trust: The moderating effect of attachment anxiety. Acta Psychologica Sinica, 48(8), 989-1001. (in Chinese)
[4]
Li, T. G., & Kato, K. (2006). Measuring adult attachment: Chinese version of the Experiences in Close Relationships (ECR) scale. Acta Psychologica Sinica, 38(3), 399-406. (in Chinese)
[5]
Luo, X. T., & Pan, Y. F. (2023). Advice interaction from an interpersonal perspective: Decision-making, social cognitive processes, and computational neural mechanisms. Chinese Science Bulletin, 68(Z2), 3809-3822. (in Chinese)
[6]
Shen, Q., & Wang, L. Y. (2021). When "robots" become social actors: Stereotypes in human-machine interaction. Journalism & Communication, 28(2), 37-52, 127. (in Chinese)
[7]
Wang, L. H., Hu, S. M., & Dong, Z. Q. (2022). Artificial intelligence technology, task attributes, and occupational substitutability risk: Micro-level empirical evidence. Journal of Management World, 38(7), 60-79. (in Chinese)
[8]
Wen, Z. L., Hau, K. T., & Zhang, L. (2005). Comparison and application of moderating effects and mediating effects. Acta Psychologica Sinica, 37(2), 268-274. (in Chinese)
[9]
Beck, U. (2003). From industrial society to risk society (Part I): Reflections on human survival, social structure, and ecological enlightenment (W. L. Wang, Trans.). Marxism & Reality, 3, 26-45. (in Chinese)
[10]
Wu, X. H. (2020). Digital trust and the reconstruction of trust in digital society. Study and Practice, 10, 87-96. (in Chinese)
[11]
Xu, J. Z., & Xie, X. F. (2009). Advice taking in decision making. Advances in Psychological Science, 17(5), 1016-1025. (in Chinese)
[12]
Zhang, Y. M., Du, X. F., & Wang, X. X. (2015). The effects of anxiety and advisor benevolence on advice taking. Journal of Psychological Science, 38(5), 1155-1161. (in Chinese)
[13]
Zhou, H., & Long, L. R. (2004). Statistical remedies for controlling common method biases. Advances in Psychological Science, 12(6), 942-950. (in Chinese)
[14]
Ahn, J., Kim, J., & Sung, Y. (2021). AI-powered recommendations: The roles of perceived similarity and psychological distance on persuasion. International Journal of Advertising, 40, 1366-1384.
[15]
Ainsworth, M., Blehar, M. C., Waters, E., & Wall, S. (1979). Patterns of attachment: A psychological study of the strange situation. Journal of Child Psychology and Psychiatry, 23, 373-380.
[16]
Anders, S. L., & Tucker, J. S. (2000). Adult attachment style, interpersonal communication competence, and social support. Personal Relationships, 7(4), 379-389.
[17]
Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI and Society, 35(3), 611-623.
[18]
Bartholomew, K., & Horowitz, L. M. (1991). Attachment styles among young adults: A test of a four-category model. Journal of Personality and Social Psychology, 61(2), 226-244.
[19]
Bartz, J. A. (2004). Close relationships and the working self-concept: Implicit and explicit effects of priming attachment on agency and communion. Personality and Social Psychology Bulletin, 30(11), 1389-1401.
[20]
Besikci, E. (2017). Soliciting relationship advice: On the predictive roles of relationship commitment and romantic attachment (Unpublished doctoral dissertation). Purdue University.
[21]
Bowlby, J. (1969). Attachment and loss. Basic Books.
[22]
Bowlby, J. (1982). Attachment and loss: Retrospect and prospect. American Journal of Orthopsychiatry, 52(4), 664-678.
[23]
Branch, S. E., & Dorrance Hall, E. (2018). Advice in intimate relationships. In E. L. MacGeorge & L. M. Van Swol (Eds.), The Oxford handbook of advice (pp. 91-109). Oxford University Press.
[24]
Brehm, S. S. (1992). Intimate relationships. McGraw-Hill Book Company.
[25]
Burton, J. W., Stein, M., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220-239.
[26]
Chua, A. Y., Pal, A., & Banerjee, S. (2023). AI-enabled investment advice: Will users buy it? Computers in Human Behavior, 138, 107481.
[27]
Collins, N. L., & Allard, L. M. (2004). Cognitive representations of attachment: The content and function of working models. In M. B. Brewer & M. Hewstone (Eds.), Social cognition (pp. 75-101). Blackwell Publishing.
[28]
Collins, N. L., & Feeney, B. C. (2004). Working models of attachment shape perceptions of social support: Evidence from experimental and observational studies. Journal of Personality and Social Psychology, 87(3), 363-383.
[29]
Deriu, V., Pozharliev, R., & De Angelis, M. (2024). How trust and attachment styles jointly shape job candidates' AI receptivity. Journal of Business Research, 179, 114717.
[30]
Diaconescu, A. O., Stecy, M., Kasper, L., Burke, C. J., & Tobler, P. N. (2020). Neural arbitration between social and individual learning systems. eLife, 9, e54051.
[31]
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.
[32]
Dijkstra, J. J. (1999). User agreement with incorrect expert system advice. Behaviour and Information Technology, 18(6), 399-411.
[33]
Dijkstra, J. J., Liebrand, W. B. G., & Timminga, E. (1998). Persuasiveness of expert systems. Behaviour and Information Technology, 17(3), 155-163.
[34]
Dunn, J. R., & Schweitzer, M. E. (2005). Feeling and believing: The influence of emotion on trust. Journal of Personality and Social Psychology, 88(5), 736-748.
[35]
Dziergwa, M., Kaczmarek, M., Kaczmarek, P., Kędzierski, J., & Wadas-Szydłowska, K. (2017). Long-term cohabitation with a social robot: A case study of the influence of human attachment patterns. International Journal of Social Robotics, 10, 163-176.
[36]
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149-1160.
[37]
Feeney, J. A., & Ryan, S. M. (1994). Attachment style and affect regulation: Relationships with health behavior and family experiences of illness in a student sample. Health Psychology, 13(4), 334-345.
[38]
Feng, B., & MacGeorge, E. L. (2006). Predicting receptiveness to advice: Characteristics of the problem, the advice-giver, and the recipient. Southern Communication Journal, 71(1), 67-85.
[39]
Fitzsimons, G. J., & Lehmann, D. R. (2004). Reactance to recommendations: When unsolicited advice yields contrary responses. Marketing Science, 23(1), 82-94.
[40]
Fraley, R. C., & Shaver, P. R. (1998). Airport separations: A naturalistic study of adult attachment dynamics in separating couples. Journal of Personality and Social Psychology, 75(5), 1198-1212.
[41]
Gino, F., & Schweitzer, M. E. (2008). Blinded by anger or feeling the love: How emotions influence advice taking. Journal of Applied Psychology, 93(5), 1165-1173.
[42]
Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607.
[43]
Gillath, O., Sesko, A. K., Shaver, P. R., & Chun, D. S. (2010). Attachment, authenticity, and honesty: Dispositional and experimentally induced security can reduce self- and other-deception. Journal of Personality and Social Psychology, 98(5), 841-855.
[44]
Gino, F., & Moore, D. A. (2007). Effects of task difficulty on use of advice. Journal of Behavioral Decision Making, 20(1), 21-35.
[45]
Godek, J., & Murray, K. B. (2008). Willingness to pay for advice: The role of rational and experiential processing. Organizational Behavior and Human Decision Processes, 106(1), 77-87.
[46]
Granulo, A., Fuchs, C., & Puntoni, S. (2020). Preference for human (vs. robotic) labor is stronger in symbolic consumption contexts. Journal of Consumer Psychology, 31(1), 72-80.
[47]
Hazan, C., & Shaver, P. (1987). Romantic love conceptualized as an attachment process. Journal of Personality and Social Psychology, 52(3), 511-524.
[48]
Hertz, N., & Wiese, E. (2019). Good advice is beyond all price, but what if it comes from a machine? Journal of Experimental Psychology: Applied, 25(3), 386-395.
[49]
Hilbert, M., & Darmon, D. (2020). How complexity and uncertainty grew with algorithmic trading. Entropy, 22(5), 499.
[50]
Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380, 1222-1223.
[51]
Lee, Y. C., Yamashita, N., Huang, Y., & Fu, W. (2020). "I hear you, I feel you": Encouraging deep self-disclosure through a chatbot. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, USA.
[52]
Logg, J., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
[53]
Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The 'Word-of-Machine' effect. Journal of Marketing, 86, 91-108.
[54]
Lourenço, C. J., Dellaert, B. G., & Donkers, B. (2020). Whose algorithm says so: The relationships between type of firm, perceptions of trust and expertise, and the acceptance of financial robo-advice. Journal of Interactive Marketing, 49, 107-124.
[55]
Lucas, G. M., Rizzo, A., Gratch, J., Scherer, S., Stratou, G., Boberg, J., & Morency, L.-P. (2017). Reporting mental health symptoms: Breaking down barriers to care with virtual human interviewers. Frontiers in Robotics and AI, 4, 51.
[56]
Lucas, G. M., Gratch, J., King, A., & Morency, L. P. (2014). It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94-100.
[57]
McGuire, J., De Cremer, D., & Van de Cruys, T. (2024). Establishing the importance of co-creation and self-efficacy in creative collaboration with artificial intelligence. Scientific Reports, 14, 1.
[58]
Meijerink, J., & Bondarouk, T. (2023). The duality of algorithmic management: Toward a research agenda on HRM algorithms, autonomy and value creation. Human Resource Management Review, 33, 100876.
[59]
Mikulincer, M., & Shaver, P. R. (2001). Attachment theory and intergroup bias: Evidence that priming the secure base schema attenuates negative reactions to out-groups. Journal of Personality and Social Psychology, 81(1), 97-115.
[60]
Ognibene, T. C., & Collins, N. L. (1998). Adult attachment styles, perceived social support and coping strategies. Journal of Social and Personal Relationships, 15(3), 323-345.
[61]
Overall, N. C., & Sibley, C. G. (2010). Attachment and dependence regulation within daily interactions with romantic partners. Personal Relationships, 16(2), 239-261.
[62]
Palmeira, M. (2020). Advice in the presence of external cues: The impact of conflicting judgments on perceptions of expertise. Organizational Behavior and Human Decision Processes, 156(4), 82-96.
[63]
Rabb, N., Law, T., Chita-Tegmark, M., & Scheutz, M. (2021). An attachment framework for human-robot interaction. International Journal of Social Robotics, 14, 539-559.
[64]
Rodriguez, L. M., Coy, A., & Hadden, B. W. (2021). The attachment dynamic: Dyadic patterns of anxiety and avoidance in relationship functioning. Journal of Social and Personal Relationships, 38, 971-994.
[65]
Sarmiento-Lawrence, I. G., & Van Swol, L. M. (2023). Testing the role of attachment styles in advice response theory. Southern Communication Journal, 88, 117-130.
[66]
Shaffer, P. A., Vogel, D. L., & Wei, M. (2006). The mediating roles of anticipated risks, anticipated benefits, and attitudes on the decision to seek professional help: An attachment perspective. Journal of Counseling Psychology, 53(4), 442-452.
[67]
Shaffer, V. A., Probst, C. A., Merkle, E. C., Arkes, H. R., & Medow, M. A. (2012). Why do patients derogate physicians who use a computer-based diagnostic support system? Medical Decision Making, 33(1), 108-118.
[68]
Sibley, C. G., & Overall, N. C. (2008). Modeling the hierarchical structure of attachment representations: A test of domain differentiation. Personality and Individual Differences, 44, 238-249.
[69]
Simmons, B. L., Gooty, J., Nelson, D. L., & Little, L. M. (2009). Secure attachment: Implications for hope, trust, burnout, and performance. Journal of Organizational Behavior, 30(2), 233-247.
[70]
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2021). My chatbot companion: A study of human-chatbot relationships. International Journal of Human-Computer Studies, 149, 102601.
[71]
Sniezek, J. A., Schrah, G. E., & Dalal, R. S. (2004). Improving judgement with prepaid expert advice. Journal of Behavioral Decision Making, 17(3), 173-190.
[72]
Sniezek, J. A., & Van Swol, L. M. (2001). Trust, confidence, and expertise in a judge-advisor system. Organizational Behavior and Human Decision Processes, 84(2), 288-307.
[73]
Stever, G. S. (2013). Mediated vs. parasocial relationships: An attachment perspective. Journal of Media Psychology, 17(3), 1-39.
[74]
Tsvetkova, M., Yasseri, T., Pescetelli, N., & Werner, T. (2024). A new sociology of humans and machines. Nature Human Behaviour, 8, 1864-1876.
[75]
Vogel, D. L., & Wei, M. (2005). Adult attachment and help-seeking intent: The mediating roles of psychological distress and perceived social support. Journal of Counseling Psychology, 52(3), 347-357.
[76]
White, T. B. (2005). Consumer trust and advice acceptance: The moderating roles of benevolence, expertise, and negative emotions. Journal of Consumer Psychology, 15(2), 141-148.
[77]
Xie, T., & Pentina, I. (2022). Attachment theory as a framework to understand relationships with social chatbots: A case study of Replika. Proceedings of the 55th Hawaii International Conference on System Sciences. Maui, USA.
[78]
Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8, 1839-1850.
[79]
Yaniv, I. (2004). Receiving other people's advice: Influence and benefit. Organizational Behavior and Human Decision Processes, 93(1), 1-13.
[80]
Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making: Egocentric discounting and reputation formation. Organizational Behavior and Human Decision Processes, 83(2), 260-281.
[81]
Yaniv, I., & Milyavsky, M. (2007). Using advice from multiple sources to revise and improve judgments. Organizational Behavior and Human Decision Processes, 103(1), 104-120.
[82]
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 1.