The Influence of Advice Sources on Confirmatory Information Bias in the Context of AI Algorithms*

Ling Bin1, He Xiaoying1, Wang Xiaochen**2,3,4

心理科学 (Journal of Psychological Science), 2025, Vol. 48, Issue (5): 1185-1196. DOI: 10.16719/j.cnki.1671-6981.20250514
Social, Personality, and Management


Abstract

Drawing on the cognitive economy model and the elaboration likelihood model, four experiments explored how the source of decision advice influences confirmatory information bias. Study 1a examined the effect of advice source on confirmatory bias; Study 1b ruled out the influence of the stage at which advice is presented on the relationship between advice source and confirmatory bias; Study 2 explored the mediating role of decision certainty; and Study 3 explored the moderating role of information transparency. The results showed that confirmatory bias under self-made decisions differed significantly across advice sources: advice from AI algorithms elicited less confirmatory bias than advice from humans, with decision certainty serving as a mediator. When advice was presented after the initial decision, the two sources of advice produced no significant difference in confirmatory bias. Information transparency moderated both the relationship between advice source and decision certainty and the mediating effect of decision certainty: the lower the transparency, the stronger the mediating effect; the higher the transparency, the weaker it became.


Confirmatory information bias significantly impedes timely adjustment of initial decisions, with detrimental effects on decision quality. Previous empirical studies of confirmatory information bias have focused on factors influencing independent individual decision-making. Yet individual decisions rarely occur in isolation; they often involve external decision advice. At the same time, the rise of algorithm-assisted decision-making has broadened the sources of such advice, ranging from purely human judgment to deep involvement of artificial intelligence (AI). As algorithm-based decision advice becomes increasingly prevalent, understanding how individuals exhibit confirmatory bias toward algorithmic advice is crucial for ensuring that flawed decisions are corrected in time. However, the mechanisms by which individuals search for and evaluate decision information in advice-taking contexts remain unclear. This study explores how the source of decision advice influences individuals' confirmatory information bias and the mechanisms underlying this effect.
Based on the cognitive economy model of confirmatory information bias, we propose that different sources of decision advice have distinct effects on individuals' confirmatory biases. To test this proposition, we designed four experiments. Study 1 employed a scenario-based experimental design to examine the impact of human-expert advice versus algorithmic advice on confirmatory bias, comparing these effects with those observed in a self-decision control group. Study 2 investigated how the stage at which decision advice was presented influenced the relationship between the source of advice and confirmatory bias. To enhance the robustness and generalizability of the findings, Study 3 used new situational materials to further explore the mechanisms by which the source of decision advice affected confirmatory information bias, while also examining the mediating role of decision certainty; it additionally ruled out alternative explanations based on identity threat, familiarity with AI technology, and mood. Finally, Study 4 analyzed the moderating effect of information transparency on the relationship between the source of decision advice and decision certainty.
The results showed that receiving decision advice significantly reduced confirmatory bias compared with self-decision scenarios. Notably, algorithmic advice resulted in lower levels of confirmatory bias than human-expert advice. This effect was mediated by decision certainty: AI algorithmic advice decreased decision certainty, which in turn reduced confirmatory bias. Furthermore, when decision advice was presented after the initial decision had been made, there was no significant difference in confirmatory bias between AI and human-expert advice. The study also found that information transparency played a critical moderating role in the relationship between the source of decision advice and decision certainty. Specifically, when the transparency of the advice was low, the mediating effect of decision certainty was strengthened; when transparency was high, the effect was weakened or even eliminated.
In summary, the source of decision advice has a profound impact on confirmatory information bias, with decision certainty serving as a key mediating factor. The relationship between advice source and decision certainty is significantly moderated by information transparency: lower transparency strengthens this relationship, while higher transparency weakens or eliminates it. In the context of algorithm-aided decision-making, these findings have important implications for designing channels that shape how humans perceive algorithmic advice.
Our findings have several important implications. First, they expand research on confirmatory information bias from traditional decision-making models to the realm of AI algorithms. By comparing bias differences between AI algorithmic advice and human-expert advice, this study provides a new theoretical perspective on the relationship between AI algorithms and human decision-making biases. Second, this study extends the application of the cognitive economy model in the context of AI. By introducing this model to compare two types of advice, it treats decision certainty as a theoretical mechanism for explaining confirmatory bias, offering a novel lens for understanding human-AI perception differences. Finally, it highlights the role of information transparency. Transparency of decision advice is essential for the cognitive economy model to function effectively in AI contexts, offering insights for the design and theoretical exploration of AI-assisted decision systems.
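
For readers who want the statistical structure made explicit: the mediation and moderation pattern described above corresponds to a first-stage moderated mediation (conditional process) model. The following is a minimal sketch under assumed linear specifications, using hypothetical symbols not taken from the paper (X = advice source, M = decision certainty, Y = confirmatory information bias, W = information transparency):

\begin{aligned}
M &= a_0 + a_1 X + a_2 W + a_3 (X \cdot W) + \varepsilon_M \\
Y &= b_0 + c' X + b_1 M + \varepsilon_Y \\
\omega(W) &= (a_1 + a_3 W)\, b_1
\end{aligned}

Here \omega(W) is the conditional indirect effect of advice source on confirmatory bias through decision certainty. Under this sketch, the reported pattern corresponds to a larger indirect effect at low transparency and a smaller or null indirect effect at high transparency.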


Key words

confirmatory information bias / algorithmic decision-making / decision certainty / information transparency

Cite this article

Ling Bin, He Xiaoying, Wang Xiaochen. The Influence of Advice Sources on Confirmatory Information Bias in the Context of AI Algorithms[J]. Journal of Psychological Science, 2025, 48(5): 1185-1196. https://doi.org/10.16719/j.cnki.1671-6981.20250514


Funding

*This research was supported by the National Social Science Fund of China (24BGL157), the Fundamental Research Funds for the Central Universities (B250207063), the Zhejiang Provincial Natural Science Foundation (LMS25G020004), and the Zhejiang Province Higher Education "14th Five-Year Plan" Graduate Education Reform Project (MS25G020006).
