Can the Public Accept AI-Generated Health News? Experimental Evidence from Trust Mediation and Negative Expectancy Violation Moderation*

Na Yuxiang, Liu Yingxuan, Lai Kaisheng**

Journal of Psychological Science (心理科学), 2025, Vol. 48, Issue 4: 972-984. DOI: 10.16719/j.cnki.1671-6981.20250417

Section: Computational Modeling and Artificial Intelligence

Abstract

AI-generated health news holds great potential for health communication, such as spreading health knowledge and improving medical services, but direct empirical evidence on whether the public accepts AI health news, and on the underlying psychological mechanisms, is still lacking. Two between-subjects experiments examined the public's trust in and acceptance of AI health news, as well as the moderating role of negative expectancy violation. Results showed that, compared with human-written news, the public was less willing to accept information from AI health news, and perceived trust in AI health news mediated the effect of news agent type (AI vs. human) on willingness to accept information. Moreover, negative expectancy violation moderated the relationship between news agent type and perceived trust: the higher the public's negative expectancy violation, the larger the gap in perceived trust between AI-generated and human-written health news.

Extended Abstract

The potential of artificial intelligence (AI) in health journalism is increasingly recognized, offering promising opportunities to enhance the dissemination of health information. By automating routine news production, AI enables journalists to allocate more time and resources to strategic health initiatives, policy development, and in-depth investigative reporting. However, the public's trust in and acceptance of AI-generated health information remain underexplored. Prior research suggests that individuals may actively reject or avoid information provided by AI, a phenomenon known as "AI information avoidance". This poses a significant challenge to the effectiveness of AI-driven health news: if the public distrusts or resists health information disseminated by AI systems, the potential benefits of AI in health communication could be severely undermined.
In this context, it is critical to examine the psychological processes underlying the public's engagement with AI health news. Expectancy violation theory provides a valuable framework for understanding these dynamics because it illuminates how deviations from psychological expectations affect trust and acceptance. This study explores the psychological mechanisms underlying the public's perception of AI news through the lens of expectancy violation; by clarifying these relationships, we hope to strengthen the role of AI as a transformative tool for realizing the ideal vision of health journalism. Specifically, drawing on expectancy violation theory, this study used two between-subjects experiments and a moderated mediation model to examine the effect of news agent type (AI vs. human) on the public's willingness to accept information and the moderating role of negative expectancy violation.
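For concreteness, the tested first-stage moderated mediation can be written as two regression equations. The notation below is our own illustration, in the spirit of a PROCESS Model 7 specification; the paper does not report its exact estimation syntax.

```latex
% Illustrative notation (not from the paper):
% X = news agent type (0 = human, 1 = AI), W = negative expectancy violation,
% M = perceived trust, Y = willingness to accept information.
\begin{aligned}
M_i &= a_0 + a_1 X_i + a_2 W_i + a_3 (X_i \times W_i) + \varepsilon_{1i} \\
Y_i &= b_0 + c' X_i + b_1 M_i + \varepsilon_{2i}
\end{aligned}
\qquad \text{conditional indirect effect at } W:\; (a_1 + a_3 W)\, b_1
```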
The results of Experiment 1 showed a significant difference in participants' willingness to accept health news written by an AI agent versus a human agent (t = -6.75, p < .001, Cohen's d = 1.49), supporting Hypothesis 1. Perceived trust mediated the relationship between news agent type and willingness to accept information: news agent type significantly predicted perceived trust (β = -1.28, t = -7.41, p < .001), and perceived trust significantly predicted willingness to accept information (β = .94, t = 23.27, p < .001). The mediation analysis showed a nonsignificant direct effect and a significant indirect effect (β = -1.20, 95% CI [-1.54, -.88]), indicating full mediation and supporting Hypothesis 2. Furthermore, negative expectancy violation moderated the relationship between news agent type and perceived trust (β = -.22, t = -2.34, p = .02): participants' negative expectancy violation exacerbated the detrimental effect of AI authorship on perceived trust, supporting Hypothesis 3. Experiment 2 further validated this interaction by directly manipulating participants' degree of negative expectancy violation in a more rigorous experimental design, replicating the findings of Experiment 1.
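For readers who want to reproduce this kind of analysis, the following is a minimal sketch of how such a first-stage moderated mediation could be estimated with ordinary least squares and a percentile bootstrap in Python. The data, variable names, and effect sizes are simulated placeholders for illustration only; they are not the study's data, code, or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data; in the actual study this would be the experimental dataset.
n = 200
agent = rng.integers(0, 2, n)            # 0 = human author, 1 = AI author
nev = rng.normal(0, 1, n)                # negative expectancy violation (standardized)
trust = 4.5 - 1.2 * agent - 0.2 * agent * nev + rng.normal(0, 1, n)   # perceived trust
accept = 1.0 + 0.9 * trust + rng.normal(0, 1, n)                      # acceptance intention
df = pd.DataFrame({"agent": agent, "nev": nev, "trust": trust, "accept": accept})

# Mediator model with the agent x NEV interaction, plus the outcome model
# (first-stage moderated mediation: the moderator acts on the agent -> trust path).
med_model = smf.ols("trust ~ agent * nev", data=df).fit()
out_model = smf.ols("accept ~ agent + trust", data=df).fit()
print(med_model.params)
print(out_model.params)

def conditional_indirect(data, w):
    """Indirect effect of agent on acceptance via trust, at moderator value w."""
    a = smf.ols("trust ~ agent * nev", data=data).fit().params
    b = smf.ols("accept ~ agent + trust", data=data).fit().params["trust"]
    return (a["agent"] + a["agent:nev"] * w) * b

# Percentile bootstrap CIs for the conditional indirect effect at +/- 1 SD of NEV.
for w in (-df["nev"].std(), df["nev"].std()):
    boot = [conditional_indirect(df.sample(len(df), replace=True), w) for _ in range(1000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"NEV = {w:+.2f}: indirect = {conditional_indirect(df, w):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```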
Our study reveals a significant public preference for health news authored by human agents over AI agents, underscoring the critical role of news agent type in shaping information acceptance. Importantly, perceived trust emerges as a central mediating mechanism explaining this disparity in public reception. Furthermore, our findings highlight the moderating role of negative expectancy violation in the relationship between news agent type (human vs. AI) and perceived trust: negative expectancy violation further exacerbates the negative impact of an AI news agent on perceived trust. These results emphasize the importance of strategically managing public expectations when deploying AI-driven health news.
At the theoretical level, this study expands the range of news genres examined in AI news research by focusing specifically on the public's trust in, and willingness to accept information from, AI health news. Moreover, by revealing the moderating role of negative expectancy violation in the relationship between news agent type and perceived trust, it offers an explanation for the inconsistent findings of previous studies on trust in AI news. At the practical level, the study documents the negative expectancy violation the public experiences when exposed to AI health news and confirms the impact of the negative feedback arising from unmet expectations on the public's willingness to accept health information.

Keywords

AI-generated news / artificial intelligence / expectancy violation / trust / health communication

Cite this article

Na Yuxiang, Liu Yingxuan, Lai Kaisheng. Can the Public Accept AI-Generated Health News? Experimental Evidence from Trust Mediation and Negative Expectancy Violation Moderation[J]. Journal of Psychological Science, 2025, 48(4): 972-984. https://doi.org/10.16719/j.cnki.1671-6981.20250417

Funding

*This research was supported by the Fundamental Research Funds for the Central Universities (23JNQMX50).
