Large language models have shown great development potential and application value across many fields, but the "double-edged sword" effect of the technology has also sparked wide public debate over its value and potential risks. Drawing on the ABC attitude model, this study combines LLM-driven computational text analysis with multinomial logistic regression to systematically examine the mechanisms linking public discussion topics, emotional expression, and usage intentions. The results show that public emotions and usage intentions differ significantly across topics; usage intentions toward large language models are jointly shaped by topics and emotions, with emotion playing a particularly prominent role: a positive emotional shift both weakens the intention not to use and strengthens the intention to use. By unpacking the three dimensions of public attitudes, the study offers psychological research on public attitudes a new analytical mode that integrates social media data with large language models, and provides a public-perspective insight for AI governance.
Abstract
Advances in research and technological innovation have demonstrated the significant potential and application value of Large Language Models (LLMs) in fields such as healthcare, education, and journalism. However, their widespread adoption has also highlighted the "double-edged sword" nature of the technology. Challenges including accountability dilemmas caused by algorithmic black boxes, societal fairness concerns stemming from occupational displacement anxiety, and privacy risks associated with massive data processing have sparked widespread public debate about their value and potential risks. Consequently, understanding public attitudes and addressing concerns have become pivotal to balancing technological innovation with social acceptance. The proliferation of internet access and social media platforms has opened new paradigms and observational avenues for public attitude research, yet existing studies of public perceptions of LLMs often focus on individual and external factors. Drawing on rich social media comment data and the ABC attitude model, this research identifies the key topics in public discussion of LLMs, analyzes how emotional and behavioral tendencies are distributed across these topics, and investigates the interaction mechanisms among public concerns, emotional expressions, and usage intentions.
The study was conducted in two phases. In the first phase, public comments on LLMs were collected from the Douyin platform. Using few-shot prompting combined with human collaboration, the study leveraged the semantic understanding and generation capabilities of large language models to automatically categorize comments by topic, sentiment, and usage intention. During the LLM-assisted and manual clustering of topics, the study drew on the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) and the Value-Based Adoption Model to guide thematic clustering. In the second phase, multinomial logistic regression models were used to examine how discussion topics shape sentiment toward, and usage intentions for, large language models. Public discussion topic served as the independent variable; sentiment was the dependent variable in Model 1, while usage intentions were the dependent variables in Models 2 and 3. Control variables accounted for regional differences and time effects.
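The few-shot prompting step described above can be sketched as follows. This is a minimal illustration, not the authors' actual prompt: the sentiment labels, the example comments, and the idea of returning `None` for unparseable replies (to route them to human review, matching the human-collaboration design) are all assumptions for demonstration; the call to a real LLM API is deliberately left out.

```python
# Sketch of few-shot sentiment classification via prompting.
# Labels, examples, and parsing rules are illustrative assumptions.

SENTIMENT_LABELS = ["positive", "neutral", "negative"]

# A handful of labelled demonstrations placed in the prompt (few-shot).
FEW_SHOT = [
    ("This model drafts my reports in seconds, amazing.", "positive"),
    ("Not sure it changes anything for me.", "neutral"),
    ("It will take our jobs and leak our data.", "negative"),
]

def build_prompt(comment: str) -> str:
    """Assemble an instruction, the few-shot examples, and the target comment."""
    lines = ["Classify the comment's sentiment as one of: "
             + ", ".join(SENTIMENT_LABELS) + "."]
    for text, label in FEW_SHOT:
        lines.append(f"Comment: {text}\nSentiment: {label}")
    lines.append(f"Comment: {comment}\nSentiment:")
    return "\n\n".join(lines)

def parse_label(reply: str):
    """Map the model's free-text reply onto a valid label, or None."""
    reply = reply.strip().lower()
    for label in SENTIMENT_LABELS:
        if reply.startswith(label):
            return label
    return None  # unparseable reply: flag for human review
```

The prompt built by `build_prompt` would be sent to an LLM, and `parse_label` normalizes the reply; replies that match no label fall back to manual annotation, which is one simple way to realize the human-machine collaboration the study describes.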
Key findings reveal: (1) Overall, negative sentiment and the absence of any expressed usage tendency dominated the comments, but public sentiment and usage tendencies differed significantly across topics. (2) Public usage intentions toward LLMs are jointly shaped by discussion topics and emotional factors. Specifically, topics concerning performance expectancy, effort expectancy, and hedonic motivation had positive effects on both emotions and usage intentions: discussions of performance and effort expectancy significantly reduced negative emotions while enhancing usage intentions, and discussions of hedonic motivation not only fostered positive emotions but also mitigated behavioral resistance and increased adoption willingness. Conversely, discussions of price value negatively affected emotions, significantly decreasing the likelihood of positive emotional expression. Notably, emotional factors played a particularly crucial role, simultaneously reducing resistance to LLM use and strengthening adoption intentions. (3) Public sentiment and usage intentions toward LLMs showed no significant regional divide; over time, however, the public's positive sentiment toward large language models gradually became more rational, while behavioral resistance gradually diminished.
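The regression design behind these findings (topic dummies predicting a categorical sentiment outcome) can be sketched with a minimal multinomial logistic regression fitted by gradient descent. This is an illustration of the model class only: the topic coding, the synthetic data, and the hyperparameters are assumptions, not the study's actual specification or results.

```python
import numpy as np

# Minimal multinomial (softmax) logistic regression, sketching the
# Model 1 design: topic dummies -> sentiment category.
# Data, coding, and hyperparameters are illustrative assumptions.

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)  # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_mnlogit(X, y, n_classes, lr=0.5, epochs=2000):
    """Fit softmax regression by minimizing average cross-entropy."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]              # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y)) / n     # average-gradient step
    return W

def predict(X, W):
    """Most probable sentiment class for each comment."""
    return softmax(X @ W).argmax(axis=1)

# Illustrative data: 3 discussion topics (dummy-coded with an intercept)
# and 3 sentiment classes (0 = negative, 1 = neutral, 2 = positive),
# with each topic deterministically mapped to one class.
rng = np.random.default_rng(0)
topics = rng.integers(0, 3, size=300)
X = np.column_stack([np.ones(300), np.eye(3)[topics]])
y = topics.copy()
W = fit_mnlogit(X, y, n_classes=3)
```

In the study itself the design matrix would also carry the control variables (region and time dummies), and inference would rest on coefficient estimates and significance tests rather than raw predictive accuracy; the sketch only shows the shape of the model.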
This research contributes to psychological studies of public attitudes by introducing a novel analytical paradigm that integrates social media data with LLM methodologies, and adds a citizen-centric perspective to AI governance. First, the collaborative framework combining LLM processing with human-guided topic clustering effectively leverages LLMs' superior text comprehension capabilities, overcoming the technical complexity and interpretational rigidity of traditional topic modeling approaches; the integration of manual validation and theoretical frameworks significantly enhances analytical accuracy and theoretical relevance. Second, by systematically mapping the core topics of public discussion and their associated emotional and behavioral patterns, and by elucidating the mechanisms through which topics and emotions shape usage intentions, this study deepens the psychological understanding of attitude structures and of public perception dynamics toward emerging AI technologies. Future research should expand data collection to multiple platforms, extend the observation period, and explore commonalities and differences in public attitudes across cultural contexts.
Key words
large language models /
public attitudes /
computational text analysis /
artificial intelligence governance
Funding
*This research was supported by a sub-project of a major consulting research project of the Chinese Academy of Engineering, "Research on the Integrated Collaborative Innovation and Development of the Three Chains in the Fields of Intelligent Manufacturing and Robotics" (2023-JB-10-04).