Investigating the Differential Effects of Audio-visual Information and Emotional Valence on Empathic Accuracy*

Wang Miao1,2,3, Zhang Liying1, Fu Xinwei1, Wang Yi2,3, Jiang Yue4, Cao Yuan5, Wang Yanyu**1, Raymond C. K. Chan**2,3

Journal of Psychological Science, 2025, 48(3): 567-576. DOI: 10.16719/j.cnki.1671-6981.20250306
Basic, Experimental and Ergonomics Research

Abstract

Eighty-five college students completed the Chinese version of the Empathic Accuracy Task to examine how material type (audio-only, human audio-video, and avatar audio-video) and emotional valence (positive, negative) affect task performance. Affective empathy scores were significantly higher in the human audio-video condition than in the avatar audio-video condition, whereas no significant differences among material types emerged for empathic accuracy or cognitive empathy scores. The interaction between material type and emotional valence was significant for both cognitive and affective empathy: for positive videos, empathy scores in the audio-only and human audio-video conditions were significantly higher than in the avatar audio-video condition; for negative videos, affective empathy scores in the human audio-video condition were significantly higher than in the audio-only condition. These results indicate that emotional valence plays an important role in how audio-visual information shapes empathic accuracy task performance, underscoring the importance of visual information such as real human facial expressions in the empathic process.

Background and Aims: Empathy involves the communication and understanding of social information between individuals in specific contexts. Empirical evidence suggests that auditory information may affect empathic ability more than visual information, but the differential effects of sensory modalities on empathic accuracy remain unclear. This study examined the effects of auditory information and different levels of visual information on empathic accuracy using the Chinese version of the Empathic Accuracy Task (EAT). We hypothesized that (1) cognitive empathy in the avatar audio-video condition would be significantly lower than in the auditory-only and human audio-video conditions, and (2) there would be a significant interaction between emotional valence and modality condition on cognitive empathy; specifically, cognitive empathy would be significantly higher in the human audio-video condition than in the audio-only condition for positive-valenced videos, with no significant difference among the three conditions for negative-valenced videos.
Method: We recruited 85 college students to complete the Chinese version of the EAT under three conditions: (1) an auditory-only condition, (2) an avatar audio-video condition (providing less visual information than the human audio-video condition), and (3) a human audio-video condition. The EAT comprised 12 video clips (6 positive and 6 negative), each featuring a character describing an emotional autobiographical event. Participants continuously rated the character's emotional states and then answered questions concerning perspective taking, emotional contagion, empathic concern, and willingness/effort to help.
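Although the abstract does not detail the scoring procedure, empathic accuracy in continuous-rating paradigms such as the EAT is conventionally computed as the correlation between the perceiver's rating stream and the target's own continuous self-ratings. A minimal Python sketch of that scoring, assuming both streams share a one-sample-per-second time grid (the variable names and simulated data are illustrative, not taken from the study):

    import numpy as np

    def empathic_accuracy(perceiver_ratings, target_ratings):
        """Score one clip as the Pearson correlation between the perceiver's
        continuous ratings and the target's own self-ratings, assuming both
        streams are sampled on the same time grid."""
        perceiver = np.asarray(perceiver_ratings, dtype=float)
        target = np.asarray(target_ratings, dtype=float)
        return np.corrcoef(perceiver, target)[0, 1]

    # Hypothetical ratings for one 120-second clip, sampled once per second
    rng = np.random.default_rng(0)
    target = np.cumsum(rng.normal(size=120))              # target's self-reported affect
    perceiver = target + rng.normal(scale=2.0, size=120)  # perceiver tracks it with noise
    r = empathic_accuracy(perceiver, target)

    # Per-clip correlations are typically Fisher z-transformed before being
    # averaged across the clips of a condition
    print(f"clip EA: r = {r:.2f}, Fisher z = {np.arctanh(r):.2f}")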
Results: A 3 (Modality-Condition: auditory-only, avatar audio-video, human audio-video) × 2 (Valence: positive, negative) repeated-measures ANOVA revealed a significant main effect of Modality-Condition on emotional contagion (F(2, 168) = 3.08, p = .049), with the human audio-video condition (M = 7.01, SD = 1.26) eliciting greater emotional contagion than the avatar audio-video condition (M = 6.74, SD = 1.28). The main effects of Modality-Condition on empathic accuracy and perspective taking were not significant. The main effects of Valence on empathic accuracy (F(1, 84) = 10.16, p < .01), emotional contagion (F(1, 84) = 6.45, p < .05), and perspective taking (F(1, 84) = 14.01, p < .001) were significant: empathic responses were enhanced for videos depicting positive moods relative to those depicting negative moods. The Modality-Condition × Valence interactions on perspective taking (F(2, 168) = 7.57, p < .01) and emotional contagion (F(2, 168) = 6.48, p < .01) were significant. Simple effect analyses showed that, for positive-valenced videos, both perspective taking and emotional contagion scores were significantly lower in the avatar audio-video condition (M = 7.15, SD = 1.36; M = 6.69, SD = 1.53) than in the audio-only (M = 7.59, SD = 1.03; M = 7.14, SD = 1.30) and human audio-video (M = 7.57, SD = 1.26; M = 7.17, SD = 1.51) conditions. In contrast, for negative-valenced videos, emotional contagion was higher in the human audio-video condition (M = 6.84, SD = 1.44) than in the audio-only condition (M = 6.52, SD = 1.35). The Modality-Condition × Valence interaction on empathic accuracy was not significant.
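The reported degrees of freedom (2, 168 and 1, 84 with n = 85) indicate a fully within-subject design. A minimal sketch of such a 3 × 2 repeated-measures ANOVA, assuming a long-format table with one mean score per participant, modality condition, and valence (the statsmodels workflow and simulated data are illustrative assumptions, not the authors' actual pipeline):

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Simulated long format: 85 participants x 3 modality conditions x 2 valences
    rng = np.random.default_rng(1)
    modalities = ["auditory-only", "avatar audio-video", "human audio-video"]
    rows = [
        {"subject": s, "modality": m, "valence": v,
         "contagion": rng.normal(loc=7.0, scale=1.3)}
        for s in range(85) for m in modalities for v in ["positive", "negative"]
    ]
    df = pd.DataFrame(rows)

    # 3 (Modality-Condition) x 2 (Valence) repeated-measures ANOVA, yielding
    # F tests with df (2, 168) for modality, (1, 84) for valence, and
    # (2, 168) for their interaction
    res = AnovaRM(df, depvar="contagion", subject="subject",
                  within=["modality", "valence"]).fit()
    print(res.anova_table)

A significant interaction would then be decomposed with simple-effects comparisons of the modality conditions within each valence, as reported above.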
Conclusions: This study investigated the impact of audio-visual information on empathy by comparing audio-only and human audio-video conditions and by differentiating between positive and negative emotional valence. The findings highlight that human facial expressions, when matched with auditory information, significantly enhance affective empathy in negative emotional contexts. In addition, by introducing human and avatar audio-video conditions, the study manipulated the level of visual information available; the impact of visual information on empathy varied with emotional valence, with the avatar audio-video condition undermining empathy in positive-valenced scenarios. Together, this work elucidates how the emotional valence of visual information shapes empathy performance, implicating human visual cues in empathic processing.

Key words

empathic accuracy / cognitive empathy / affective empathy / audio-visual information

Cite this article

Wang Miao, Zhang Liying, Fu Xinwei, Wang Yi, Jiang Yue, Cao Yuan, Wang Yanyu, Raymond C. K. Chan. Investigating the Differential Effects of Audio-visual Information and Emotional Valence on Empathic Accuracy[J]. Journal of Psychological Science, 2025, 48(3): 567-576. https://doi.org/10.16719/j.cnki.1671-6981.20250306

Funding

*This research was supported by the National Natural Science Foundation of China (32061160468) and the Natural Science Foundation of Shandong Province (ZR2021MC103).
