Journal of Psychological Science ›› 2025, Vol. 48 ›› Issue (3): 567-576. DOI: 10.16719/j.cnki.1671-6981.20250306

• Basic, Experimental and Ergonomics •

Effects of Audio-Visual Modality Information and Emotional Valence on Empathic Accuracy Task Performance*

Wang Miao1,2,3, Zhang Liying1, Fu Xinwei1, Wang Yi2,3, Jiang Yue4, Cao Yuan5, Wang Yanyu**1, Raymond C. K. Chan**2,3

  1. School of Psychology, Shandong Second Medical University, Weifang, 261053;
    2. CAS Key Laboratory of Mental Health, Neuropsychology and Applied Cognitive Neuroscience Laboratory, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101;
    3. Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049;
    4. Institute for Higher Education Research and Development, The Hong Kong Polytechnic University, Hong Kong SAR;
    5. Department of Social Work and Social Administration, The University of Hong Kong, Hong Kong SAR
  • Online: 2025-05-20; Published: 2025-05-30
  • Corresponding authors: **Raymond C. K. Chan, E-mail: rckchan@psych.ac.cn; Wang Yanyu, E-mail: wangyanyu@sdsmu.edu.cn
  • Funding:
    *This study was supported by the National Natural Science Foundation of China (32061160468) and the Natural Science Foundation of Shandong Province (ZR2021MC103)

Investigating the Differential Effects of Audio-visual Information and Emotional Valence on Empathic Accuracy

Wang Miao1,2,3, Zhang Liying1, Fu Xinwei1, Wang Yi2,3, Jiang Yue4, Cao Yuan5, Wang Yanyu1, Raymond C. K. Chan2,3   

  1. School of Psychology, Shandong Second Medical University, Weifang, 261053;
    2Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101;
    3Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049;
    4Institute for Higher Education Research and Development, The Hong Kong Polytechnic University, Hong Kong SAR;
    5Department of Social Work and Social Administration, The University of Hong Kong, Hong Kong SAR
  • Online:2025-05-20 Published:2025-05-30

Abstract: Eighty-five college students completed the Chinese version of the Empathic Accuracy Task to examine the effects of material type (audio-only, human audio-video, and avatar audio-video) and emotional valence (positive, negative) on task performance. Results showed that affective empathy ratings were significantly higher in the human audio-video condition than in the avatar audio-video condition, whereas no significant differences across material types were found for empathic accuracy or cognitive empathy ratings. For both cognitive and affective empathy ratings, the interaction between material type and emotional valence was significant: for positive videos, empathy ratings in the audio-only and human audio-video conditions were significantly higher than in the avatar audio-video condition; for negative videos, affective empathy ratings were significantly higher in the human audio-video condition than in the audio-only condition. These results indicate that emotional valence plays an important role in how audio-visual information affects empathic accuracy task performance, underscoring the importance of visual cues such as real human facial expressions in empathic processing.

Keywords: empathic accuracy, cognitive empathy, affective empathy, audio-visual modality information

Abstract: Background and Aims: Empathy involves the communication and understanding of social information between individuals in specific contexts. Empirical evidence suggests that auditory information can affect one's empathic ability more than visual information, but the differential effects of the sensory modality of information on empathic accuracy remain unclear. This study examined the effects of auditory and different visual modalities on empathic accuracy using the Chinese version of the Empathic Accuracy Task (EAT). We hypothesized that (1) cognitive empathy performance in the avatar audio-video condition would be significantly lower than in the auditory-only and human audio-video conditions, and (2) there would be a significant interaction between emotional valence and modality condition on cognitive empathy: specifically, cognitive empathy would be significantly higher in the human audio-video condition than in the audio-only condition for positive-valenced videos, whereas no significant difference among the three conditions was expected for negative-valenced videos.
Method: We recruited 85 college students to complete the Chinese version of the EAT under three conditions: (1) an auditory-only condition, (2) an avatar audio-video condition (conveying less visual information than the human audio-video condition), and (3) a human audio-video condition. The EAT comprises 12 video clips (6 positive and 6 negative), each showing a character describing an emotional autobiographical event. Participants were asked to rate the character's emotional state continuously and to answer questions concerning perspective taking, emotional contagion, empathic concern, and willingness/effort to help.
Results: A 3 (Modality-Condition: auditory-only, avatar audio-video, human audio-video) × 2 (Valence: positive, negative) repeated-measures ANOVA revealed a significant main effect of Modality-Condition on emotional contagion scores (F(2, 168) = 3.08, p = .049), with the human audio-video condition (M = 7.01, SD = 1.26) eliciting greater emotional contagion than the avatar audio-video condition (M = 6.74, SD = 1.28). The main effects of Modality-Condition on empathic accuracy and perspective taking scores were non-significant. The main effects of Valence on empathic accuracy (F(1, 84) = 10.16, p < .01), emotional contagion (F(1, 84) = 6.45, p < .05), and perspective taking (F(1, 84) = 14.01, p < .001) were significant: empathic responses were enhanced for videos depicting positive moods relative to those depicting negative moods. The Modality-Condition × Valence interactions on perspective taking (F(2, 168) = 7.57, p < .01) and emotional contagion (F(2, 168) = 6.48, p < .01) were significant. Simple effect analyses showed that, for positive-valenced videos, both perspective taking and emotional contagion scores were significantly lower in the avatar audio-video condition (M = 7.15, SD = 1.36; M = 6.69, SD = 1.53) than in the audio-only (M = 7.59, SD = 1.03; M = 7.14, SD = 1.30) and human audio-video (M = 7.57, SD = 1.26; M = 7.17, SD = 1.51) conditions. In contrast, for negative-valenced videos, emotional contagion was higher in the human audio-video condition (M = 6.84, SD = 1.44) than in the audio-only condition (M = 6.52, SD = 1.35). The Modality-Condition × Valence interaction on empathic accuracy was not significant.
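For readers unfamiliar with this design, the analysis above is a 3 × 2 fully within-subjects ANOVA (every participant contributes one score per cell). The following is a minimal illustrative sketch, not the authors' analysis code: it computes the two main effects and the interaction by hand on synthetic data, with factor names and effect sizes invented purely for demonstration. With 85 simulated participants, the degrees of freedom match those reported in the Results (e.g., F(2, 168) for the modality main effect).

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 85, 3, 2                      # participants, modality levels, valence levels
# scores[s, i, j]: participant s, modality condition i, valence j (synthetic data)
scores = rng.normal(7.0, 1.3, size=(n, a, b))
scores[:, 2, :] += 0.3                  # illustrative: one condition slightly higher

def rm_anova_2way(x):
    """Two-way fully within-subjects ANOVA, one observation per cell.
    Returns {effect: (F, df_effect, df_error)} for A, B, and A x B."""
    n, a, b = x.shape
    g = x.mean()                        # grand mean
    m_s = x.mean(axis=(1, 2))           # subject means
    m_a = x.mean(axis=(0, 2))           # factor-A (modality) means
    m_b = x.mean(axis=(0, 1))           # factor-B (valence) means
    m_as = x.mean(axis=2)               # subject x A cell means
    m_bs = x.mean(axis=1)               # subject x B cell means
    m_ab = x.mean(axis=0)               # A x B cell means

    ss_a = n * b * ((m_a - g) ** 2).sum()
    ss_b = n * a * ((m_b - g) ** 2).sum()
    ss_ab = n * ((m_ab - m_a[:, None] - m_b[None, :] + g) ** 2).sum()
    ss_as = b * ((m_as - m_s[:, None] - m_a[None, :] + g) ** 2).sum()
    ss_bs = a * ((m_bs - m_s[:, None] - m_b[None, :] + g) ** 2).sum()
    resid = (x - m_as[:, :, None] - m_bs[:, None, :] - m_ab[None, :, :]
             + m_s[:, None, None] + m_a[None, :, None] + m_b[None, None, :] - g)
    ss_abs = (resid ** 2).sum()

    def f(ss_eff, df_eff, ss_err, df_err):
        # Each within-subjects effect is tested against its own subject-by-effect error term
        return (ss_eff / df_eff) / (ss_err / df_err), df_eff, df_err

    return {
        "A":  f(ss_a, a - 1, ss_as, (a - 1) * (n - 1)),
        "B":  f(ss_b, b - 1, ss_bs, (b - 1) * (n - 1)),
        "AB": f(ss_ab, (a - 1) * (b - 1), ss_abs, (a - 1) * (b - 1) * (n - 1)),
    }

res = rm_anova_2way(scores)
print(res["A"])   # (F, 2, 168): modality main effect, dfs matching those reported above
```

In practice such an analysis would typically be run with a statistics package (e.g., statsmodels' `AnovaRM` in Python); the hand computation here only makes the error-term structure explicit.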
Conclusions: This study investigated the impact of audio-visual information on empathy by comparing audio-only and human audio-video conditions while differentiating between positive and negative emotional valence. The findings highlight that human facial expressions, when matched with auditory information, significantly enhance emotional empathy in negative emotional contexts. Additionally, by introducing human and avatar audio-video conditions, the study manipulated the level of visual information available. Our findings suggest that the impact of visual information on empathy varies with emotional valence: the avatar audio-video condition undermined empathy in positive-valenced scenarios. Together, this work elucidates the effects of the emotional valence of visual information on empathy performance, implicating the role of human visual cues in empathy processing.

Key words: empathic accuracy, cognitive empathy, affective empathy, audio-visual information