Using face and voice stimuli, the present study examined how working memory load affects attention capture by audiovisual bimodal stimuli. Experiments 1 and 2 manipulated verbal and visual working memory load, respectively. The results showed that, compared with unisensory auditory stimuli, bimodal stimuli captured attention more effectively. Moreover, the audiovisual enhancement effect was modulated by both the type of load and the sensory modality: as verbal working memory load increased, the auditory benefit from multisensory enhancement increased, whereas the visual benefit did not change significantly. In contrast, as visual working memory load increased, the auditory benefit remained unchanged, whereas the visual benefit decreased significantly. These results support the view that attention capture by audiovisual bimodal stimuli is affected by working memory load, and that the benefits of the visual and auditory modalities from multisensory enhancement differ across load types.
Abstract
In daily life, we often receive information from different sensory modalities. Previous studies have found that multisensory stimuli capture attention more effectively than unisensory stimuli (i.e., multisensory enhancement), and this enhancement has received increasing attention over the past decades. However, it remains unclear whether cognitive load affects attention capture by multisensory stimuli: some studies have shown that it is unaffected by load, whereas others have demonstrated that it is modulated by load. Using complex face and syllable stimuli, the present study explored whether attention capture by audiovisual stimuli is modulated by working memory load, and whether the multisensory enhancement for each modality is affected by the type of working memory load.
In the present study, participants performed a working memory task and a gender identification task, with verbal (Experiment 1) or visual (Experiment 2) working memory load manipulated. Two within-participant variables were used: working memory load (low vs. high) and target modality (visual vs. auditory vs. audiovisual). In Experiment 1, two or six digits were presented at the center of the screen for 1000 ms, and participants were required to remember the digits and their order. A visual (face), auditory (syllable), or bimodal audiovisual target was then presented for 700 ms, followed by a 1500 ms blank screen, and participants judged the gender of the target as quickly and accurately as possible. At the end of each trial, one digit from the memory set was presented and participants reported the digit that had followed it. In Experiment 2, the working memory task used line orientations instead of digits; participants had to remember the orientation and location of each line. All other settings were identical to those of Experiment 1.
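To make the design concrete, the sketch below lays out the 2 (load) x 3 (modality) within-participant design and the trial timeline of Experiment 1 using the durations stated above. It is an illustrative reconstruction in Python, not the authors' experiment code, and the phase labels are our own.

    # Illustrative reconstruction of the design and Experiment 1 trial timeline;
    # not the authors' experiment code. Durations follow the description above.
    from dataclasses import dataclass
    from itertools import product
    from typing import List

    LOADS = ("low", "high")                        # 2 vs. 6 digits to remember
    MODALITIES = ("visual", "auditory", "audiovisual")
    CONDITIONS = list(product(LOADS, MODALITIES))  # all six design cells

    @dataclass
    class Phase:
        name: str
        duration_ms: int        # 0 = self-paced

    def experiment1_trial(load: str, modality: str) -> List[Phase]:
        """Phase sequence of a single Experiment 1 trial."""
        n_digits = 2 if load == "low" else 6
        return [
            Phase(f"memory set: {n_digits} digits", 1000),
            Phase(f"gender target ({modality}: face and/or syllable)", 700),
            Phase("blank screen", 1500),
            Phase("memory probe: report the digit that followed the probe", 0),
        ]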
In both experiments, a significant main effect of modality was found: responses to audiovisual targets were significantly faster than responses to auditory targets, indicating that bimodal audiovisual stimuli capture attention more effectively than unisensory auditory stimuli. Moreover, a significant interaction between working memory load and modality was found; that is, the crossmodal benefit was affected by both working memory load and sensory modality. As verbal working memory load increased, the auditory benefit from multisensory enhancement increased significantly, whereas the visual benefit remained unchanged. In contrast, as visual working memory load increased, the auditory benefit remained unchanged, whereas the visual benefit decreased significantly. These results suggest that working memory load affects attention capture by multisensory stimuli, and that the benefits of the visual and auditory modalities from multisensory enhancement are modulated by the type of working memory load.
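As a reading aid, the following sketch shows one common way to quantify the modality-specific benefit reported above: the reaction-time gain of the audiovisual condition over each unisensory condition, computed within each load level. This operationalization is our assumption and the function is illustrative, not part of the authors' analysis.

    # Assumed operationalization of the crossmodal benefit (illustrative only):
    # benefit(modality) = mean RT(unisensory modality) - mean RT(audiovisual),
    # computed within each load level so the load x modality pattern can be read off.
    from typing import Dict, Tuple

    def multisensory_benefits(
        mean_rt_ms: Dict[Tuple[str, str], float]
    ) -> Dict[Tuple[str, str], float]:
        """mean_rt_ms maps (load, modality) -> mean reaction time in ms."""
        benefits = {}
        for load in ("low", "high"):
            av = mean_rt_ms[(load, "audiovisual")]
            for modality in ("visual", "auditory"):
                benefits[(load, modality)] = mean_rt_ms[(load, modality)] - av
        return benefits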
Key words: working memory load / multisensory enhancement / visual / auditory / attention
Funding
* This research was supported by the General Program of the National Natural Science Foundation of China (32371113) and the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (22wklj04).