Journal of Psychological Science ›› 2021, Vol. 44 ›› Issue (4): 844-849.


Differences in Facial Processing Patterns in the Speechreading of Hearing-Impaired Students: Evidence from Eye Movements

  

  • Received:2019-09-09 Revised:2020-06-23 Online:2021-07-20 Published:2021-07-20
  • Contact: Jiang-Hua LEI

Jiang-Hua LEI, Ran XIAO, Fen ZHANG, Hui-Na GONG, Jia-Lu FAN

  1. Central China Normal University
  • Corresponding author: Jiang-Hua LEI

Abstract: The acquisition and comprehension of speechreading depend on the storage and processing of facial information. However, it remains unclear whether the facial processing involved in speechreading relies on local or holistic processing. Three main theoretical hypotheses have been proposed: the Compensatory Strategy, which supports local processing; the Gaze Direction Assumption, which supports parallel processing; and the Social-Tuning Pattern, which supports holistic processing. Given the contradictions among current accounts of facial processing in speechreading, this study explored which facial processing patterns hearing-impaired students use in Chinese speechreading and which patterns are adopted by hearing-impaired students with different speechreading abilities. It is the first study to examine the facial processing patterns of Chinese speechreading in hearing-impaired participants. Specifically, it analyzed local processing of the eye and mouth regions during the prespeech, speech, and postspeech stages for hearing-impaired students with different speechreading abilities. The experiment adopted a three-factor mixed design: 2 (face area: eye, mouth) × 3 (discourse stage: prespeech, speech, postspeech) × 2 (sentence speechreading ability: low-speechreading group, high-speechreading group). Hearing-impaired students aged 15 to 20 were recruited from two schools for the deaf; the experimental program was built with Experiment Builder, and an EyeLink 1000 Plus eye tracker recorded participants' eye-movement and behavioral data. The results were as follows: (1) During the prespeech and postspeech stages, all eye-movement indexes were significantly higher in the eye region than in the mouth region, whereas during the speech stage this pattern reversed, with indexes significantly higher in the mouth region than in the eye region. (2) In the eye region during the speech stage and in the mouth region during the postspeech stage, hearing-impaired students with different speechreading abilities differed significantly in IA Dwell Time %, IA Fixation %, and Average Fix Pupil Size. (3) The Social-Tuning Score of the high-speechreading group (M = 0.56) was significantly higher than that of the low-speechreading group (M = 0.42). These results indicate that hearing-impaired students adopted a Social-Tuning pattern, manifested as an "eye-mouth-eye" sequence of facial processing. The high-speechreading group showed higher holistic processing efficiency and a stronger ability to process eye and mouth information in parallel: when mouth information ceased, they could disengage from the mouth and quickly shift attention to the eye region. This facial processing pattern therefore favors the Gaze Direction Assumption and the Social-Tuning Pattern. The low-speechreading group also showed a Social-Tuning pattern, but with low holistic processing efficiency; they attended excessively to the mouth region and failed to extract useful information from the eye region.
Chinese speechreading requires not only the extraction of speech information but also extralinguistic information such as tone, rhythm, and facial expression, as well as social information such as emotion and intention. Consequently, at the cost of losing eye information, a local processing pattern failed to provide hearing-impaired students with low speechreading abilities with better speech compensation.
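The abstract compares eye-movement indexes (e.g., IA Dwell Time %) across face regions, discourse stages, and speechreading groups. As a rough illustration of how such interest-area data might be aggregated and contrasted, the Python sketch below assumes a hypothetical CSV export with columns group, stage, ia_region, and ia_dwell_time_pct; it is not the authors' analysis code, and the group t-test is only a simplified stand-in for the reported comparisons.

    # Illustrative sketch only: file and column names are hypothetical,
    # not the authors' analysis pipeline.
    import pandas as pd
    from scipy import stats

    # One row per participant x face region x discourse stage, with the
    # interest-area dwell-time percentage exported from the eye tracker.
    df = pd.read_csv("speechreading_ia_report.csv")

    # Mean dwell-time percentage per group, stage, and face region, mirroring
    # the 2 (face area) x 3 (discourse stage) x 2 (group) mixed design.
    summary = (
        df.groupby(["group", "stage", "ia_region"])["ia_dwell_time_pct"]
          .agg(["mean", "std", "count"])
          .round(3)
    )
    print(summary)

    # Example contrast: high- vs. low-speechreading group on the eye region
    # during the speech stage (a simplified version of one reported effect).
    eye_speech = df[(df["ia_region"] == "eye") & (df["stage"] == "speech")]
    high = eye_speech.loc[eye_speech["group"] == "high", "ia_dwell_time_pct"]
    low = eye_speech.loc[eye_speech["group"] == "low", "ia_dwell_time_pct"]
    t, p = stats.ttest_ind(high, low, equal_var=False)  # Welch's t-test
    print(f"Eye region, speech stage: t = {t:.2f}, p = {p:.3f}")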

Key words: students with hearing impairment, speechreading ability, facial processing, eye movement

Abstract: To explore differences in the facial processing patterns used in speechreading by hearing-impaired students with high and low speechreading comprehension abilities, this study used a video-picture matching paradigm combined with eye tracking to examine prespeech, speech, and postspeech processing as well as holistic facial processing in the high- and low-speechreading groups. The results showed that, although both groups displayed a Social-Tuning pattern, the high-speechreading group had higher Social-Tuning scores and longer dwell times on the eye region. This indicates that high-ability speechreaders are stronger at holistic processing and at processing eye and mouth information in parallel, supporting the Gaze Direction Assumption and the Social-Tuning Pattern, whereas low-ability speechreaders showed low holistic processing efficiency, relied more on the mouth, and failed to obtain good compensation through a compensatory strategy.

Key words: students with hearing impairment, speechreading comprehension ability, facial processing, eye movement