Differences in Facial Processing Patterns on Speechreading of Hearing-impaired Students: Evidence from Eye Movements
2021, 44(4): 844-849.
The acquisition and comprehension of speechreading depend on the storage and processing of facial information. However, it remains unknown whether the facial processing underlying speechreading relies on local or holistic processing. To date, three main theoretical hypotheses have been proposed: the Compensatory Strategy, which supports local processing; the Gaze Direction Assumption, which supports parallel processing; and the Social-Tuning Pattern, which supports holistic processing.
Given the contradictions among current accounts of facial processing in speechreading, this study aimed to explore which facial processing patterns hearing-impaired students use in Chinese speechreading, and which patterns are adopted by hearing-impaired students with different speechreading abilities. This is the first study to focus on the facial processing patterns of Chinese speechreading in hearing-impaired participants. Specifically, it further analyzed local processing of the eye and mouth regions during the prespeech, speech, and postspeech stages for hearing-impaired students with different speechreading abilities. The experiment adopted a three-factor mixed design: 2 (face area: eye, mouth) × 3 (discourse stage: prespeech, speech, postspeech) × 2 (sentence speechreading ability: low-speechreading group, high-speechreading group). Participants were hearing-impaired students aged 15 to 20 recruited from two schools for the deaf. The experimental programs were built with Experiment Builder software, and an EyeLink 1000 Plus eye tracker recorded participants' eye movement and behavioral data.
The results were as follows: (1) During the prespeech and postspeech stages, all eye movement indices in the eye regions were significantly higher than those in the mouth regions, whereas the opposite pattern was observed during the speech stage, in which the indices in the mouth regions were significantly higher than those in the eye regions. (2) In the eye regions during the speech stage and the mouth regions during the postspeech stage, significant differences were found in IA Dwell Time %, IA Fixation %, and Average Fixation Pupil Size between hearing-impaired students with different speechreading abilities. (3) The Social-Tuning Score of the high-speechreading group (M = 0.56) was significantly higher than that of the low-speechreading group (M = 0.42).
These results indicate that hearing-impaired students adopted the Social-Tuning Pattern, manifested as an "eye-mouth-eye" facial processing sequence. The high-speechreading group showed higher holistic processing efficiency and a stronger ability to process information from the eyes and mouth in parallel. In other words, when mouth information ceased, they could disengage their attention from the mouth and shift it quickly to the eye regions; this facial processing pattern therefore tends to support the Gaze Direction Assumption and the Social-Tuning Pattern. Although the low-speechreading group also showed the Social-Tuning Pattern, their holistic processing efficiency was quite low: they paid excessive attention to the mouth regions and failed to extract useful information from the eye regions. Chinese speechreading requires not only the extraction of speech information but also the grasp of paralinguistic information such as tone, rhythm, and facial expression, as well as social information such as emotion and intention. Therefore, at the cost of losing eye information, the local processing pattern failed to help hearing-impaired students with low speechreading ability obtain better speech compensation.