Advances of Eye Movement Data Analysis in Face Processing

Wang Lihui, Liu Meng, Wang Zhenni

Journal of Psychological Science ›› 2025, Vol. 48 ›› Issue (2) : 268-279. DOI: 10.16719/j.cnki.1671-6981.20250202
General Psychology, Experimental Psychology & Ergonomics


Abstract

Eye-tracking has long been a classic and popular research method in psychological studies. Traditional analysis of eye-movement data focuses mainly on the spatial distribution and the duration of fixations. In the current review, we use eye movements in face processing as an example to introduce new methods of data analysis that have been developed in recent years.
In the first part of the review, we briefly introduce the traditional methods of data analysis and discuss their limitations. The main traditional approach is to gather the fixations made during face processing and plot their distribution as a heatmap, highlighting the facial regions critical for information processing. However, in the spatial domain, the boundaries of the regions of interest (ROIs) are often poorly defined, limiting the ability to obtain precise quantitative results and to draw conclusive statistical inferences; in the temporal domain, the dependencies between sequential fixations are often left unquantified.
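The traditional heatmap approach described above can be sketched in a few lines: each fixation contributes a duration-weighted Gaussian kernel to a density map. This is a minimal pure-Python illustration; the 20×20 grid, the fixation coordinates, and the durations are invented for the example.

```python
import math

def fixation_heatmap(fixations, width, height, sigma=2.0):
    """Accumulate a duration-weighted Gaussian kernel at each fixation,
    producing the fixation-density map that underlies a heatmap plot."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return grid

# Invented fixations (x, y, duration in ms) on a hypothetical 20x20 face
# image, clustered around a supposed left-eye region.
fixations = [(6, 5, 200), (7, 5, 150), (13, 5, 180), (10, 12, 90)]
heat = fixation_heatmap(fixations, 20, 20)
peak_value, peak_xy = max(
    (v, (x, y)) for y, row in enumerate(heat) for x, v in enumerate(row)
)
```

The density peak falls where fixations cluster, but note the limitation raised above: the map itself gives no principled ROI boundary around that peak.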
In the second and major part of the review, we discuss how applying machine learning and computational modeling to eye-movement data can advance the understanding of the cognitive mechanisms of visual processing. Based on recently published work, we introduce three new methods for eye-movement data analysis in face processing, elucidating their technical implementation, the open-source toolkits that support them, and the scientific questions and statistical inferences they address. The first method combines machine-learning approaches to quantify the clustering of fixations and to define accurate boundaries of the face ROIs. In contrast to the intuitive fixation densities shown by traditional heatmaps, these approaches yield quantitatively separated fixation clusters and specific landmarks for the face ROIs. The second method takes into account the multidimensional features of the eye-movement data to reveal structural patterns of visual processing. A model trained on multivariate eye-movement features can be used to recognize and predict specific patterns of eye movements; importantly, it can also recognize and predict patterns of microsaccades, which traditional methods do not cover. Representational similarity analysis can further provide quantitative distinctions between different patterns of eye-movement data. The third method concerns modeling the dependencies between fixation sequences. The hidden Markov model quantifies the transition probabilities between fixation clusters to capture the statistical dependencies between sequential fixations. Based on the model, individual-level eye-movement strategies can be distinguished, and the orderliness of an eye-movement pattern can be quantified in terms of entropy.
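The clustering step of the first method can be sketched with plain k-means on 2D fixation coordinates. Published pipelines often use richer models (e.g., Gaussian mixtures, as in the EMHMM toolkit); the face landmarks and jittered fixations below are simulated for illustration, and the farthest-point seeding is just a deterministic initialization choice.

```python
import math
import random

def farthest_point_init(points, k):
    """Deterministic seeding: start from the first point, then repeatedly
    add the point farthest from all chosen centers."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=25):
    """Plain Lloyd's k-means: assign each fixation to its nearest center,
    then move each center to the mean of its cluster."""
    centers = farthest_point_init(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Invented face landmarks (left eye, right eye, mouth) with simulated
# fixations jittered around them.
rng = random.Random(1)
landmarks = [(30.0, 40.0), (70.0, 40.0), (50.0, 80.0)]
fixations = [(cx + rng.gauss(0, 3), cy + rng.gauss(0, 3))
             for cx, cy in landmarks for _ in range(20)]
centers, clusters = kmeans(fixations, k=3)
```

Unlike a heatmap, the recovered cluster centers give explicit, quantitative ROI landmarks whose locations can be compared across participants or conditions.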
The most recent development, built on modern artificial intelligence (AI) techniques, uses the state-of-the-art Transformer architecture to train on and predict fixation sequences.
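The core quantities of the hidden-Markov-model analysis described above, transition probabilities between fixation clusters and the entropy of the resulting scanpath, can be sketched in a simplified form. This sketch treats cluster labels as directly observed states rather than fitting a full HMM with EM (as the EMHMM toolkit does); the ROI labels and sequences are invented for the example.

```python
import math
from collections import Counter

def transition_matrix(sequence, states):
    """Estimate first-order transition probabilities between fixation
    clusters from an observed sequence of ROI labels."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    matrix = {}
    for s in states:
        total = sum(pair_counts[(s, t)] for t in states)
        matrix[s] = {t: (pair_counts[(s, t)] / total if total else 0.0)
                     for t in states}
    return matrix

def entropy_rate(sequence, states):
    """H = -sum_s pi(s) sum_t P(t|s) log2 P(t|s), with pi taken as the
    empirical state frequencies. Lower values indicate a more stereotyped
    (ordered) scanpath; higher values indicate more exploratory scanning."""
    matrix = transition_matrix(sequence, states)
    freq = Counter(sequence)
    n = len(sequence)
    h = 0.0
    for s in states:
        for p in matrix[s].values():
            if p > 0:
                h -= (freq[s] / n) * p * math.log2(p)
    return h

states = ("eyes", "nose", "mouth")
stereotyped = ["eyes", "nose", "mouth"] * 10   # fixed scanning routine
exploratory = ["eyes", "nose", "eyes", "mouth", "nose",
               "eyes", "mouth", "eyes", "nose", "mouth"] * 3
```

A perfectly repetitive scanpath has an entropy rate of zero, while mixed transitions raise it, which is the sense in which entropy indexes the orderliness of an individual's eye-movement strategy.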
In the third part of the review, we summarize how the advances in eye-movement data analysis benefit both basic research and clinical applications. Specifically, we highlight that the traditional methods, which are largely theory-driven, and the newly developed methods, which are largely data-driven, should not be treated as mutually exclusive. Instead, the two types of methods are complementary in advancing the understanding of face processing, and the good practice of combining them will benefit future studies. Although the current work focuses on face processing, the introduced methods reflect how visual information is obtained and processed in general; the basic principles can be generalized, and the methods can be applied to other areas such as memory, text reading, and the identification of various mental disorders. In combination with booming AI techniques, the ongoing development of eye-movement data analysis will further advance these investigations. Although the current work centers on the relation between face images and eye movements, the methods can also help to elucidate the neural mechanisms of face processing by modeling the function linking eye movements to neural activity during face processing. In summary, the current work provides new perspectives and methodological foundations for both basic research and applications of eye tracking.

Key words

eye tracking / machine learning / spatiotemporal characteristics / face processing

Cite this article

Wang Lihui, Liu Meng, Wang Zhenni. Advances of Eye Movement Data Analysis in Face Processing[J]. Journal of Psychological Science. 2025, 48(2): 268-279 https://doi.org/10.16719/j.cnki.1671-6981.20250202
