Relationship Between Facial Areas With the Greatest Increase in Non-local Contrast and Gaze Fixations in Recognizing Emotional Expressions




face, emotion, eye movements, nonlocal contrast, second-order visual mechanisms


The aim of our study was to compare gaze fixations during the recognition of facial emotional expressions with the spatial distribution of the areas showing the greatest increase in total (nonlocal) luminance contrast. We hypothesized that the most informative areas of the image, which attract most of the observer’s attention, are those with the greatest increase in nonlocal contrast. The study involved 100 university students aged 19-21 with normal vision. 490 full-face photographs were used as stimuli. The images displayed faces expressing the 6 basic emotions (Ekman’s Big Six) as well as neutral (emotionless) expressions. Observers’ eye movements were recorded while they recognized the expressions of the presented faces. Then, using purpose-built software, the areas with the highest (max), lowest (min), and intermediate (med) increases in total contrast relative to their surroundings were identified in the stimulus images at different spatial frequencies. A comparative analysis of the gaze maps against the maps of the areas with min, med, and max increases in total contrast showed that gaze fixations in facial emotion classification tasks coincide significantly with the areas characterized by the greatest increase in nonlocal contrast. The obtained results indicate that facial image areas with the greatest increase in total contrast, which are detected preattentively by second-order visual mechanisms, can be the prime targets of attention.
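The second-order (nonlocal) contrast described above is commonly modeled with a filter-rectify-filter (FRF) cascade: a first-order band-pass stage, rectification, and a coarser second-order stage that pools the rectified response. The sketch below is a minimal, hypothetical illustration of this idea and of comparing the resulting map with a fixation-density map; the function names, filter scales, and the Pearson-correlation comparison are our assumptions, not the authors' published software.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonlocal_contrast_map(img, carrier_sigma=1.0, envelope_sigma=8.0):
    """FRF sketch of second-order (nonlocal) contrast.

    1. First-order stage: band-pass the image (difference of Gaussians)
       to extract luminance variation at the carrier scale.
    2. Rectify: take the absolute value of the band-pass response.
    3. Second-order stage: smooth the rectified response at a coarser
       scale, yielding a map of local contrast energy.
    """
    band = gaussian_filter(img, carrier_sigma) - gaussian_filter(img, 2 * carrier_sigma)
    return gaussian_filter(np.abs(band), envelope_sigma)

def map_similarity(fix_map, sal_map):
    """Pearson correlation between a fixation-density map and a contrast map."""
    a = (fix_map - fix_map.mean()).ravel()
    b = (sal_map - sal_map.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic demo: a high-contrast textured patch on a uniform field.
rng = np.random.default_rng(0)
img = np.full((128, 128), 0.5)
img[48:80, 48:80] += 0.4 * rng.standard_normal((32, 32))

cmap = nonlocal_contrast_map(img)
peak = np.unravel_index(np.argmax(cmap), cmap.shape)

# A toy "fixation map": a Gaussian blob centered on the textured patch.
yy, xx = np.mgrid[0:128, 0:128]
fix = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 12.0 ** 2))
sim = map_similarity(fix, cmap)
```

Varying `carrier_sigma` probes different carrier spatial frequencies, mirroring the multi-frequency analysis described in the abstract; in this toy example the contrast-energy peak falls inside the textured patch, and the fixation/contrast correlation is positive.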




Açık, A., Onat, S., Schumann, F., Einhäuser, W., & König, P. (2009). Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories. Vision research, 49(12), 1541-1553.

Allen, P. A., Lien, M. C., & Jardin, E. (2017). Age-related emotional bias in processing two emotionally valenced tasks. Psychological research, 81(1), 289-308.

Atkinson, A. P., & Smithson, H. E. (2020). The impact on emotion classification performance and gaze behavior of foveal versus extrafoveal processing of facial features. Journal of experimental psychology: Human perception and performance, 46(3), 292–312.

Babenko, V. V., & Ermakov, P. N. (2015). Specificity of brain reactions to second-order visual stimuli. Visual neuroscience, 32.

Babenko, V. V., Ermakov, P. N., & Bozhinskaya, M. A. (2010). Relationship between the Spatial-Frequency Tunings of the First-and the Second-Order Visual Filters. Psikhologicheskii Zhurnal, 31(2), 48-57. (In Russian).

Babenko, V.V. (1989). A new approach to the problem of visual perception mechanisms. In Problems of Neurocybernetics, ed. Kogan, A. B., pp. 10–11. Rostov-on-Don, USSR: Rostov University Pub. (In Russian).

Belousova, A., & Belousova, E. (2020). Gnostic emotions of students in solving of thinking tasks. International Journal of Cognitive Research in Science, Engineering and Education, 8(2), 27-34.

Bergen, J. R., & Julesz, B. (1983). Parallel versus serial processing in rapid pattern discrimination. Nature, 303(5919), 696-698.

Betts, L. R., & Wilson, H. R. (2010). Heterogeneous structure in face-selective human occipito-temporal cortex. Journal of Cognitive Neuroscience, 22(10), 2276-2288.

Bindemann, M., Scheepers, C., & Burton, A. M. (2009). Viewpoint and center of gravity affect eye movements to human faces. Journal of vision, 9(2), 1-16.

Bindemann, M., Scheepers, C., Ferguson, H. J., & Burton, A. M. (2010). Face, body, and center of gravity mediate person detection in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 36(6), 1477.

Bombari, D., Mast, F. W., & Lobmaier, J. S. (2009). Featural, configural, and holistic face-processing strategies evoke different scan patterns. Perception, 38(10), 1508-1521.

Bruce, N. D., & Tsotsos, J. K. (2005). Saliency based on information maximization. Advances in Neural Information Processing Systems, 18, 155-162.

Budanova, I. (2021). The Dark Triad of personality in psychology students and eco-friendly behavior. In E3S Web of Conferences (Vol. 273, p. 10048). EDP Sciences.

Butler, S., Blais, C., Gosselin, F., Bub, D., & Fiset, D. (2010). Recognizing famous people. Attention, Perception, & Psychophysics, 72(6), 1444-1449.

Bylinskii, Z., Judd, T., Oliva, A., Torralba, A., & Durand, F. (2018). What do different evaluation metrics tell us about saliency models?. IEEE transactions on pattern analysis and machine intelligence, 41(3), 740-757.

Cabeza, R., & Kato, T. (2000). Features are also important: Contributions of featural and configural processing to face recognition. Psychological science, 11(5), 429-433.

Cauchoix, M., Barragan-Jason, G., Serre, T., & Barbeau, E. J. (2014). The neural dynamics of face detection in the wild revealed by MVPA. Journal of Neuroscience, 34(3), 846-854.

Chubb, C., & Sperling, G. (1989). Two motion perception mechanisms revealed through distance-driven reversal of apparent motion. Proceedings of the National Academy of Sciences, 86(8), 2985-2989.

Collin, C. A., Rainville, S., Watier, N., & Boutet, I. (2014). Configural and featural discriminations use the same spatial frequencies: A model observer versus human observer analysis. Perception, 43(6), 509-526.

Collishaw, S. M., & Hole, G. J. (2000). Featural and configurational processes in the recognition of faces of different familiarity. Perception, 29(8), 893-909.

Comfort, W. E., & Zana, Y. (2015). Face detection and individuation: Interactive and complementary stages of face processing. Psychology & Neuroscience, 8(4), 442.

Crouzet, S. M., & Thorpe, S. J. (2011). Low-level cues and ultra-fast face detection. Frontiers in psychology, 2, 342.

Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: face detection in just 100 ms. Journal of vision, 10(4), 16-16.

Dakin, S. C., & Mareschal, I. (2000). Sensitivity to contrast modulation depends on carrier spatial frequency and orientation. Vision research, 40(3), 311-329.

Einhäuser, W., & König, P. (2003). Does luminance-contrast contribute to a saliency map for overt visual attention?. European Journal of Neuroscience, 17(5), 1089-1097.

Einhäuser, W., Rutishauser, U., Frady, E. P., Nadler, S., König, P., & Koch, C. (2006). The relation of phase noise and luminance contrast to overt attention in complex visual stimuli. Journal of vision, 6(11), 1-1.

Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: scanning emotional facial expressions. Emotion, 11(4), 860-865.

Ekman, P. (1992). An argument for basic emotions. Cognition & emotion, 6(3-4), 169-200.

Fodor, J. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, Mass: MIT Press.

Fodor, J. A. (2000). The mind doesn’t work that way: The scope and limits of computational psychology. MIT Press.

Frey, H. P., König, P., & Einhäuser, W. (2007). The role of first-and second-order stimulus features for human overt attention. Perception & Psychophysics, 69(2), 153-161.

Fuchs, I., Ansorge, U., Redies, C., & Leder, H. (2011). Salience in paintings: bottom-up influences on eye fixations. Cognitive Computation, 3(1), 25-36.

Gao, D., Han, S., & Vasconcelos, N. (2009). Discriminant saliency, the detection of suspicious coincidences, and applications to visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(6), 989-1005.

Gao, D., & Vasconcelos, N. (2007). Bottom-up saliency is a discriminant process. In 2007 IEEE International Conference on Computer Vision. IEEE. 4408851

Graham, N. V. (2011). Beyond multiple pattern analyzers modeled as linear filters (as classical V1 simple cells): Useful additions of the last 25 years. Vision research, 51(13), 1397-1430.

Guyader, N., Chauvin, A., Boucart, M., & Peyrin, C. (2017). Do low spatial frequencies explain the extremely fast saccades towards human faces?. Vision research, 133, 100-111.

Harris, A., & Aguirre, G. K. (2008). The representation of parts and wholes in face-selective cortex. Journal of Cognitive Neuroscience, 20(5), 863-878.

Honey, C., Kirchner, H., & VanRullen, R. (2008). Faces in the cloud: Fourier power spectrum biases ultrarapid face detection. Journal of vision, 8(12), 9-9.

Hou, W., Gao, X., Tao, D., & Li, X. (2013). Visual saliency detection using information divergence. Pattern Recognition, 46(10), 2658-2669.

Hou, X., & Zhang, L. (2007, June). Saliency detection: A spectral residual approach. In 2007 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-8). IEEE.

Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature reviews neuroscience, 2(3), 194-203.

Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence, 20(11), 1254-1259.

Kanwisher, N. (2000). Domain specificity in face perception. Nature neuroscience, 3(8), 759-763.

Kingdom, F. A., & Keeble, D. R. (1999). On the mechanism for scale invariance in orientation-defined textures. Vision Research, 39(8), 1477-1489.

Kingdom, F.A.A., Prins, N., & Hayes, A. (2003). Mechanism independence for texture-modulation detection is consistent with a filter-rectify-filter mechanism. Vis. Neurosci., 20, 65-76.

Kosonogov, V., Vorobyeva, E., Kovsh, E., & Ermakov, P. (2019). A review of neurophysiological and genetic correlates of emotional intelligence. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), 7(1), 137–142.

Landy, M. S., & Oruç, I. (2002). Properties of second-order spatial frequency channels. Vision research, 42(19), 2311-2329.

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H., Hawk, S. T., & Van Knippenberg, A. D. (2010). Presentation and validation of the Radboud Faces Database. Cognition and emotion, 24(8), 1377-1388.

Leder, H., & Bruce, V. (1998). Local and Relational Aspects of Face Distinctiveness. The Quarterly Journal of Experimental Psychology Section A, 51(3), 449–473.

Li, G., Yao, Z., Wang, Z., Yuan, N., Talebi, V., Tan, J., ... & Baker, C. L. (2014). Form-cue invariant second-order neuronal responses to contrast modulation in primate area V2. Journal of Neuroscience, 34(36), 12081-12092.

Liu, J., Harris, A., & Kanwisher, N. (2002). Stages of processing in face perception: an MEG study. Nature neuroscience, 5(9), 910-916.

Liu, J., Harris, A., & Kanwisher, N. (2010). Perception of face parts and face configurations: an fMRI study. Journal of cognitive neuroscience, 22(1), 203-211.

Liu, J., Higuchi, M., Marantz, A., & Kanwisher, N. (2000). The selectivity of the occipitotemporal M170 for faces. Neuroreport, 11(2), 337-341.

Liu, L., & Ioannides, A. A. (2010). Emotion separation is completed early and it depends on visual field presentation. PloS one, 5(3), e9790.

Lobmaier, J. S., Klaver, P., Loenneker, T., Martin, E., & Mast, F. W. (2008). Featural and configural face processing strategies: evidence from a functional magnetic resonance imaging study. Neuroreport, 19(3), 287-291.

Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, 91(630), 2-2.

Luria, S. M., & Strauss, M. S. (1978). Comparison of Eye Movements over Faces in Photographic Positives and Negatives. Perception, 7(3), 349–358.

Marat, S., Rahman, A., Pellerin, D., Guyader, N., & Houzet, D. (2013). Improving visual saliency by adding ‘face feature map’and ‘center bias’. Cognitive Computation, 5(1), 63-75.

Meinhardt-Injac, B., Persike, M., & Meinhardt, G. (2010). The time course of face matching by internal and external features: Effects of context and inversion. Vision Research, 50(16), 1598-1611.

Mertens, I., Siegmund, H., & Grüsser, O. J. (1993). Gaze motor asymmetries in the perception of faces during a memory task. Neuropsychologia, 31(9), 989-998.

Näsänen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision research, 39(23), 3824-3833.

Olszanowski, M., Pochwatko, G., Kuklinski, K., Scibor-Rylski, M., Lewinski, P., & Ohme, R. K. (2015). Warsaw set of emotional facial expression pictures: a validation study of facial display photographs. Frontiers in psychology, 5, 1516.

Pantic, M., Valstar, M., Rademaker, R., & Maat, L. (2005, July). Web-based database for facial expression analysis. In 2005 IEEE international conference on multimedia and Expo (pp. 5-pp). IEEE.

Pele, O., & Werman, M. (2009, September). Fast and robust earth mover’s distances. In 2009 IEEE 12th international conference on computer vision (pp. 460-467). IEEE.

Perazzi, F., Krähenbühl, P., Pritch, Y., & Hornung, A. (2012, June). Saliency filters: Contrast based filtering for salient region detection. In 2012 IEEE conference on computer vision and pattern recognition (pp. 733-740). IEEE.

Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences, 109(48), E3314-E3323.

Reddy, L., Wilken, P., & Koch, C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of vision, 4(2), 106-117.

Reynaud, A., & Hess, R. F. (2012). Properties of spatial channels underlying the detection of orientation-modulations. Experimental brain research, 220(2), 135-145.

Rivolta, D. (2014). Cognitive and neural aspects of face processing. In Prosopagnosia (pp. 19-40). Springer, Berlin, Heidelberg.

Rossion, B., Dricot, L., Devolder, A., Bodart, J. M., Crommelinck, M., Gelder, B. D., & Zoontjes, R. (2000). Hemispheric asymmetries for whole-based and part-based face processing in the human fusiform gyrus. Journal of cognitive neuroscience, 12(5), 793-802.

Royer, J., Blais, C., Charbonneau, I., Déry, K., Tardif, J., Duchaine, B., ... & Fiset, D. (2018). Greater reliance on the eye region predicts better face recognition ability. Cognition, 181, 12-20.

Ruiz-Soler, M., & Beltran, F. S. (2006). Face perception: An integrative review of the role of spatial frequencies. Psychological Research, 70(4), 273-292.

Schwaninger, A., Lobmaier, J. S., & Collishaw, S. M. (2002). Role of featural and configural information in familiar and unfamiliar face recognition. Lecture Notes in Computer Science, 2525, 643–650. Springer, Berlin, Heidelberg.

Skirtach, I. A., Klimova, N. M., Dunaev, A. G., & Korkhova, V. A. (2019). Effects of rational psychotherapy on emotional state and cognitive attitudes of patients with neurotic disorders. Trends in the development of psycho-pedagogical education in the conditions of transitional society (ICTDPP-2019), 09011.

Smith, M. L., Volna, B., & Ewing, L. (2016). Distinct information critically distinguishes judgments of face familiarity and identity. Journal of Experimental Psychology: Human Perception and Performance, 42(11), 1770.

Sun, P., & Schofield, A. J. (2011). The efficacy of local luminance amplitude in disambiguating the origin of luminance signals depends on carrier frequency: Further evidence for the active role of second-order vision in layer decomposition. Vision research, 51(5), 496-507.

Sutter, A., Beck, J., & Graham, N. (1989). Contrast and spatial variables in texture segregation: Testing a simple spatial-frequency channels model. Perception & psychophysics, 46(4), 312-332.

Sutter, A., Sperling, G., & Chubb, C. (1995). Measuring the spatial frequency selectivity of second-order texture mechanisms. Vision Research, 35(7), 915–924.

Tamietto, M., & De Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nature Reviews Neuroscience, 11(10), 697-709.

Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of vision, 7(14).

Theeuwes, J. (2010). Top–down and bottom–up control of visual selection. Acta psychologica, 135(2), 77-99.

Theeuwes, J. (2014). Spatial orienting and attentional capture. The Oxford handbook of attention, 231-252.

Valenti, R., Sebe, N., & Gevers, T. (2009, September). Image saliency by isocentric curvedness and color. In 2009 IEEE 12th international conference on Computer vision (pp. 2185-2192). IEEE.

Vorobyeva, E., Hakunova, F., Skirtach, I., & Kovsh, E. (2019). A review of current research on genetic factors associated with the functioning of the perceptual and emotional systems of the brain. In SHS Web of Conferences (Vol. 70, p. 09009). EDP Sciences.

Vuilleumier, P. (2002). Facial expression and selective attention. Current Opinion in Psychiatry, 15(3), 291-300.

Willenbockel, V., Fiset, D., Chauvin, A., Blais, C., Arguin, M., Tanaka, J. W., ... & Gosselin, F. (2010). Does face inversion change spatial frequency tuning?. Journal of Experimental Psychology: Human Perception and Performance, 36(1), 122.

Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological science, 17(7), 592-598.

Wu, J., Qi, F., Shi, G., & Lu, Y. (2012). Non-local spatial redundancy reduction for bottom-up saliency estimation. Journal of Visual Communication and Image Representation, 23(7), 1158-1166.

Xia, C., Qi, F., Shi, G., & Wang, P. (2015). Nonlocal center–surround reconstruction-based bottom-up saliency estimation. Pattern Recognition, 48(4), 1337-1348.



How to Cite

Babenko, V., Yavna, D., Vorobeva, E., Denisova, E., Ermakov, P., & Kovsh, E. (2021). Relationship Between Facial Areas With the Greatest Increase in Non-local Contrast and Gaze Fixations in Recognizing Emotional Expressions. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), 9(3), 359–368.