Relationship between facial areas with the greatest increase in non-local contrast and gaze fixations in recognizing emotional expressions

Authors: Vitaliy Babenko, Denis Yavna, Elena Vorobeva, Ekaterina Denisova, Pavel Ermakov, Ekaterina Kovsh

Journal: International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE)

Section: Original research

Issue: Vol. 9, No. 3, 2021.

Free access

The aim of our study was to compare gaze fixations during the recognition of facial emotional expressions with the spatial distribution of the areas showing the greatest increase in total (non-local) luminance contrast. We hypothesized that the most informative areas of an image, which attract more of the observer's attention, are those with the greatest increase in non-local contrast. The study involved 100 university students aged 19-21 with normal vision. 490 full-face photographs were used as stimuli. The images displayed faces expressing the six basic emotions (Ekman's Big Six) as well as neutral (emotionless) expressions. Observers' eye movements were recorded while they recognized the expressions of the faces shown. Then, using software developed for this purpose, the areas with the highest (max), lowest (min), and intermediate (med) increases in total contrast relative to their surroundings were identified in the stimulus images at different spatial frequencies. Comparative analysis of the gaze maps against the maps of areas with min, med, and max increases in total contrast showed that gaze fixations in facial emotion classification tasks coincide significantly with the areas characterized by the greatest increase in non-local contrast. The results indicate that facial image areas with the greatest increase in total contrast, which are detected preattentively by second-order visual mechanisms, may be the prime targets of attention.
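
The abstract only outlines the contrast analysis, so the Python sketch below illustrates, under stated assumptions, one plausible way to compute a non-local (second-order) contrast-increase map via a filter-rectify-filter cascade and to score its overlap with a gaze-fixation map. The function names (nonlocal_contrast_increase, fixation_overlap), filter scales, and overlap metric are hypothetical illustrations and do not reproduce the authors' software.

    # Hypothetical sketch, not the authors' published method: estimate a
    # "non-local contrast increase" map and its overlap with a fixation map.
    # All scales, names, and the overlap metric are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nonlocal_contrast_increase(img, carrier_sigma=2.0, envelope_sigma=16.0):
        """img: 2-D grayscale array scaled to [0, 1]."""
        # First-order stage: band-pass filtering (difference of Gaussians)
        # selects one carrier spatial frequency.
        bandpass = gaussian_filter(img, carrier_sigma) - gaussian_filter(img, 2.0 * carrier_sigma)
        # Rectification: local luminance-contrast energy.
        energy = bandpass ** 2
        # Second-order stage: pool contrast energy over a larger envelope.
        local_contrast = gaussian_filter(energy, envelope_sigma)
        # Non-local comparison: contrast of each area relative to its surround;
        # positive values mark areas whose total contrast exceeds the surround.
        surround = gaussian_filter(local_contrast, 4.0 * envelope_sigma)
        return local_contrast - surround

    def fixation_overlap(contrast_map, fixation_map, top_fraction=0.1):
        """Share of fixation density falling in the top `top_fraction` of the map."""
        threshold = np.quantile(contrast_map, 1.0 - top_fraction)
        mask = contrast_map >= threshold
        return float(fixation_map[mask].sum() / fixation_map.sum())

Repeating the first stage with several carrier_sigma values would approximate the "different spatial frequencies" mentioned in the abstract, and the max, med, and min areas could then be read off as the upper, middle, and lower regions of the resulting map.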


Keywords: Face, emotion, eye movements, non-local contrast, second-order visual mechanisms

Короткий адрес: https://sciup.org/170198640

IDR: 170198640   |   DOI: 10.23947/2334-8496-2021-9-3-359-368

References

  • Açık, A., Onat, S., Schumann, F., Einhäuser, W., & König, P. (2009). Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories. Vision research, 49(12), 1541-1553. https://doi.org/10.1016/j.visres.2009.03.011
  • Allen, P. A., Lien, M. C., & Jardin, E. (2017). Age-related emotional bias in processing two emotionally valenced tasks. Psychological research, 81(1), 289-308. https://doi.org/10.1007/s00426-015-0711-8
  • Atkinson, A. P., & Smithson, H. E. (2020). The impact on emotion classification performance and gaze behavior of foveal versus extrafoveal processing of facial features. Journal of experimental psychology: Human perception and performance, 46(3), 292–312. https://doi.org/10.1037/xhp0000712
  • Babenko, V. V., & Ermakov, P. N. (2015). Specificity of brain reactions to second-order visual stimuli. Visual neuroscience, 32. https://doi.org/10.1017/S0952523815000085
  • Babenko, V. V., Ermakov, P. N., & Bozhinskaya, M. A. (2010). Relationship between the Spatial-Frequency Tunings of the First- and the Second-Order Visual Filters. Psikhologicheskii Zhurnal, 31(2), 48-57. (In Russian). https://www.elibrary.ru/download/elibrary_14280688_65866525.pdf
  • Babenko, V.V. (1989). A new approach to the problem of visual perception mechanisms. In Problems of Neurocybernetics, ed. Kogan, A. B., pp. 10–11. Rostov-on-Don, USSR: Rostov University Pub. (In Russian).
  • Belousova, A., & Belousova, E. (2020). Gnostic emotions of students in solving of thinking tasks. International Journal of Cognitive Research in Science, Engineering and Education, 8(2), 27-34. https://doi.org/10.5937/IJCRSEE2002027B
  • Bergen, J. R., & Julesz, B. (1983). Parallel versus serial processing in rapid pattern discrimination. Nature, 303(5919), 696-698. https://doi.org/10.1038/303696a0
  • Betts, L. R., & Wilson, H. R. (2010). Heterogeneous structure in face-selective human occipito-temporal cortex. Journal of Cognitive Neuroscience, 22(10), 2276-2288. https://doi.org/10.1162/jocn.2009.21346
  • Bindemann, M., Scheepers, C., & Burton, A. M. (2009). Viewpoint and center of gravity affect eye movements to human faces. Journal of vision, 9(2), 1-16. http://dx.doi.org/10.1167/9.2.7
  • Bindemann, M., Scheepers, C., Ferguson, H. J., & Burton, A. M. (2010). Face, body, and center of gravity mediate person detection in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 36(6), 1477. http://dx.doi.org/10.1037/a0019057
  • Bombari, D., Mast, F. W., & Lobmaier, J. S. (2009). Featural, configural, and holistic face-processing strategies evoke different scan patterns. Perception, 38(10), 1508-1521. https://doi.org/10.1068/p6117
  • Bruce, N. D., & Tsotsos, J. K. (2005). Saliency based on information maximization. In Advances in neural information processing systems, 18, 155-162. http://cs.umanitoba.ca/~bruce/NIPS2005_0081.pdf
  • Budanova, I. (2021). The Dark Triad of personality in psychology students and eco-friendly behavior. In E3S Web of Conferences (Vol. 273, p. 10048). EDP Sciences. https://doi.org/10.1051/e3sconf/202127310048
  • Butler, S., Blais, C., Gosselin, F., Bub, D., & Fiset, D. (2010). Recognizing famous people. Attention, Perception, & Psychophysics, 72(6), 1444-1449. https://doi.org/10.3758/APP.72.6.1444
  • Bylinskii, Z., Judd, T., Oliva, A., Torralba, A., & Durand, F. (2018). What do different evaluation metrics tell us about saliency models? IEEE transactions on pattern analysis and machine intelligence, 41(3), 740-757. https://doi.org/10.1109/TPAMI.2018.2815601
  • Cabeza, R., & Kato, T. (2000). Features are also important: Contributions of featural and configural processing to face recognition. Psychological science, 11(5), 429-433. https://doi.org/10.1111/1467-9280.00283
  • Cauchoix, M., Barragan-Jason, G., Serre, T., & Barbeau, E. J. (2014). The neural dynamics of face detection in the wild revealed by MVPA. Journal of Neuroscience, 34(3), 846-854. https://doi.org/10.1523/JNEUROSCI.3030-13.2014
  • Chubb, C., & Sperling, G. (1989). Two motion perception mechanisms revealed through distance-driven reversal of apparent motion. Proceedings of the National Academy of Sciences, 86(8), 2985-2989. https://doi.org/10.1073/pnas.86.8.2985
  • Collin, C. A., Rainville, S., Watier, N., & Boutet, I. (2014). Configural and featural discriminations use the same spatial frequencies: A model observer versus human observer analysis. Perception, 43(6), 509-526. https://doi.org/10.1068/p7531
  • Collishaw, S. M., & Hole, G. J. (2000). Featural and configurational processes in the recognition of faces of different familiarity. Perception, 29(8), 893-909. https://doi.org/10.1068/p2949
  • Comfort, W. E., & Zana, Y. (2015). Face detection and individuation: Interactive and complementary stages of face processing. Psychology & Neuroscience, 8(4), 442. https://doi.org/10.1037/h0101278
  • Crouzet, S. M., & Thorpe, S. J. (2011). Low-level cues and ultra-fast face detection. Frontiers in psychology, 2, 342. https://doi.org/10.3389/fpsyg.2011.00342
  • Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: face detection in just 100 ms. Journal of vision, 10(4), 16-16. https://doi.org/10.1167/10.4.16
  • Dakin, S. C., & Mareschal, I. (2000). Sensitivity to contrast modulation depends on carrier spatial frequency and orientation. Vision research, 40(3), 311-329. https://doi.org/10.1016/S0042-6989(99)00179-0
  • Einhäuser, W., & König, P. (2003). Does luminance-contrast contribute to a saliency map for overt visual attention?. European Journal of Neuroscience, 17(5), 1089-1097. https://doi.org/10.1046/j.1460-9568.2003.02508.x
  • Einhäuser, W., Rutishauser, U., Frady, E. P., Nadler, S., König, P., & Koch, C. (2006). The relation of phase noise and luminance contrast to overt attention in complex visual stimuli. Journal of vision, 6(11), 1-1. https://doi.org/10.1167/6.11.1
  • Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: scanning emotional facial expressions. Emotion, 11(4), 860-865. https://doi.org/10.1037/a0022758
  • Ekman, P. (1992). An argument for basic emotions. Cognition & emotion, 6(3-4), 169-200. https://doi.org/10.1080/02699939208411068
  • Fodor, J. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, Mass: MIT Press.
  • Fodor, J. A. (2000). The mind doesn’t work that way: The scope and limits of computational psychology. MIT press. Retrieved from http://www.sscnet.ucla.edu/comm/steen/cogweb/Abstracts/Sutherland_on_Fodor_00.html
  • Frey, H. P., König, P., & Einhäuser, W. (2007). The role of first-and second-order stimulus features for human overt attention. Perception & Psychophysics, 69(2), 153-161. https://doi.org/10.3758/bf03193738
  • Fuchs, I., Ansorge, U., Redies, C., & Leder, H. (2011). Salience in paintings: bottom-up influences on eye fixations. Cognitive Computation, 3(1), 25-36. https://doi.org/10.1007/s12559-010-9062-3
  • Gao, D., Han, S., & Vasconcelos, N. (2009). Discriminant saliency, the detection of suspicious coincidences, and applications to visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(6), 989-1005. https://doi.org/10.1109/TPAMI.2009.27
  • Gao, D., & Vasconcelos, N. (2007). Bottom-up saliency is a discriminant process. Proceedings of the IEEE International Conference on Computer Vision. https://doi.org/10.1109/ICCV.2007.4408851
  • Graham, N. V. (2011). Beyond multiple pattern analyzers modeled as linear filters (as classical V1 simple cells): Useful additions of the last 25 years. Vision research, 51(13), 1397-1430. https://doi.org/10.1016/j.visres.2011.02.007
  • Guyader, N., Chauvin, A., Boucart, M., & Peyrin, C. (2017). Do low spatial frequencies explain the extremely fast saccades towards human faces?. Vision research, 133, 100-111. https://doi.org/10.1016/j.visres.2016.12.019
  • Harris, A., & Aguirre, G. K. (2008). The representation of parts and wholes in face-selective cortex. Journal of Cognitive Neuroscience, 20(5), 863-878. https://doi.org/10.1162/jocn.2008.20509
  • Honey, C., Kirchner, H., & VanRullen, R. (2008). Faces in the cloud: Fourier power spectrum biases ultrarapid face detection. Journal of vision, 8(12), 9-9. https://doi.org/10.1167/8.12.9
  • Hou, W., Gao, X., Tao, D., & Li, X. (2013). Visual saliency detection using information divergence. Pattern Recognition, 46(10), 2658-2669. https://doi.org/10.1016/j.patcog.2013.03.008
  • Hou, X., & Zhang, L. (2007, June). Saliency detection: A spectral residual approach. In 2007 IEEE Conference on computer vision and pattern recognition (pp. 1-8). IEEE. https://doi.org/10.1109/CVPR.2007.383267
  • Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature reviews neuroscience, 2(3), 194-203. https://doi.org/10.1038/35058500
  • Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence, 20(11), 1254-1259. https://doi.org/10.1109/34.730558
  • Kanwisher, N. (2000). Domain specificity in face perception. Nature neuroscience, 3(8), 759-763. https://doi.org/10.1038/77664
  • Kingdom, F. A., & Keeble, D. R. (1999). On the mechanism for scale invariance in orientation-defined textures. Vision Research, 39(8), 1477-1489. https://doi.org/10.1016/S0042-6989(98)00217-X
  • Kingdom, F.A.A., Prins, N., & Hayes, A. (2003). Mechanism independence for texture-modulation detection is consistent with a filter-rectify-filter mechanism. Vis. Neurosci., 20, 65-76. https://doi.org/10.1017/s0952523803201073
  • Kosonogov, V., Vorobyeva, E., Kovsh, E., & Ermakov, P. (2019). A review of neurophysiological and genetic correlates of emotional intelligence. International Journal of Cognitive Research in Science, Engineering and Education (IJCRSEE), 7(1), 137–142. https://doi.org/10.5937/ijcrsee1901137K
  • Landy, M. S., & Oruç, I. (2002). Properties of second-order spatial frequency channels. Vision research, 42(19), 2311-2329. https://doi.org/10.1016/S0042-6989(02)00193-1
  • Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H., Hawk, S. T., & Van Knippenberg, A. D. (2010). Presentation and validation of the Radboud Faces Database. Cognition and emotion, 24(8), 1377-1388. https://doi.org/10.1080/02699930903485076
  • Leder, H., & Bruce, V. (1998). Local and Relational Aspects of Face Distinctiveness. The Quarterly Journal of Experimental Psychology Section A, 51(3), 449–473. https://doi.org/10.1080/713755777
  • Li, G., Yao, Z., Wang, Z., Yuan, N., Talebi, V., Tan, J., ... & Baker, C. L. (2014). Form-cue invariant second-order neuronal responses to contrast modulation in primate area V2. Journal of Neuroscience, 34(36), 12081-12092. https://doi.org/10.1523/JNEUROSCI.0211-14.2014
  • Liu, J., Harris, A., & Kanwisher, N. (2002). Stages of processing in face perception: an MEG study. Nature neuroscience, 5(9), 910-916. https://doi.org/10.1038/nn909
  • Liu, J., Harris, A., & Kanwisher, N. (2010). Perception of face parts and face configurations: an fMRI study. Journal of cognitive neuroscience, 22(1), 203-211. https://doi.org/10.1162/jocn.2009.21203
  • Liu, J., Higuchi, M., Marantz, A., & Kanwisher, N. (2000). The selectivity of the occipitotemporal M170 for faces. Neuroreport, 11(2), 337-341. https://doi.org/10.1097/00001756-200002070-00023
  • Liu, L., & Ioannides, A. A. (2010). Emotion separation is completed early and it depends on visual field presentation. PloS one, 5(3), e9790. https://doi.org/10.1371/journal.pone.0009790
  • Lobmaier, J. S., Klaver, P., Loenneker, T., Martin, E., & Mast, F. W. (2008). Featural and configural face processing strategies: evidence from a functional magnetic resonance imaging study. Neuroreport, 19(3), 287-291. https://doi.org/10.1097/WNR.0b013e3282f556fe
  • Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, 91(630), 2-2.
  • Luria, S. M., & Strauss, M. S. (1978). Comparison of Eye Movements over Faces in Photographic Positives and Negatives. Perception, 7(3), 349–358. https://doi.org/10.1068/p070349
  • Marat, S., Rahman, A., Pellerin, D., Guyader, N., & Houzet, D. (2013). Improving visual saliency by adding ‘face feature map’ and ‘center bias’. Cognitive Computation, 5(1), 63-75. https://hal.archives-ouvertes.fr/hal-00703762
  • Meinhardt-Injac, B., Persike, M., & Meinhardt, G. (2010). The time course of face matching by internal and external features: Effects of context and inversion. Vision Research, 50(16), 1598-1611. https://doi.org/10.1016/j.visres.2010.05.018
  • Mertens, I., Siegmund, H., & Grüsser, O. J. (1993). Gaze motor asymmetries in the perception of faces during a memory task. Neuropsychologia, 31(9), 989-998. https://doi.org/10.1016/0028-3932(93)90154-R
  • Näsänen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision research, 39(23), 3824-3833. https://doi.org/10.1016/s0042-6989(99)00096-6
  • Olszanowski, M., Pochwatko, G., Kuklinski, K., Scibor-Rylski, M., Lewinski, P., & Ohme, R. K. (2015). Warsaw set of emotional facial expression pictures: a validation study of facial display photographs. Frontiers in psychology, 5, 1516. https://doi.org/10.3389/fpsyg.2014.01516
  • Pantic, M., Valstar, M., Rademaker, R., & Maat, L. (2005, July). Web-based database for facial expression analysis. In 2005 IEEE international conference on multimedia and Expo (pp. 5-pp). IEEE. https://doi.org/10.1109/ICME.2005.1521424
  • Pele, O., & Werman, M. (2009, September). Fast and robust earth mover’s distances. In 2009 IEEE 12th international conference on computer vision (pp. 460-467). IEEE. https://doi.org/10.1109/ICCV.2009.5459199
  • Perazzi, F., Krähenbühl, P., Pritch, Y., & Hornung, A. (2012, June). Saliency filters: Contrast based filtering for salient region detection. In 2012 IEEE conference on computer vision and pattern recognition (pp. 733-740). IEEE. https://doi.org/10.1109/CVPR.2012.6247743
  • Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences, 109(48), E3314-E3323. https://doi.org/10.1073/pnas.1214269109
  • Reddy, L., Wilken, P., & Koch, C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of vision, 4(2), 106-117. https://doi.org/10.1167/4.2.4
  • Reynaud, A., & Hess, R. F. (2012). Properties of spatial channels underlying the detection of orientation-modulations. Experimental brain research, 220(2), 135-145. https://doi.org/10.1007/s00221-012-3124-6
  • Rivolta, D. (2014). Cognitive and neural aspects of face processing. In Prosopagnosia (pp. 19-40). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40784-0_2
  • Rossion, B., Dricot, L., Devolder, A., Bodart, J. M., Crommelinck, M., Gelder, B. D., & Zoontjes, R. (2000). Hemispheric asymmetries for whole-based and part-based face processing in the human fusiform gyrus. Journal of cognitive neuroscience, 12(5), 793-802. https://doi.org/10.1162/089892900562606
  • Royer, J., Blais, C., Charbonneau, I., Déry, K., Tardif, J., Duchaine, B., ... & Fiset, D. (2018). Greater reliance on the eye region predicts better face recognition ability. Cognition, 181, 12-20. https://doi.org/10.1016/j.cognition.2018.08.004
  • Ruiz-Soler, M., & Beltran, F. S. (2006). Face perception: An integrative review of the role of spatial frequencies. Psychological Research, 70(4), 273-292. https://doi.org/10.1007/s00426-005-0215-z
  • Schwaninger, A., Lobmaier, J. S., & Collishaw, S. M. (2002). Role of featural and configural information in familiar and unfamiliar face recognition. Lecture Notes in Computer Science, 2525, 643–650. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36181-2_64
  • Skirtach, I.A., Klimova, N.M., Dunaev, A.G., & Korkhova, V.A. (2019). Effects of rational psychotherapy on emotional state and cognitive attitudes of patients with neurotic disorders. Trends in the development of psycho-pedagogical education in the conditions of transitional society (ICTDPP-2019), 09011. https://doi.org/10.1051/SHSCONF/20197009011
  • Smith, M. L., Volna, B., & Ewing, L. (2016). Distinct information critically distinguishes judgments of face familiarity and identity. Journal of Experimental Psychology: Human Perception and Performance, 42(11), 1770. https://doi.org/10.1037/xhp0000243
  • Sun, P., & Schofield, A. J. (2011). The efficacy of local luminance amplitude in disambiguating the origin of luminance signals depends on carrier frequency: Further evidence for the active role of second-order vision in layer decomposition. Vision research, 51(5), 496-507. https://doi.org/10.1016/j.visres.2011.01.008
  • Sutter, A., Beck, J., & Graham, N. (1989). Contrast and spatial variables in texture segregation: Testing a simple spatial-frequency channels model. Perception & psychophysics, 46(4), 312-332. https://doi.org/10.3758/BF03204985
  • Sutter, A., Sperling, G., & Chubb, C. (1995). Measuring the spatial frequency selectivity of second-order texture mechanisms. Vision Research, 35(7), 915–924. https://doi.org/10.1016/0042-6989(94)00196-S
  • Tamietto, M., & De Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nature Reviews Neuroscience, 11(10), 697-709. https://doi.org/10.1038/nrn2889
  • Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of vision, 7(14). http://dx.doi.org/10.1167/7.14.4
  • Theeuwes, J. (2010). Top–down and bottom–up control of visual selection. Acta psychologica, 135(2), 77-99. https://doi.org/10.1016/j.actpsy.2010.02.006
  • Theeuwes, J. (2014). Spatial orienting and attentional capture. The Oxford handbook of attention, 231-252. https://doi.org/10.1093/oxfordhb/9780199675111.013.005
  • Valenti, R., Sebe, N., & Gevers, T. (2009, September). Image saliency by isocentric curvedness and color. In 2009 IEEE 12th international conference on Computer vision (pp. 2185-2192). IEEE. https://doi.org/10.1109/ICCV.2009.5459240
  • Vorobyeva, E., Hakunova, F., Skirtach, I., & Kovsh, E. (2019). A review of current research on genetic factors associated with the functioning of the perceptual and emotional systems of the brain. In SHS Web of Conferences (Vol. 70, p. 09009). EDP Sciences. https://doi.org/10.1051/SHSCONF/20197009009
  • Vuilleumier, P. (2002). Facial expression and selective attention. Current Opinion in Psychiatry, 15(3), 291-300. https://doi.org/10.1097/00001504-200205000-00011
  • Willenbockel, V., Fiset, D., Chauvin, A., Blais, C., Arguin, M., Tanaka, J. W., ... & Gosselin, F. (2010). Does face inversion change spatial frequency tuning?. Journal of Experimental Psychology: Human Perception and Performance, 36(1), 122. https://doi.org/10.1037/a0016465
  • Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological science, 17(7), 592-598. https://doi.org/10.1111/j.1467-9280.2006.01750.x
  • Wu, J., Qi, F., Shi, G., & Lu, Y. (2012). Non-local spatial redundancy reduction for bottom-up saliency estimation. Journal of Visual Communication and Image Representation, 23(7), 1158-1166. https://doi.org/10.1016/j.jvcir.2012.07.010
  • Xia, C., Qi, F., Shi, G., & Wang, P. (2015). Nonlocal center–surround reconstruction-based bottom-up saliency estimation. Pattern Recognition, 48(4), 1337-1348. https://doi.org/10.1016/j.patcog.2014.10.007