Visual information is transmitted to the brain through two types of light-sensing cells (rods and cones) located across the retina. The fovea, a small central region of the retina, is densely populated with cone cells, which enable the perception of fine detail. Because of this configuration, our eyes move constantly to shift our gaze so that a region of interest is aligned with the fovea and can be perceived in greater detail.
Such shifts of gaze can occur as a result of internal (conscious decision) or external (unconscious reaction to the environment) influences. For instance, if we are walking through a crowded street looking for a friend who told us they were wearing a bright red hat, we consciously move our eyes in search of a red hat. By contrast, if we are walking through a crowded street where everyone is dressed mostly in dark colors and we are not looking for anyone in particular, someone in a bright red jacket emerging from a side street will stand out, automatically capturing our attention and causing us to look in their direction.
Within the cognitive and behavioral sciences literature, these influences on gaze shifts are referred to as top-down and bottom-up inputs. Top-down input refers to how our knowledge, experience, and intentions influence our actions (e.g., the movement of our eyes), whereas bottom-up input refers to how the salience or unexpectedness of objects in the environment can capture our attention and influence our actions.
Computationally speaking, eye movements are not generated by a simple dichotomy in which either our intention or our environment causes us to shift our gaze (Awh et al., 2012). Instead, the brain constantly matches sensory information (bottom-up) with existing knowledge and experience (top-down) to predict where we should look next. A good example of this interaction between knowledge and the visual environment is how novice and experienced drivers differ in the way they scan the traffic scene and mirrors. Experienced drivers know when to look at the mirrors and which hazards to pay attention to, whereas novice drivers tend to look more centrally. Here, prior knowledge contributes to the generation of efficient sequences of eye movements for the task of driving, whereas the lack of such knowledge results in a less efficient series of eye movements.
Because the predictive process in gaze control involves a wide range of brain networks responsible for attention, perception, and motor control, examining the complexity of eye movement patterns can provide insight into an individual's mental state. One method of examining eye movement complexity is gaze entropy, which provides quantitative estimates of where we look (SGE: stationary gaze entropy) and of the pattern with which our eyes move between different regions of what we are looking at (GTE: gaze transition entropy).
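To make these measures concrete, here is a minimal sketch of how SGE and GTE can be computed once fixations have been mapped to discrete regions (spatial bins or areas of interest). The function names and the toy fixation sequence are illustrative assumptions rather than code from the published review: SGE is computed as the Shannon entropy of the fixation distribution, and GTE as the entropy of transitions between regions conditioned on the current region.

```python
import numpy as np

def stationary_gaze_entropy(states):
    """SGE: Shannon entropy, H = -sum(p_i * log2(p_i)), of the
    distribution of fixations over discrete regions."""
    _, counts = np.unique(states, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gaze_transition_entropy(states):
    """GTE: conditional entropy of region-to-region transitions,
    H = -sum_i p_i * sum_j p(j|i) * log2(p(j|i))."""
    labels = list(np.unique(states))
    index = {s: k for k, s in enumerate(labels)}
    n = len(labels)
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[index[a], index[b]] += 1
    row_totals = counts.sum(axis=1)
    p_stationary = row_totals / row_totals.sum()
    h = 0.0
    for i in range(n):
        if row_totals[i] == 0:
            continue  # region has no outgoing transitions
        p_cond = counts[i] / row_totals[i]
        p_cond = p_cond[p_cond > 0]  # treat 0 * log(0) as 0
        h -= p_stationary[i] * float(np.sum(p_cond * np.log2(p_cond)))
    return h

# Toy fixation sequence over three regions of a driving scene
sequence = ["road", "mirror", "road", "road", "speedo",
            "road", "mirror", "road"]
print(stationary_gaze_entropy(sequence))  # how dispersed our fixations are
print(gaze_transition_entropy(sequence))  # how predictable our scanning is
```

In this toy example, a driver who cycles predictably between road and mirrors yields low GTE, whereas erratic jumping between regions yields higher GTE; the region labels and sequence are purely illustrative.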
Gaze transition entropy, in particular, indicates how much top-down input (our knowledge and understanding) contributes to the control of our eye movements. As a result, if mental state is affected by intoxication, anxiety, stress, fatigue, or illness, the pattern of eye movements (i.e., GTE) and where we look (i.e., SGE) become noticeably altered. In our review of these methods, we propose that insufficient contribution of top-down input to gaze control results in lower GTE, whereas too much of it may cause interference and increase GTE beyond the optimal range for a given task and visual environment (Shiferaw et al., 2019).
Accurate and efficient scanning of our surroundings through eye movements is critical, as it guides our actions in everyday tasks such as driving as well as in specialized operational settings. Given this importance, objective measures of gaze complexity make it possible to assess the efficiency of visual scanning behavior in applied settings (Di Stasi et al., 2016; Diaz-Piedra et al., 2019; Shiferaw et al., 2018).
With the rapid advancement of eye-tracking technology, these measures could make it possible to monitor the internal cognitive state of individuals in operational settings and detect impairment before performance degrades. Real-time detection of cognitive impairment through gaze analysis has broad applications, from human-computer interaction to the development of non-invasive diagnostic tools for neurological conditions that can be deployed in naturalistic settings (Shaikh and Zee, 2017).
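As an illustration of how such monitoring might work, the sketch below reuses the gaze_transition_entropy function from the earlier example over a sliding window of recent fixations. The window size and the "normal" entropy bounds are placeholder assumptions, not values from the review; in practice they would need to be calibrated to the task, the visual environment, and the individual.

```python
from collections import deque

def monitor_gte(fixation_stream, window=100, low=0.8, high=2.5):
    """Hypothetical real-time monitor: recompute GTE over the most
    recent `window` fixations and flag values outside an assumed
    task-specific range (thresholds are arbitrary placeholders)."""
    recent = deque(maxlen=window)
    for region in fixation_stream:
        recent.append(region)
        if len(recent) == window:
            gte = gaze_transition_entropy(list(recent))
            if gte < low:      # overly rigid, stereotyped scanning
                yield ("low GTE", gte)
            elif gte > high:   # overly random, inefficient scanning
                yield ("high GTE", gte)
```

A sliding window keeps the estimate responsive to recent changes in scanning behavior while still pooling enough transitions for a stable entropy estimate, which is one plausible way to trade off detection latency against noise.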
These findings are described in the article entitled *A review of gaze entropy as a measure of visual scanning efficiency*, recently published in the journal Neuroscience & Biobehavioral Reviews.
References:
- Awh, E., Belopolsky, A.V., Theeuwes, J., 2012. Top-down versus bottom-up attentional control: a failed theoretical dichotomy. Trends Cogn. Sci. 16, 437–443. https://linkinghub.elsevier.com/retrieve/pii/S1364661312001489
- Di Stasi, L.L., Diaz-Piedra, C., Rieiro, H., Sánchez Carrión, J.M., Martin Berrido, M., Olivares, G., Catena, A., 2016. Gaze entropy reflects surgical task load. Surg. Endosc. 30, 5034–5043. https://link.springer.com/article/10.1007/s00464-016-4851-8
- Diaz-Piedra, C., Rieiro, H., Cherino, A., Fuentes, L.J., Catena, A., Di Stasi, L.L., 2019. The effects of flight complexity on gaze entropy: An experimental study with fighter pilots. Appl. Ergon. 77, 92–99. https://linkinghub.elsevier.com/retrieve/pii/S0003687019300274
- Shaikh, A.G., Zee, D.S., 2017. Eye Movement Research in the Twenty-First Century—a Window to the Brain, Mind, and More. The Cerebellum. https://link.springer.com/article/10.1007/s12311-017-0910-5
- Shiferaw, B., Downey, L., Crewther, D., 2019. A review of gaze entropy as a measure of visual scanning efficiency. Neurosci. Biobehav. Rev. 96, 353–366. https://linkinghub.elsevier.com/retrieve/pii/S0149763418303075
- Shiferaw, B.A., Downey, L.A., Westlake, J., Stevens, B., Rajaratnam, S.M.W., Berlowitz, D.J., Swann, P., Howard, M.E., 2018. Stationary gaze entropy predicts lane departure events in sleep-deprived drivers. Sci. Rep., in press. https://www.nature.com/articles/s41598-018-20588-7