Visual information is transmitted to the brain through two types of light-sensing cells (rods and cones) located across the retina. The fovea, a small central region of the retina, is densely populated by cone cells, which enable the perception of fine detail. Because of this configuration, our eyes move constantly, shifting our gaze to align each region of interest with the fovea so we can perceive it in greater detail.

Such shifts of gaze can occur through internal (a conscious decision) or external (an unconscious reaction to the environment) influence. For instance, if we are walking through a crowded street looking for a friend who told us they were wearing a bright red hat, we consciously move our eyes in search of a red hat. By contrast, if we are walking through a crowded street where everyone is dressed mostly in dark colors and we are not looking for anyone in particular, someone in a bright red jacket emerging from a side street will stand out, automatically capturing our attention and causing us to look in their direction.


Within the cognitive and behavioral sciences literature, these influences on gaze shifts are referred to as top-down and bottom-up inputs. Top-down input refers to how our knowledge, experience, and intentions influence our actions (e.g. the movement of our eyes), whereas bottom-up input refers to how the salience or unexpectedness of objects in the environment can capture our attention and influence our actions.

Computationally speaking, the way eye movements are generated is not necessarily a dichotomy where it is either our intention or our environment that causes us to shift our gaze (Awh et al., 2012). Instead, the brain is constantly matching sensory information (bottom-up) with existing knowledge and experience (top-down) to make predictions regarding where we should be looking next. A good example of this interaction between knowledge and the visual environment is how novice and experienced drivers differ in the way they scan the traffic scene and mirrors. Experienced drivers know when to look at the mirrors and which hazards to pay attention to, whereas novice drivers tend to look more centrally. In this case, prior knowledge contributes to the generation of efficient sequences of eye movements for the task of driving, whereas the lack of such knowledge means a less efficient series of eye movements.

Because the predictive process in gaze control involves a wide range of networks in the brain that are responsible for attention, perception and motor control, examining the complexity in eye movement patterns can provide insight into the mental state of an individual. One method of examining eye movement complexity is gaze entropy which provides a quantitative estimation of where we look (SGE: stationary gaze entropy) and the pattern with which our eyes move between different regions of what we are looking at (GTE: gaze transition entropy).
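In practice, both measures are typically computed from a sequence of fixations classified into areas of interest (AOIs): SGE is the Shannon entropy of the overall distribution of fixations across AOIs, and GTE is the conditional entropy of the AOI-to-AOI transitions, treating the scan path as a first-order Markov chain. The following is a minimal sketch of this standard formulation; the AOI labels and the `scan` sequence are hypothetical illustrative data, not from the study.

```python
import numpy as np

def stationary_gaze_entropy(fixations):
    """Shannon entropy (bits) of the distribution of fixations across AOIs."""
    _, counts = np.unique(fixations, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gaze_transition_entropy(fixations):
    """Conditional entropy (bits) of AOI-to-AOI transitions,
    modeling the scan path as a first-order Markov chain."""
    aois, idx = np.unique(fixations, return_inverse=True)
    n = len(aois)
    counts = np.zeros((n, n))
    for a, b in zip(idx[:-1], idx[1:]):          # tally observed transitions
        counts[a, b] += 1
    row_totals = counts.sum(axis=1)
    p_i = row_totals / row_totals.sum()          # stationary probability of each source AOI
    H = 0.0
    for i in range(n):
        if row_totals[i] == 0:
            continue
        p_ij = counts[i] / row_totals[i]         # transition probabilities from AOI i
        nz = p_ij > 0
        H -= p_i[i] * np.sum(p_ij[nz] * np.log2(p_ij[nz]))
    return H

# Hypothetical scan path over driving-related AOIs
scan = ["road", "mirror", "road", "speedo", "road", "mirror", "road"]
sge = stationary_gaze_entropy(scan)
gte = gaze_transition_entropy(scan)
```

Higher SGE indicates gaze spread more evenly across the scene, while higher GTE indicates less predictable transitions between regions; both are bounded above by the entropy of a uniform distribution over the AOIs used.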


Gaze transition entropy, in particular, indicates how much top-down input (our knowledge and understanding) is contributing towards the control of our eye movements. As a result, if mental state is affected by intoxication, anxiety, stress, fatigue, or illness, the pattern of eye movements (i.e. GTE) and where we look (i.e. stationary gaze entropy) become noticeably altered. In our review of these methods, we propose that insufficient contribution of top-down input to gaze control would result in lower GTE, whereas too much of it may cause interference and increase GTE beyond the optimal range for a given task and visual environment (Shiferaw et al., 2019).

Accurate and efficient scanning of our surroundings through eye movements is critical, as it guides our actions in everyday tasks such as driving as well as in specialized operational settings. Given this importance of eye movements, such objective measures of gaze complexity make it possible to assess the efficiency of our visual scanning behavior in applied settings (Di Stasi et al., 2016; Diaz-Piedra et al., 2019; Shiferaw et al., 2018).

With the rapid advancement of eye-tracking technologies, these measures will make it possible to monitor the internal cognitive state of individuals in operational settings and detect impairment to prevent performance degradation. Real-time detection of cognitive impairment through gaze analysis has broad applications from human-computer interaction to the development of non-invasive diagnostic tools for neurological conditions that can be implemented in naturalistic settings (Shaikh and Zee, 2017).

These findings are described in the article entitled A review of gaze entropy as a measure of visual scanning efficiency, recently published in the journal Neuroscience & Biobehavioral Reviews.


  1. Awh, E., Belopolsky, A.V., Theeuwes, J., 2012. Top-down versus bottom-up attentional control: a failed theoretical dichotomy. Trends Cogn. Sci. 16, 437–443.
  2. Di Stasi, L.L., Diaz-Piedra, C., Rieiro, H., Sánchez Carrión, J.M., Martin Berrido, M., Olivares, G., Catena, A., 2016. Gaze entropy reflects surgical task load. Surg. Endosc. 30, 5034–5043.
  3. Diaz-Piedra, C., Rieiro, H., Cherino, A., Fuentes, L.J., Catena, A., Di Stasi, L.L., 2019. The effects of flight complexity on gaze entropy: An experimental study with fighter pilots. Appl. Ergon. 77, 92–99.
  4. Shaikh, A.G., Zee, D.S., 2017. Eye Movement Research in the Twenty-First Century—a Window to the Brain, Mind, and More. The Cerebellum.
  5. Shiferaw, B., Downey, L., Crewther, D., 2019. A review of gaze entropy as a measure of visual scanning efficiency. Neurosci. Biobehav. Rev. 96, 353–366.
  6. Shiferaw, B.A., Downey, L.A., Westlake, J., Stevens, B., Rajaratnam, S.M.W., Berlowitz, D.J., Swann, P., Howard, M.E., 2018. Stationary gaze entropy predicts lane departure events in sleep-deprived drivers. Sci. Rep. in press.

About The Author

Brook Shiferaw is a research scientist at the Swinburne University of Technology, Centre for Human Psychopharmacology.

David Crewther started his career as a theoretical physicist, completing his Ph.D. at Caltech under Nobel Prize winner Murray Gell-Mann. His interest in neurophysiology started there under the influence of Prof. Jack Pettigrew. David's academic career has been diverse: successively at the National Vision Research Institute in Melbourne, the School of Optometry at the University of NSW in Sydney, the School of Psychological Science at La Trobe University in Melbourne, and thence to the Brain Sciences Institute at Swinburne in 2000. His academic interests include the neuroscience of normal and abnormal visual development, the psychophysics of visual attention, non-linear electrophysiology and functional neuroimaging of cognitive function, as well as neural mechanisms of refractive control. His studies have implications particularly for development in children — dyslexia, amblyopia, autism, myopia, and ADHD — as well as for an understanding of conscious awareness and mind/brain relations.

David has published widely, with over 170 publications in peer-reviewed scientific journals, mainly in the areas of vision, visual development, myopia, single-cell electrophysiology, evoked potential research, dyslexia, amblyopia, and learning disability, and more than 300 refereed conference abstracts. He has an extended record of funding from both the ARC and NHMRC over nearly 30 years, as well as other miscellaneous funding. He has recently held ARC DPs (through Swinburne) into the physiological mechanisms of refractive control and into the basic cortical mechanisms of form, colour and motion processing, an NHMRC project grant (through Swinburne) in the cognitive neuroscience of autistic tendency, and an ARC DP (through La Trobe University) into cortical mechanisms of reaching and grasping in time. David was responsible for the "CogNOSS" plan leading to the acquisition of MEG and MRI technologies at Swinburne.
David currently holds an Adjunct Professorship in Psychological Science at La Trobe University, and an Adjunct Professorship at the Zhejiang Institute for Integrative Neuroscience and Technology, Zhejiang University, Hangzhou, China. He has also been a Visiting Research Professor at the 3rd Military Medical University of Chongqing, China. He served as President of the Australasian Cognitive Neuroscience Society in 2015 and is currently National Co-Chair of the Asia Pacific Conference on Vision.

Associate Professor Luke Downey is an NHMRC R.D. Wright Biomedical Career Development Fellow. He leads the Drugs and Driving Research Unit at Swinburne, managing a team of research assistants, Ph.D. students, and postdoctoral fellows.

Alongside his research in human psychopharmacology, Dr. Downey also examines how individual differences in emotional intelligence contribute to human behavior. This program of research initially focused on developing reliable and valid measures to assess emotional intelligence in adults and adolescents. These measures can now be used to examine the role of emotional intelligence in predicting outcomes such as scholastic achievement, bullying, stress coping strategies, and recruitment consultant revenue.

In 2015, Dr. Downey held a visiting scholar position at Harvard Medical School. He has also completed two years of postdoctoral work at Swansea University with Professor Andy Parrott.