Putting Visual Information Into Context

The cerebral cortex is the seat of advanced human faculties such as language, long-term planning, and complex thought. Research over the last 60 years has shown that some parts of the cortex have very specific roles, such as vision (visual cortex) or control of our limbs (motor cortex).

While categorizing different parts of the cortex according to their most apparent roles is convenient, it is also an oversimplification. Brain areas are highly interconnected, and studies over the past decade have shown that incoming sensory information is rapidly integrated with other variables related to a person's or an animal's behavior. Our internal state, such as current goals and motivations, as well as previous experience with a given stimulus, shapes how sensory information is processed. This can be referred to as the "context" within which a sensory input is perceived. For example, we perceive a small, brightly colored object differently when we see it in a candy store (where we might assume it is a candy) than when we see it in the jungle (where it might be a poisonous animal). In our recent article published in Cell Reports, we investigated the factors beyond visual inputs that shape the activity of cells in the visual cortex as animals learned to locate a reward in a virtual reality environment.

Researchers have known since the 1960s that cells in the primary visual cortex respond strikingly to specific features of the visual environment, such as bars moving across our visual field or edges of a given orientation. In the traditional view of a hierarchically organized visual system, the primary visual cortex encodes elementary visual features of our environment and forwards this information to higher cortical areas, which in turn combine these individual visual elements to represent objects and scenes. Our understanding of how an animal's current behavior influences this processing, however, was limited. Recent studies have started to address the question directly and found that neurons in the primary visual cortex show more complex responses than expected. We set out to use cutting-edge advances in systems neuroscience to understand what type of information is processed in the primary visual cortex of awake animals as they learn to find rewards.

A window into the brain, literally

We combined two recent technologies to record from a large number of individual neurons in the primary visual cortex while mice were awake and free to run. An advanced microscopy technique, two-photon calcium imaging, allowed us to visualize the mouse brain through a small implanted window and record the activity of hundreds of neurons at once.
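To give a rough sense of how calcium imaging signals are commonly quantified, neuronal activity is typically expressed as a relative change in fluorescence, ΔF/F. The sketch below uses a simple percentile baseline and made-up numbers; it is only an illustration of the general approach, not the specific processing pipeline used in the study.

```python
import numpy as np

def delta_f_over_f(fluorescence, baseline_percentile=20):
    """Convert a raw fluorescence trace into dF/F.

    Using a low percentile of the trace as the baseline F0 is one common
    (illustrative) choice; the baseline estimation in the study may differ.
    """
    f0 = np.percentile(fluorescence, baseline_percentile)
    return (fluorescence - f0) / f0

# Synthetic trace with one transient "calcium event" (hypothetical values)
trace = np.full(1000, 100.0)
trace[400:450] += 60.0
dff = delta_f_over_f(trace)
print(f"peak dF/F: {dff.max():.2f}")  # 0.60 for this toy trace
```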

A key challenge with this technique, however, is that the mouse's head has to remain fixed. We therefore built a virtual reality system in which an animal is placed atop a treadmill, surrounded by computer screens displaying a virtual environment through which it can move freely while its head stays in place. The virtual environment consisted of a linear corridor with a black-and-white striped pattern on the walls and a black wall section (the visual cue) where the mouse could trigger a reward (sugar water) by licking a spout. This allowed us to train animals to find rewards at a specific location within the virtual environment while simultaneously recording the activity of neurons in the visual cortex.
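To make the logic of the task concrete, here is a minimal sketch of how such a virtual corridor could be simulated. The corridor length, reward-zone coordinates, and function names are hypothetical placeholders for illustration, not the parameters or software used in the study.

```python
CORRIDOR_LENGTH_CM = 200.0        # hypothetical corridor length
REWARD_ZONE_CM = (140.0, 160.0)   # hypothetical reward zone (the black wall cue)

def update_position(position_cm, treadmill_step_cm, gain=1.0):
    """Advance the virtual position based on treadmill movement.

    `gain` scales how far the virtual world moves per cm of running;
    gain = 1 means virtual and treadmill movement match exactly.
    """
    position_cm += gain * treadmill_step_cm
    return position_cm % CORRIDOR_LENGTH_CM  # corridor restarts after the end

def maybe_deliver_reward(position_cm, licked):
    """Deliver sugar water only if the mouse licks inside the reward zone."""
    start, end = REWARD_ZONE_CM
    return licked and (start <= position_cm <= end)

# Example: the mouse runs 5 cm and licks near the end of the corridor
pos = update_position(position_cm=148.0, treadmill_step_cm=5.0)
print(pos, maybe_deliver_reward(pos, licked=True))  # 153.0 True
```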

More dedicated neurons, more reward

Our first, unexpected finding was that after learning the task, a large proportion of neurons (~80%) in the primary visual cortex responded to task-specific elements, with many cells becoming specifically more active as the animal approached the reward area. Interestingly, the number of these "task-responsive" cells strongly correlated with how well the animals performed the task. In other words, the more precisely animals were able to locate the reward, the more cells we found in the visual cortex that were active around that location of the virtual corridor. This was surprising: neurons in the visual cortex seemed to be as interested in where the animal could get a drop of sugar water as in the visual features of its surroundings. To test the impact of the visual reward cue (the black wall section) itself, we removed it and found that some neurons still responded at the rewarded location. This suggests that these visual cortex neurons no longer depended solely on visual information to elicit their responses.
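The kind of relationship described here can be illustrated as a correlation between the fraction of task-responsive cells and a measure of behavioral precision across sessions. The numbers below are invented purely to show the analysis idea; they are not data from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical values for a handful of imaging sessions: fraction of
# task-responsive cells and a behavioral precision score (e.g., how tightly
# licking clusters around the reward zone). Illustrative only.
task_responsive_fraction = np.array([0.45, 0.55, 0.62, 0.71, 0.80])
behavioral_precision = np.array([0.30, 0.42, 0.50, 0.66, 0.74])

r, p = stats.pearsonr(task_responsive_fraction, behavioral_precision)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```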

Visual inputs matter, but sometimes motor-related inputs matter more

These results opened up a number of interesting questions about what drives these responses, since it was clearly not visual inputs alone. Mice could use two strategies to locate the reward when no visual cue marked the reward point. The first strategy relies on their internal sense of distance based on feedback from the motor system (motor feedback): an estimate of how far the mouse has traveled based on how many steps it has taken since the beginning of the corridor. The second strategy relies on an estimate of position based on the way the visual world moves past the animal, known as "optic flow."

We took advantage of the unique opportunities in experimental design afforded by a virtual reality system to test which information drives those reward-location-specific responses. By creating a mismatch between the animal's own movement on the treadmill and the visual movement of the virtual environment, we could test whether motor feedback or visual flow determines where the animal thinks it is along the corridor, and, correspondingly, where the neurons representing the reward location become active. The results showed that the animals anticipated the reward location primarily based on motor feedback. In other words, some neurons in the primary visual cortex encode information related to the location of a reward based on the animal's motor behavior rather than on purely visual information.
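The logic of the mismatch experiment can be summarized in a few lines: if the visual gain is changed so that the virtual world moves more slowly than the treadmill, the point at which the animal anticipates the reward differs depending on whether it integrates motor feedback (distance run) or optic flow (virtual distance traveled). The values below are hypothetical and serve only to contrast the two predictions; they are not the gains or distances used in the study.

```python
# Hypothetical mismatch example (not the actual task parameters).
reward_distance_virtual_cm = 150.0   # where the reward sits in the virtual corridor
learned_running_distance_cm = 150.0  # distance the animal learned to run at gain = 1

visual_gain = 0.5  # virtual world now moves half as fast as the treadmill

# Prediction 1: the animal relies on motor feedback (distance run on the treadmill).
# It anticipates the reward after the learned running distance, which now
# corresponds to an earlier point in the virtual corridor.
virtual_position_motor = visual_gain * learned_running_distance_cm   # 75 cm

# Prediction 2: the animal relies on optic flow (virtual distance traveled).
# It anticipates the reward at the same virtual position, which now requires
# running farther on the treadmill.
running_distance_visual = reward_distance_virtual_cm / visual_gain   # 300 cm

print(f"Motor-feedback prediction: reward anticipated at {virtual_position_motor:.0f} cm "
      f"in the corridor, after {learned_running_distance_cm:.0f} cm of running")
print(f"Optic-flow prediction: reward anticipated at {reward_distance_virtual_cm:.0f} cm "
      f"in the corridor, after {running_distance_visual:.0f} cm of running")
```

Comparing where the animals licked (and where the reward-related neurons fired) against these two predictions is what allowed us to conclude that motor feedback dominated when no visual cue was present.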

However, in our final experiment, the importance of visual inputs became clear again: when the visual cue indicating the reward location was reintroduced, while the mismatch between treadmill and virtual movement was maintained, the animals' behavior, as well as the neuronal responses, snapped back to the visual cue, disregarding the number of steps they had taken. This suggests that motor feedback is available to and used by the primary visual cortex, but in a conflict situation, visual cues indicating a specific location override other types of information to correctly locate a reward.

Conclusion

These results demonstrate the importance of behavioral context for sensory processing in the brain. The primary visual cortex, a region once thought to represent our visual world mainly by detecting elementary visual features such as edges, is also influenced by prior experience, learning, and interactions with our environment. A prominent model proposed to explain sensory cortical function posits that the cerebral cortex creates a representation of what we expect, based on current sensory inputs and previous experience.

Our results are congruent with this model while emphasizing the large role of contextual factors, such as motor feedback and prior knowledge of a location. Future studies are necessary to determine how different types of inputs to sensory regions of the brain influence the activity of individual neurons and how they shape our perception of the world.

These findings are described in the article entitled "The Impact of Visual Cues, Reward, and Motor Feedback on the Representation of Behaviorally Relevant Spatial Locations in Primary Visual Cortex," recently published in the journal Cell Reports. This work was conducted by Janelle M.P. Pakan from the University of Edinburgh, the Otto-von-Guericke University, and the German Center for Neurodegenerative Diseases; Stephen P. Currie and Nathalie L. Rochefort from the University of Edinburgh; and Lukas Fischer from the University of Edinburgh and the Massachusetts Institute of Technology.