Using Machine Learning and Brain Imaging to Understand Categorization in Noisy Environments

Learning to recognize and categorize objects is an essential cognitive skill that allows animals to function in the world. For example, recognizing another animal as a friend or a foe helps determine how to interact with it. Likewise, recognizing a plant as edible (or not) can ensure survival. However, animals rarely have an ideal view of an object, cleanly separated from its environment. The same object is often seen from a different viewpoint, partially obstructed, or in less-than-ideal lighting conditions. It is therefore essential to study categorization under noisy and degraded viewing conditions.

How does the brain process categorization stimuli in degraded conditions? One possibility is that brain areas typically associated with visual processing in posterior cortex (e.g., V1, V2, V3, V4) extract the stimulus from its environment (background noise), and that brain areas typically associated with categorization [e.g., striatum, prefrontal cortex (PFC), hippocampus (HC)] are not affected by the degraded conditions. Another possibility is that visual processing is not affected by the viewing condition, but that the categorization systems receive a degraded stimulus representation in poor viewing conditions and need to adjust their processing accordingly.

To disentangle these two possibilities, young adults were scanned at the Purdue MRI Facility while categorizing novel abstract stimuli covered by masks with different levels of transparency. A machine learning method, the support vector machine (SVM), was trained to predict the viewing condition of each stimulus from the measured brain activity alone, a process sometimes referred to as “mind reading.” A whole-brain analysis showed that the SVM could discriminate the most degraded viewing condition from the other two (less degraded) viewing conditions.
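As an illustration, here is a minimal sketch of this kind of decoding analysis in Python using scikit-learn. It is not the study’s actual pipeline: the data are random placeholders standing in for trial-by-voxel fMRI activity patterns, and the trial counts, voxel counts, and classifier settings are all assumptions.

```python
# A minimal sketch (not the study's actual pipeline) of the decoding analysis
# described above, using scikit-learn. The data are random placeholders that
# stand in for trial-by-voxel fMRI activity patterns; trial and voxel counts
# are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 5000                 # hypothetical dataset size
X = rng.standard_normal((n_trials, n_voxels))  # one activity pattern per trial
y = rng.integers(0, 3, size=n_trials)          # viewing condition (3 mask levels)

# A linear SVM evaluated with cross-validation: accuracy reliably above
# chance (1/3 here) would indicate that the activity patterns carry
# information about the viewing condition.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

With real data in place of the random placeholders, the same cross-validated accuracy is the quantity that tells the researcher whether brain activity distinguishes the viewing conditions.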

Analysis of the patterns learned by the SVM showed that the posterior visual areas V1, V2, V3, and V4 were the most important for discriminating between the viewing conditions. This result was further supported by region-of-interest (ROI) analyses that focused on specific brain areas: activity in each of V1, V2, V3, and V4 was individually sufficient to identify the level of stimulus degradation. In contrast, the striatum, PFC, and HC, brain areas generally associated with stimulus categorization, could not identify the level of stimulus degradation.
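To make the ROI logic concrete, the sketch below repeats the decoding under the same assumptions as the previous example, but restricts the classifier to the voxels inside each region. The ROI names come from the text; the voxel indices and data are hypothetical placeholders.

```python
# A rough sketch of the ROI analysis under the same assumptions as the
# previous example: decoding is repeated using only the voxels inside each
# region. The ROI names come from the text; the voxel indices and data are
# hypothetical placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 5000
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 3, size=n_trials)

# Hypothetical voxel-index masks for a few of the regions discussed above.
rois = {
    "V1": np.arange(0, 800),
    "V4": np.arange(800, 1400),
    "striatum": np.arange(1400, 1900),
    "PFC": np.arange(1900, 2600),
}

# Per-ROI decoding: a region whose accuracy stays at chance carries no
# information about the level of stimulus degradation.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
for name, voxel_idx in rois.items():
    scores = cross_val_score(clf, X[:, voxel_idx], y, cv=5)
    print(f"{name}: accuracy = {scores.mean():.2f} (chance = 0.33)")
```

In this framing, the study’s result corresponds to above-chance accuracy for the visual ROIs and chance-level accuracy for the striatum, PFC, and HC.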

Together, these results support the hypothesis that when a stimulus is difficult to extract from its background, the visual system extracts the stimulus before passing it on to the brain systems responsible for categorization. These results are important for bridging cognitive neuroscience work on visual attention with cognitive neuroscience work on categorization. They also have important implications for neurological patient populations. For example, patients with a brain lesion affecting only the posterior visual system may retain unimpaired categorization ability and could therefore benefit from help with isolating visual stimuli from their environment.

In contrast, diseases directly affecting the categorization systems, e.g., Huntington’s disease, should leave the visual system mostly intact, so aid with isolating visual stimuli from their environment may not be helpful in this case. Clearly, more work is needed to better understand how the brain processes information, and machine learning methods such as SVMs may help accelerate these discoveries.

These findings are described in the article entitled “The effect of integration masking on visual processing in perceptual categorization,” recently published in the journal Brain and Cognition. This work was conducted by Sébastien Hélie from Purdue University.