While you are reading this article, your brain is most likely faced with a variety of sensory stimuli: noise from your surroundings, perhaps a smell drifting over from the restaurant across the street, and plenty of visual input from the graphics on this website. It is up to your nervous system to properly integrate the incoming information from these different sensory modalities and to generate appropriate behavioral responses.
How these sensory inputs are encoded and kept separate so that humans and animals can function properly is not fully understood. In the traditional view, the individual senses are first processed separately and subsequently converge onto multimodal association areas in cortical and subcortical regions of the brain. However, new research suggests that multimodal processing occurs not only in these higher brain areas, but also in regions of the nervous system once considered modality-specific, and that these regions directly affect the behavioral response.
Several hypotheses exist for how such regions deal with multimodal information, following concepts developed for unisensory areas. One is that the information is kept completely separate, with an individual information channel for each sensory modality, much as engineers construct machines that decode information over dedicated channels. Another is rate coding: stimuli from different sense organs converge onto overlapping neurons, and both the strength of a stimulus and its type are encoded in the firing or burst frequencies of those neurons, i.e., in how strongly they are activated. A third possibility is a combinatorial code, in which sensory information is represented by a largely overlapping population of neurons, but with a distinct pattern of activation across the involved neurons (similar to a combination lock). A combinatorial code may allow networks to make more robust distinctions when parameter space is limited; this makes it less suitable for encoding a wide range of stimulus intensities, but a prime candidate for distinguishing between categorically different stimuli, such as sensory modalities.
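To make the three hypotheses concrete, here is a toy sketch (our own illustration, not code or data from the study) of how a hypothetical six-neuron population could keep two modalities apart under each scheme:

```python
# Toy illustration of three multimodal coding schemes in a
# hypothetical 6-neuron population (illustrative numbers only).

# 1) Labeled lines: disjoint channels -- each modality has its own neurons.
labeled_chemo = {0, 1, 2}    # neurons carrying only chemosensory input
labeled_mechano = {3, 4, 5}  # neurons carrying only mechanosensory input
assert labeled_chemo.isdisjoint(labeled_mechano)

# 2) Rate code: the same neurons respond to both modalities; the modality
#    is read from how strongly they fire (rates in Hz, made up here).
rate_chemo = [20, 20, 20, 20, 20, 20]
rate_mechano = [5, 5, 5, 5, 5, 5]

# 3) Combinatorial code: a largely overlapping population, but the
#    *pattern* of which neurons are excited (+1) or inhibited (-1)
#    differs between modalities, like the digits of a combination lock.
combo_chemo = (+1, +1, -1, +1, -1, +1)
combo_mechano = (+1, -1, +1, +1, +1, -1)

def decode(pattern, codebook):
    """Identify a modality by matching the full activation pattern."""
    for modality, template in codebook.items():
        if pattern == template:
            return modality
    return "unknown"

codebook = {"chemosensory": combo_chemo, "mechanosensory": combo_mechano}
print(decode((+1, -1, +1, +1, +1, -1), codebook))  # -> mechanosensory
```

Note that in the combinatorial scheme the two patterns share most of their neurons, yet remain perfectly distinguishable; this is the property the experiments described below test for.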
Few studies have tested these hypotheses and addressed how multisensory integration is achieved in networks that directly control behavioral output.
To tackle this question, we investigated how different sensory conditions are encoded in the crustacean stomatogastric nervous system. The advantage of this system is that a rather small pool of quite large neurons decides how to respond to various sensory stimuli, and that these decisions can be monitored in the electrical activity of motor neurons, even in a Petri dish. The decision-making neurons reside within the paired commissural ganglia (CoGs), each of which contains fewer than 220 neurons. These neurons integrate information from the different sensory modalities that control aspects of feeding in this nervous system. We studied how multiple sensory modalities are represented in this network when presented either individually (unimodal input) or simultaneously (bimodal input).
Using multineuron optical imaging with a fluorescent dye that reports the neurons’ membrane potentials, we first found that a large proportion of CoG neurons are involved in processing both chemosensory and mechanosensory information, with more than three-quarters of the neurons being multimodal. Since most neurons responded to both stimuli, this ruled out the possibility that distinct information channels exist for each sensory input. Further, we found that the rate at which these neurons were active did not differ between conditions, demonstrating that no rate code separates the sensory modalities.
Differences between modalities were, however, represented by the combination of neurons that responded to a particular pathway, i.e., the identities of the participating neurons. Specifically, the response sign (whether neurons were excited or inhibited by sensory stimulation) differed between sensory modalities. In addition, the motor neurons displayed different activity patterns in the two sensory conditions, suggesting that the observed combinations matter for behavior. Most strikingly, bimodal input was represented by a set of neurons distinct from either unimodal condition.
Not only does this suggest that the CoG network employs a combinatorial code to represent different sensory modalities (Figure 1), it also indicates that even in small motor systems the combined presence of two sensory stimuli results in a new sensation, one that is different from just the sum of the individual responses. Our findings may also provide a mechanism behind multimodal encoding and motor pattern selection in more complex networks, backed by the more recent evidence for combinatorial coding in higher-order brain regions, like the superior colliculus, basal ganglia, and even cortical areas.
There are still open questions regarding how neuronal networks process multiple sensory inputs simultaneously, especially when more than two sensory inputs are involved. What we do know is that in this sensorimotor system, two different modalities are processed by a largely overlapping set of neurons, and the differences between modalities are encoded in the combination of recruited neurons.
So if you are re-reading this text, which would not be surprising given its complexity, and the restaurant across the street is now playing after-dinner music, your brain uses largely the same neurons, but activated in a different combination. Not because the text has changed, but because it is now your ears, rather than your nose, telling you something about that restaurant. Hopefully, that new combination strikes you as novel enough to remember that a small neural system in a crab helped us understand this important mechanism of information processing.
These findings are described in the article entitled “Multimodal sensory information is represented by combinatorial code in a sensorimotor system,” recently published in the journal PLOS Biology.