Whatever we think or feel, almost all information in the brain is processed in the form of “spikes.” These are brief currents of ions flowing into and out of neurons, each lasting about one-thousandth of a second.
Spikes from a single neuron are nearly identical in shape. This suggests that information is represented by the timing of spikes, not by their shape. In other words, all our thought processes are coded by when and where spikes occur in the brain. In this sense, spikes correspond to bits in computers: they are the fundamental elements of thinking.
Scientists have long been trying to figure out how information is represented as sequences of spike timings, called spike trains. Determining how information is coded in spike trains would enable brain-machine interfaces (BMIs). BMIs would make many applications possible, including helping physically disabled patients move prostheses, communicating directly without speaking or listening, and backing up memories from the brain onto external devices.
One popular approach to decoding spike trains is to define a distance measure between them. With such a distance, spike trains can be grouped, providing the first step toward more advanced statistical analyses such as clustering and classification.
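To make the idea of a spike-train distance concrete, here is a minimal sketch of one well-known example, the van Rossum distance: each spike train is smoothed into a continuous trace by convolving its spikes with an exponential, and the distance is the Euclidean distance between the traces. This is an illustration of the general idea only, not necessarily the distance used in the work described here; the grid resolution, time constant `tau`, and example spike times are arbitrary choices.

```python
import numpy as np

def smooth(spikes, t_grid, tau=0.01):
    """Turn a list of spike times (in seconds) into a continuous trace
    by convolving each spike with a causal exponential of width tau."""
    trace = np.zeros_like(t_grid)
    for t_spike in spikes:
        mask = t_grid >= t_spike
        trace[mask] += np.exp(-(t_grid[mask] - t_spike) / tau)
    return trace

def van_rossum_distance(a, b, t_grid, tau=0.01):
    """Euclidean distance between the smoothed traces of two spike trains."""
    diff = smooth(a, t_grid, tau) - smooth(b, t_grid, tau)
    dt = t_grid[1] - t_grid[0]
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

t_grid = np.linspace(0.0, 1.0, 2001)   # 1-second window, 0.5 ms resolution
train_a = [0.10, 0.35, 0.80]
train_b = [0.11, 0.36, 0.79]           # nearly the same spike timings
train_c = [0.20, 0.55, 0.90]           # substantially shifted timings
print(van_rossum_distance(train_a, train_b, t_grid))  # small
print(van_rossum_distance(train_a, train_c, t_grid))  # larger
```

Trains with similar spike timings end up close together, which is exactly the property that makes clustering and classification possible.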
Another approach is to define a positive-definite kernel: a similarity measure that generalizes the inner product, which determines the angle between two vectors. Positive-definite kernels are the fundamental building blocks of a family of powerful machine learning methods known as kernel methods, which include Gaussian process regression. They can be viewed as an infinite-dimensional extension of the multivariate analysis commonly used in statistics.
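As a hedged illustration of what a positive-definite kernel on spike trains can look like (not the specific kernel proposed in this work), one simple construction sums a Laplacian kernel over all pairs of spike times from the two trains. Because the Laplacian kernel is positive definite on the real line, the resulting spike-train kernel is positive definite too, which we can check numerically on a Gram matrix; the time constant and example trains below are arbitrary.

```python
import numpy as np

def spike_kernel(x, y, tau=0.02):
    """A simple positive-definite kernel between two spike trains:
    the sum of Laplacian kernels exp(-|s - t| / tau) over all pairs
    of spike times s in x and t in y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if x.size == 0 or y.size == 0:
        return 0.0
    return float(np.exp(-np.abs(x[:, None] - y[None, :]) / tau).sum())

trains = [[0.10, 0.40], [0.12, 0.41], [0.30, 0.70, 0.90]]
K = np.array([[spike_kernel(a, b) for b in trains] for a in trains])

# A valid positive-definite kernel yields a positive semidefinite
# Gram matrix: all eigenvalues are non-negative (up to rounding error).
print(np.linalg.eigvalsh(K).min() >= -1e-9)
```

Once such a Gram matrix is available, any kernel method, from support vector machines to Gaussian process regression, can operate on spike trains directly.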
Various positive-definite kernels have been used for analyzing spike trains obtained from a single neuron. The novelty of this work is a framework for extending any single-neuron spike train kernel to multi-neuron spike trains in a mathematically sound and rigorous way. The proposed framework is based on Haussler’s R-convolution kernel, which creates a new kernel through a convolution operation. One key point of this work is to convolve a single-neuron spike train kernel with a factor analysis matrix, which models how similarity is defined between intra- and inter-neuron spike trains. The proposed method contrasts with previous approaches, which extended single-neuron spike train kernels to multi-neuron spike trains in a kernel-specific manner.
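A rough sketch of the linear-combination idea follows. This is not the paper's exact R-convolution construction; it only illustrates, under simplifying assumptions, how per-neuron kernel values can be combined through a weight matrix whose diagonal entries model intra-neuron similarity and whose off-diagonal entries model inter-neuron similarity. The base kernel, the weight matrix `A`, and the example data are all placeholders.

```python
import numpy as np

def base_kernel(x, y, tau=0.02):
    """A placeholder single-neuron spike train kernel
    (sum of Laplacian kernels over pairs of spike times)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if x.size == 0 or y.size == 0:
        return 0.0
    return float(np.exp(-np.abs(x[:, None] - y[None, :]) / tau).sum())

def multi_neuron_kernel(X, Y, A, tau=0.02):
    """Combine per-neuron kernel values with a weight matrix A.
    X and Y are multi-neuron recordings: one spike train per neuron.
    A[p, q] weights the similarity between neuron p of X and neuron q
    of Y.  If A is positive semidefinite and the base kernel is
    positive definite, the combined kernel remains positive definite."""
    k = np.array([[base_kernel(xp, yq, tau) for yq in Y] for xp in X])
    return float(np.sum(A * k))

# Two hypothetical recordings, each with two neurons.
X = [[0.10, 0.50], [0.30, 0.80]]
Y = [[0.11, 0.52], [0.29, 0.81]]
A = np.array([[1.0, 0.2],
              [0.2, 1.0]])   # mild cross-neuron weighting
print(multi_neuron_kernel(X, Y, A))
```

Setting `A` to the identity matrix recovers the plain sum of per-neuron similarities, while off-diagonal entries let the kernel credit similarity between spike trains of different neurons; estimating such a matrix from data is in the spirit of the factor analysis component described above.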
The work also defined subclasses of kernels with varying numbers of hyperparameters. These subclasses help keep estimation tractable when the amount of data is limited.
The new multi-neuron spike train kernel provides a new analysis tool for both experimentalists and theorists of neural information processing. Experiments using real spike trains recorded from a cat indicated that the proposed approach is more accurate than existing multi-neuron spike train decoding schemes.
These findings are described in the article entitled “Multineuron spike train analysis with R-convolution linear combination kernel,” recently published in the journal Neural Networks. This work was conducted by Taro Tezuka from the University of Tsukuba.