Scientists Are Using Machine Learning To Better Predict Epilepsy

Two aspects of this research are worth highlighting: (1) we showed that micro-structural extra-hippocampal abnormalities are consistent enough across medial temporal lobe epilepsy (TLE) patients that they can be used to predict TLE, and (2) we obtained regularization values for the models trained on this sparse data in an unusual but effective manner.

Our input data consisted of three diffusion imaging modalities: mean diffusivity (MD), fractional anisotropy (FA), and mean kurtosis (MK). Predictive models trained with MK proved the most accurate: 0.82, versus 0.68 for FA and 0.51 for MD. The highest coefficients of these linear models were also located within the inferior medial aspect of the temporal lobes, a region with complex fiber anatomy and many crossings.

Diffusion kurtosis imaging (DKI) is better suited than diffusion tensor imaging (DTI) to capturing fiber crossings because it models non-Gaussian water diffusion, which is prominent where fibers cross. This likely explains why MK, which is calculated from DKI, yielded higher accuracy than the other two modalities, which are calculated from DTI. Finally, if future work can further characterize these micro-structural abnormalities, it may become possible to distinguish the different TLE phenotypes. This is of clinical importance since treatment varies by phenotype.

Linear support vector machine (SVM) models are often used to predict an outcome from input data; in this case, we predict TLE status for each subject from a diffusion image (MK, MD, or FA). One of the challenges of using an SVM is finding the right value of the regularization parameter to prevent overfitting.
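As a minimal sketch of that setup, assuming scikit-learn and purely synthetic stand-in data (the shapes, labels, and C value below are illustrative, not the study's actual pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: one flattened diffusion map (e.g., MK) per subject,
# with a binary diagnosis label (1 = TLE, 0 = control).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))    # n_subjects x n_voxels
y = rng.integers(0, 2, size=60)   # diagnosis labels

# A linear SVM; C is the regularization parameter discussed above
# (larger C = weaker regularization = closer fit to the train data).
clf = SVC(kernel="linear", C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())
```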

One option is nested cross-validation: on each run of the cross-validation, sub-divide the train set into a sub-train and a sub-test set, use an algorithm such as grid search to find the regularization value that performs best on the sub-test set, and then use that value to train the model on the full train set and predict on the test set. However, this is computationally intensive, and further dividing the train set increases variability: the regularization values found in different folds of the cross-validation often differ by orders of magnitude. This is especially true for life-science problems with a small number of subjects and high dimensionality.
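In scikit-learn terms, this nested scheme can be sketched as follows; the grid, fold counts, and data are illustrative choices, and even this toy version requires over a hundred model fits:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Same synthetic stand-in data as before.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)

# Inner loop: GridSearchCV re-splits each outer train fold into
# sub-train/sub-test sets to pick the best C.
inner = GridSearchCV(SVC(kernel="linear"),
                     {"C": np.logspace(-3, 3, 7)}, cv=3)

# Outer loop: the model refit with the chosen C on the full train fold
# is scored on the held-out test fold.
print(cross_val_score(inner, X, y, cv=5).mean())
```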

Another option is to assume a distribution of values for the regularization parameter instead of searching for a single best value. A model is fitted on the train set for each regularization value drawn from the distribution, and the final prediction is calculated as a weighted average of the different models' predictions on the test set.
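A rough sketch of this scheme, continuing the synthetic example: a log-spaced grid of C values stands in for the assumed distribution, and a single train/test split stands in for one cross-validation fold.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One linear SVM per candidate regularization value.
Cs = np.logspace(-3, 3, 13)
models = [SVC(kernel="linear", C=C).fit(X_train, y_train) for C in Cs]

# Unweighted baseline: average the signed decision values, then threshold.
avg = np.mean([m.decision_function(X_test) for m in models], axis=0)
y_pred = (avg > 0).astype(int)
```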

The weight (importance) assigned to each model in the prediction average is determined by two competing factors: the weight is (1) proportional to how well the associated model fits the train data, and (2) inversely proportional to the distance between the model's regularization value and the minimum regularization value, Cμ, required to achieve zero classification error on the train data. The larger the regularization value, the better the model fits the train data, but the greater its distance from Cμ. The first factor ensures that we give more weight to models that at least fit the data; the second ensures that we don't give too much weight to models that overfit it.
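Continuing the sketch above (reusing models, Cs, and the train/test split), one plausible implementation of these weights follows. The exact functional form is our assumption; the description only fixes the two proportionalities, and we measure the distance from Cμ on a log scale since C varies over orders of magnitude.

```python
# Factor 1: how well each model fits the train data.
train_acc = np.array([m.score(X_train, y_train) for m in models])

# C_mu: the smallest C in the grid that achieves zero training error
# (fall back to the largest C if none does).
perfect = train_acc == 1.0
C_mu = Cs[perfect][0] if perfect.any() else Cs[-1]

# Factor 2: penalize distance from C_mu (hypothetical form; the method
# only states that the weight decreases with this distance).
weights = train_acc / (1.0 + np.abs(np.log10(Cs / C_mu)))
weights /= weights.sum()

# Weighted average of the models' decision values on the test set.
weighted = np.average([m.decision_function(X_test) for m in models],
                      axis=0, weights=weights)
y_pred = (weighted > 0).astype(int)
```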

This method performed better than using a single regularization value. Averaging over the predictions removed noise, and it also helped reduce overfitting: overfit models tend to create very complex, wiggly boundaries between the classes in feature space, and averaging over the models yields a smoother, simpler boundary. Also worth noting, even though it is not in the paper, is that this method produced relatively consistent regularization values; the values were within a factor of two to three between cross-validation runs.

This study, “Using machine learning to classify temporal lobe epilepsy based on diffusion MRI,” was recently published by John Del Gaizo and Leonardo Bonilha in the journal Brain and Behavior.