Published by Daniel Lichtblau
Wolfram Research, Champaign, Illinois, United States of America
These findings are described in the article entitled Cancer diagnosis through tandem of classifiers for digitized histopathological slides, recently published in the journal PLoS One (2019). This work was conducted by Daniel Lichtblau from Wolfram Research and Catalin Stoean from the University of Craiova.
Tissue slides are frequently used to make medical diagnoses. One example involves H&E-stained slides used to assess the presence and grade/severity of cancer. Once the slides are available, they are typically evaluated by trained pathologists.
While this usually leads to appropriate diagnoses, there are several potential issues. One is that different pathologists might (and sometimes do) grade the same slide differently. Another is that the same pathologist might, on different occasions, grade the same slide differently (lighting, fatigue, etc. all play a role here). Yet another is that as diagnostic technology becomes less expensive and more widely used, there may be locales where samples can be prepared but there are not enough trained pathologists to process them. For all these reasons, automated diagnosis software is viewed as a way to lighten the workload and also to provide second (or third) opinions in a way that is unbiased (or, more correctly, tends to have different biases from those of human pathologists).
Prior literature has made good use of image processing and machine learning (ML) methods for automating diagnoses. A typical algorithm workflow involves the following steps (sketched in code after the list):
- Image segmentation and related methods for obtaining various “measures” from given images.
- Feeding many such measures into ML classifiers, with a view toward determining which features are “important” as predictors of the actual diagnosis.
- Further training of classifiers using the determined features.
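For concreteness, here is a minimal Python sketch of such a conventional pipeline using scikit-image and scikit-learn. The specific measures (object counts, size and shape statistics, stained-area fraction) and the random-forest feature ranking are illustrative assumptions, not the methods of any particular prior paper.

```python
# Minimal sketch of the conventional pipeline: segment structures, compute
# hand-crafted "measures", rank them with a classifier, then retrain on the
# top-ranked measures. Images are assumed to be RGB numpy arrays; grades are
# integer labels. The measures and the ranking step are illustrative only.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier


def slide_measures(image_rgb):
    """Segment dark (nucleus-like) regions and summarize them numerically."""
    gray = rgb2gray(image_rgb)
    mask = gray < threshold_otsu(gray)            # crude segmentation
    regions = regionprops(label(mask))
    areas = np.array([r.area for r in regions]) if regions else np.zeros(1)
    eccs = np.array([r.eccentricity for r in regions]) if regions else np.zeros(1)
    return np.array([len(regions),                # number of segmented objects
                     areas.mean(), areas.std(),   # size statistics
                     eccs.mean(),                 # shape statistic
                     mask.mean()])                # fraction of stained area


def fit_on_selected_features(images, grades, keep=3):
    X = np.array([slide_measures(im) for im in images])
    y = np.array(grades)
    # First pass: rank the measures by importance.
    ranker = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    top = np.argsort(ranker.feature_importances_)[::-1][:keep]
    # Second pass: retrain using only the top-ranked measures.
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, top], y)
    return clf, top
```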
Such methods tend to require considerable time, computational resources, and a good idea in advance of what set of image features might be useful.
Our approach is more direct. We let the ML classifiers determine relevant features, providing only training data in the form of slides and corresponding diagnoses (as determined by more than one pathologist, under careful conditions). In addition, we employ an unrelated method that grades an image by its proximity to “nearby” images of known grade (based on prior published work by the first author involving tandem use of Fourier and principal components methods).
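As a rough illustration of the proximity-based grader (a sketch of the general idea only; the descriptor size, PCA dimension, and neighbor count below are arbitrary assumptions, and the paper's own implementation may differ), one could reduce each image to low-frequency Fourier magnitudes, project with PCA, and grade by nearest neighbors of known grade:

```python
# Sketch of the general idea: grade an image by its proximity to images of
# known grade in a Fourier + PCA reduced space. Descriptor size, PCA
# dimension, and neighbor count are arbitrary illustrative choices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline


def fourier_vector(image_gray, keep=16):
    """Low-frequency Fourier magnitudes as a compact image descriptor."""
    spectrum = np.abs(np.fft.fft2(image_gray))
    return spectrum[:keep, :keep].flatten()       # lowest frequencies only


def proximity_grader(train_images, train_grades, n_components=20, k=5):
    """Fit a nearest-neighbor grader on PCA-reduced Fourier descriptors."""
    X = np.array([fourier_vector(im) for im in train_images])
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=k))
    model.fit(X, np.array(train_grades))
    return model    # model.predict(...) grades new descriptor vectors
```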
The benefit is that this method tends to correlate more loosely with the ML classifiers than they correlate with one another; loosely speaking, it makes its mistakes in different places, and thus serves to offset incorrect grading from the standard ML approaches. Further along these lines, we then use a validation method to create an ensemble weighting over the multiple classifiers. We also provide a confidence measure: using thresholds on the overall probabilities, we can assess the reliability of a given diagnosis. For the main data set in this study, roughly 70% of the diagnoses turn out to be quite trustworthy (and correct), with most if not all errors occurring in the rest.
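A hedged sketch of how such an ensemble weighting and confidence threshold could look in Python (the accuracy-proportional weights and the 0.8 threshold are illustrative assumptions, not the paper's exact scheme):

```python
# Illustrative ensemble: weight each trained classifier by its accuracy on a
# held-out validation set, average the weighted class probabilities, and
# accept a diagnosis as "trustworthy" only above a probability threshold.
# The proportional weights and the 0.8 threshold are assumptions, not the
# paper's exact scheme.
import numpy as np


def ensemble_weights(classifiers, X_val, y_val):
    """Weights proportional to validation accuracy."""
    accs = np.array([np.mean(c.predict(X_val) == y_val) for c in classifiers])
    return accs / accs.sum()


def ensemble_diagnose(classifiers, weights, X, threshold=0.8):
    """Weighted probability vote plus a per-sample confidence flag."""
    probs = sum(w * c.predict_proba(X) for c, w in zip(classifiers, weights))
    grades = probs.argmax(axis=1)
    trustworthy = probs.max(axis=1) >= threshold
    return grades, trustworthy
```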
The main novelty in this work is the methodology for creating an ensemble score. Our approach is shown to be competitive with more computationally demanding processing methods, as assessed on three benchmark data sets. A second aspect, of independent interest, is that the benchmark tests cover two different cancer types (colorectal and breast), thus giving some confidence that the methodology might be extensible.
There are some future directions under consideration. We first note that H&E tissue images can pose certain difficulties for machine learning methods. One is that results should be independent of slide orientation. Another is that different levels of coloration might be due to different lab set-ups or lighting differences in creating electronic images from actual slides. A third is that tissue inhomogeneities sometimes arise from boundaries with unrelated tissue rather than benign/malignancy borders. Possible future experiments involve color deconvolution (to offset the effect of inter-lab differences), and use of images averaged over rotations to minimize the impact of both orientation and tissue inhomogeneities. Also, we might extend to a different cancer type, such as leukemia, for which there exists a large benchmark set of stained slide images.
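To make the two preprocessing ideas concrete, here is a small Python sketch using scikit-image; the choice of the hematoxylin channel and of the rotation angles is an assumption for illustration, not a description of experiments already performed.

```python
# Sketch of the two preprocessing ideas: (1) stain/color deconvolution to
# reduce inter-lab coloration differences, (2) averaging a descriptor over
# rotated copies of an image to reduce orientation dependence. The choice of
# the hematoxylin channel and of the rotation angles is illustrative.
import numpy as np
from skimage.color import rgb2hed          # H&E (+DAB) color deconvolution
from skimage.transform import rotate


def hematoxylin_channel(image_rgb):
    """Isolate the hematoxylin stain component of an H&E image."""
    return rgb2hed(image_rgb)[..., 0]


def rotation_averaged(image, descriptor, angles=(0, 90, 180, 270)):
    """Average a numeric descriptor over rotated copies of the image."""
    return np.mean([descriptor(rotate(image, angle, preserve_range=True))
                    for angle in angles], axis=0)
```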