Black Box, White Coat: Rise of the Machines in Medicine

Machine learning (ML) and its parent discipline, artificial intelligence (AI), hold the potential to transform the way we live and work. For many, AI remains a black box of computer technologies and jargon. But it’s really as simple as A to B! Artificial neural networks (ANNs) are layers of mathematical algorithms – built from calculus and vector operations and implemented in software – that solve problems. Humans input data (A), which passes through multiple algorithm layers to generate outputs (B): computer solutions (A → B) to problems that humans can easily solve.
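
To make the A → B idea concrete, here is a minimal sketch of a forward pass in Python (numpy only); the layer sizes, weights, and input values are invented for illustration and come from no real medical model:

```python
import numpy as np

# A minimal feed-forward network: input data A passes through two layers
# of weighted sums and nonlinearities to produce an output B.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))    # layer 1 weights: 3 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))    # layer 2 weights: 4 hidden units -> 1 output
b2 = np.zeros(1)

def forward(A):
    """Map input data A to an output B (the A -> B of the text)."""
    hidden = np.maximum(0.0, W1 @ A + b1)             # ReLU nonlinearity
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))  # sigmoid output in (0, 1)

A = np.array([0.2, -1.0, 0.5])  # an invented 3-number input
B = forward(A)                  # the network's answer to the problem
print(B)
```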

Machines learn using training datasets – for example, thousands of pictures of different cats and airplanes – that move through the hidden layers of an ANN under expert human supervision (i.e., supervised ML). The training continues until the ANN can distinguish a picture of a cat from one of an airplane with a low error rate for predicting the correct outcome (i.e., high predictive accuracy). ML is just one discipline in the universe of computational sciences (see AJM 2017, Figure 1).
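
Here is a toy sketch of that supervised training loop, assuming two invented numeric “features” per example in place of real cat and airplane pixels:

```python
import numpy as np

# Toy supervised learning: labeled examples are shown to the model
# repeatedly, and its weights are nudged to reduce the prediction error.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))               # stand-in picture features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # label: 1 = "cat", 0 = "airplane"

w, b = np.zeros(2), 0.0
for epoch in range(500):                    # repeated passes over the data
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probability of "cat"
    w -= 0.5 * X.T @ (p - y) / len(y)       # gradient step on the weights
    b -= 0.5 * np.mean(p - y)               # gradient step on the bias

p = 1 / (1 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))   # low error rate
```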

In the late 2000s, deep learning (DL) took advantage of ultrafast gaming-computer hardware (i.e., NVIDIA™ graphics processing units, GPUs) to process massive training datasets and analyze unstructured data – images, voices, videos, etc. – without direct human supervision. DL’s capacity for unsupervised learning – the algorithms working directly with the raw data without much initial dataset preparation – rests on automatic feature learning: rather than relying on expert hand-crafted feature engineering, the algorithms discover the key features and distinguishing elements of a dataset themselves (i.e., self-teaching), without the need for any expert intervention (i.e., autonomous). DL capabilities are now widely used for knowledge discovery and data mining (KDD) by companies with large, diverse data troves, in order to reveal key business insights.
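
A minimal sketch of that self-teaching idea, assuming a toy linear autoencoder on invented, unlabeled data (real DL systems stack many nonlinear layers):

```python
import numpy as np

# Unsupervised feature learning: a tiny autoencoder learns to compress raw,
# unlabeled data into 3 self-discovered "features" and reconstruct the
# input -- no labels and no hand-engineered features are involved.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))               # raw unlabeled data, 8 values each

W_enc = rng.normal(scale=0.1, size=(8, 3))  # encoder: 8 raw inputs -> 3 features
W_dec = rng.normal(scale=0.1, size=(3, 8))  # decoder: 3 features -> 8 outputs

for step in range(2000):
    Z = X @ W_enc                           # the learned features (bottleneck)
    err = Z @ W_dec - X                     # reconstruction error drives learning
    W_dec -= 0.001 * Z.T @ err / len(X)                  # nudge decoder weights
    W_enc -= 0.001 * X.T @ (err @ W_dec.T) / len(X)      # nudge encoder weights
```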

Early AI Applications in Medicine

Over the past five years, multiple reports from diverse AI-medicine collaborations reproducibly show that combined human expertise plus machine tasking improves system performance over either approach alone.

Most of this literature involves the use of ML or DL for medical image analysis:

  • Radiology – digital scan “interpretation” (CXR, CT, MR, mammography) for cancer, non-malignant lung disease, depression; virtual image generation
  • Histopathology – distinguishing skin or breast cancer from benign disease
  • Retinal photography – diagnosing diabetic retinopathy (using the FDA-approved IDx-DR™) or macular degeneration

As some big AI firms have discovered, non-imaging AI is the hardest AI, largely because these highly diverse structured and unstructured data arrays are less easily classified by standard neural networks. Patient data processing applications of natural language processing (NLP), an AI discipline, can “read” (a toy sketch follows the list below):

  • Electronic medical records – for EMR integration and compression
  • Health insurance databases – for actuarial risk profiling
  • Administrative health databases – for population health trending
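
The toy sketch promised above: pulling simple structured facts out of an invented clinical note with naive keyword matching. Real clinical NLP is far more sophisticated – note how this sketch wrongly flags “fever” even though the note denies it, a classic negation pitfall:

```python
import re

# Toy NLP "reading" of free clinical text: match an invented mini-vocabulary
# of medical terms against an invented note.
note = "Pt c/o chest pain x2 days. Hx: type 2 diabetes, HTN. Denies fever."

vocabulary = {
    "chest pain": "symptom",
    "type 2 diabetes": "condition",
    "HTN": "condition",
    "fever": "symptom",   # matched despite "Denies fever" -- negation is hard
}

findings = {term: kind for term, kind in vocabulary.items()
            if re.search(re.escape(term), note, re.IGNORECASE)}
print(findings)
```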

A good deal of research and development (R&D) activity is focused on virtual medicine, where AI products have been introduced but still await commercial success:

  • Patient-AI Q&A (“Chatbot”) Apps – symptom self-diagnosis (Ada™), urgent care triaging (Dialogue™), mental illness empathy/therapy (Woebot™)
  • Internet-of-things (IoT) Wellness – wearable biometric device data flowing to the Cloud (sleep disorder management/Knit Health™, weather-smart asthma inhalers from IBM Watson & Teva Pharma)
  • Senior Care – medical monitoring & family intervention (video home monitoring/CarePredict™)
  • Clinical Decision Support – clinical practice guideline-based specialty care recommendations for primary care providers in underserved areas

Economic Considerations of AI Applications

New technology insertion is often difficult, and always costly. Just where will AI technology ‘fit’ in a healthcare ecosystem characterized by population health management strategies intended to reduce rising costs, improve quality, and reimburse value? Historically, the costs of newer and/or more advanced technologies have been passed on to health insurers (public or private), and then partially defrayed by patient premiums/copays and healthcare delivery systems.

Managing the costs of care is often achieved by limiting access and/or by reducing expensive high-complexity acute hospital care for patients with chronic diseases. However, the majority of healthcare costs are related to people – providers and administrative personnel – who might carry out some human tasks more efficiently and effectively when augmented by intelligent machines. Human–machine interactions enabled by AI could benefit healthcare by:

  • Reducing costly medical errors (e.g., misdiagnosis, mistreatment) and cognitive bias in clinical decision-making
  • Accelerating new drug discovery, ‘old’ drug repurposing, and medicinal chemistry (e.g., selecting candidate compounds, antibodies, etc.)
  • Improving hospital employee compliance with value-based reimbursement rules (e.g., bot calls, reminder texts, etc.)
  • Addressing health insurance complexity – helping patients select the right insurance or drug benefit plan (e.g., by chatting with virtual assistant bots); improving claims management (e.g., eligibility, EOB, payments)

Major healthcare sector companies are investing heavily in (building or buying) DL capabilities and products for business. Their AI-related R&D and merger-and-acquisition costs are not generating profits (yet). Moreover, the insertion of any new technology into healthcare has typically added to the direct costs of patient care (especially when over-utilized), to the indirect costs of care (e.g., added documentation time), and to process management complexity. AI technology insertion challenges, including healthcare workforce up-skilling, remain unsolved.

Getting Beyond the Hype

Precision medicine (PM) is real. PM uses an individual patient’s genomic and other ‘omics profiles to predict disease occurrence and progression. PM is a data-dense field that lends itself to big data science’s analytic capabilities. Next-generation sequencing (NGS) of genes and exomes rapidly generates massive datasets of point mutations and poly-genomic gene expression profiles that DL could model for:

  • Defining gene ontologies
  • Predicting gene-function relationships

While such AI capabilities have been touted as a way to precisely manage rare genetic diseases and serious recurrent cancers, this AI application has so far failed to impress most oncologists, or to augment the standard of cancer care.

The capacity to reduce big data diversity (i.e., its dimensionality – sketched in code after the list below) gives DL the future potential to generate models that predict an individual’s:

  • Suitability for a clinical research trial
  • Hospital-based microbial resistance to antibiotics
  • Drug response and susceptibility to drug toxicity
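
One standard route to that dimensionality reduction is principal component analysis; a minimal sketch, assuming a random stand-in for a patients-by-genes expression matrix:

```python
import numpy as np

# Dimensionality reduction: project a wide "omics-style" matrix onto its top
# principal components, leaving each patient a handful of features that a
# downstream predictive model could use.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5000))        # 100 patients x 5,000 gene measurements

X_centered = X - X.mean(axis=0)         # center each gene across patients
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_reduced = X_centered @ Vt[:10].T      # each patient reduced to 10 features
print(X_reduced.shape)                  # (100, 10)
```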

While these and other applications remain works in progress, PM and AI will surely intersect in the next decade.

AI’s Next Wave 

The first two waves of AI – handcrafted knowledge and statistical learning – successfully tackled the easy stuff. As a result, select humans now enjoy riding in autonomous vehicles, using facial recognition software, watching game-winning bots (IBM Watson on Jeopardy! and Google DeepMind’s AlphaGo at Go), and reading documents with NLP.

The cresting wave of AI – predictive analytics – is an even more potent DL technology that could benefit many humans. Predictive analytic AIs can model data solutions forward, like modeling a hurricane’s landfall from East Atlantic weather buoy signals and early satellite imagery. Promising DL research can project the trajectory of a chronic disease from continuously generated wearable device data and prior-year EMR entries. Generative AI models can explain human decisions and even shape human behaviors… like predicting, from your other data, whether you will draw the number 4 using 1, 2, or 3 pen strokes.
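
A minimal sketch of that forward-modeling idea, assuming simulated daily readings in place of real wearable-device data (real disease-trajectory models are far richer than a straight line):

```python
import numpy as np

# "Modeling forward": fit a trend to 90 days of past readings and project
# it 30 days into the future.
rng = np.random.default_rng(4)
days = np.arange(90)
readings = 100 + 0.3 * days + rng.normal(scale=2.0, size=90)  # simulated signal

slope, intercept = np.polyfit(days, readings, deg=1)  # fit a linear trend
future_days = np.arange(90, 120)
forecast = intercept + slope * future_days            # the projected trajectory
print(forecast[:3])
```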

The future “third wave” of AI – contextual adaptation – is gathering. Contextual adaptation takes AI capabilities much closer to emulating the human mind. It generates explanatory models for classes of real-world phenomena. Machine responses mimic very human traits. When Siri can confidently reply, “I know whether to trust you”… “I know the best way to help you”… contextual adaptation will have arrived.

How will such powerful future AI technologies empower more humans?

  • Patients asking for help making decisions on a diagnostic test or treatment [“companion analytics”]
  • Learners asking for help making the best professional career decisions, informed by AI profiling of their individual aptitudes
  • People who are already learning about their ancestry and genetic makeup actually changing their health & wellness behaviors in response to cues based on AI-predicted risks and outcomes

The AI Talent Pool

Where is the AI talent pool coming from? Where are the source experts who can see the future transformation of their field by AI? In a world blurred by the hype surrounding what AI actually can and can’t do, the expectation gap between system needs and technology fixes is widening.

The first good questions that very smart computer scientists ask experts are, “What is your biggest problem?” and “What would you ‘fix’ if you could?” There are currently many problems and few real fixes in healthcare. And these are very tough questions for experts to answer, especially if they possess limited knowledge of what a high technology (like AI) is actually capable of doing.

So, for intelligent machines and smart humans to effectively connect, the experts who might determine the best future uses for AI need to learn more about how the technology works – and how it doesn’t – right now. There is an urgent need for “precision education”: digital high-technology up-skilling for health professionals working with powerful AIs in the precision medicine era.

The White Coat 

A doctor’s touch remains very reassuring to patients and is the essence of humanistic medical practice. As intelligent machines improve, humans rightly worry about the impact of AI on the patient-doctor relationship. The widely adverse experience with lower-technology EMR use in the clinic is a case study in patient-provider disintermediation.

A few medical specialists (e.g., radiologists, pathologists) are worried, sensing that AI’s facility with digital imaging big data may displace them from their jobs. But if AI technology can also address clinical workplace “pain points” for most healthcare providers, and give doctors more time with their patients, then that is an overall good outcome for the many.

Patients already rely on their smart devices for fingertip medical information (“I’ll Google it…”). If that easy information access can be enhanced or personalized by medical AI apps (beyond Amazon Alexa, Google Assistant, Apple Siri, Microsoft Cortana), then this higher technology could provide helpful guidance and safely prevent unnecessary doctor calls and emergency room visits.

Big data analytics already impacts complex systems of care delivery, health insurance, and population medicine resourcing. These systemic data science applications inform health policies, insurance coverage regimes, clinical operations, and budget allocations. The rising cost curves associated with the 30+ year expansion of administrative healthcare jobs could be “bent” by AI.

In the near future, the marriage of data science and AI technologies will no doubt transform the patient-doctor relationship. It’s difficult to speculate just how. But because AI is neither astute nor intuitive, humans will remain essential to the intelligent use of high technology in medicine.

These findings are described in the article entitled Artificial Intelligence in Medical Practice: The Question to the Answer?, recently published in the American Journal of Medicine. This work was conducted by D. Douglas Miller from New York Medical College and Eric W. Brown from IBM Watson Health.