AI to the Rescue
How artificial intelligence can help stave off a looming health care crisis
November 2020
Photograph: Rick Cruz
America is facing a health care crisis driven largely by its aging population. Physician shortages have come to the forefront recently as many hospitals are overwhelmed by the COVID-19 pandemic, but the looming physician shortage has been a generation in the making, as baby boomer doctors retire in droves. At the same time, lifespans are increasing, leaving fewer doctors to treat more patients. Exacerbating the problem, medical schools cannot graduate new physicians fast enough due to capacity constraints, and it takes 12 to 15 years to train a doctor. Today, more than half of active physicians are older than 55, and the Association of American Medical Colleges projects a shortfall of 122,000 doctors in the United States by 2032.
Fortunately, technological advancement continues to improve health care across the globe, and it may offer a remedy for the looming physician shortage. Artificial intelligence (AI), already a common phrase in our connected society, holds particular promise for health systems in the United States, both now and in the future. AI may be defined as a set of mathematical algorithms (usually paired with a learning heuristic) that look for patterns in data and produce a structured model that yields insights. In this article, we will review the origins of AI, explain the different types of AI and their current impacts on health care, and explore where AI may be headed. We will address how physicians, providers, insurers and pharmaceutical companies increasingly will rely on AI to be more productive in the broader mission of diagnosing and treating patients more effectively.
AI Origins
The idea of AI is far from new. Concepts of human-built machines with designed intelligence date back to ancient Greece and the character Talos,1 an automaton made for Minos by Hephaestus. In more modern times, writers like Isaac Asimov envisioned a world where AI robots directly affected human life and were governed by the “Three Laws of Robotics.” But AI as a field of research arguably began in 1956, when a prominent group of scientists met at Dartmouth to discuss how machines could learn and mimic intelligence.
While there was an eye to the future back then, AI now touches our lives regularly. It drives cars, predicts the TV shows and movies we might like and determines the marketing that appears in our mailboxes. It also is finding its way into the health care arena, changing the way both providers and consumers interact with the health care system. As a society, we turned a corner around 2013, when large internet companies started investing heavily to make search and ad placement much smarter, which in turn opened up many research opportunities in AI.
While technology is pervasive in health care, AI adoption is still in its infancy. But it is growing rapidly. A recent study by Capgemini2 showed that life sciences is a leading sector in AI investment and maturity, driven by pharmaceutical companies. The study found that 27 percent of life science companies have deployed AI-based solutions in their production flows. On the other hand, the insurance industry is toward the bottom end of the spectrum, with only 6 percent having achieved a production version of AI.
Types of AI
What exactly is modern-day AI, and how does it work? At its core, AI can be thought of as a set of pattern-matching algorithms that learn to predict an expected outcome. It starts with basic regression analysis and extends to more advanced concepts like neural networks, ensemble learning and natural language processing. In essence, a computer can “learn” from its mistakes, much like a human.
Each of the different AI technologies has different abilities and potential impacts. One of the earliest and most fundamental algorithms is the neural network, which underpins today’s “deep learning.” This AI is modeled on the structure of the human brain, with nodes representing neurons arranged in sequential layers. A node may “fire” (return a 1 instead of a 0) if its input exceeds a certain threshold, and that output is fed, along with a weight, to the next layer of the network. Training uses a feedback loop called backpropagation, in which the network compares its output to the correct answer and adjusts the weights on the connections between nodes to reduce the error. After a number of loops, the computer gets good at classifying inputs and returning the right answer.
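To make the feedback loop concrete, the following sketch trains a tiny neural network with backpropagation. It is not taken from the article; the NumPy implementation, the XOR toy data, the layer sizes and the learning rate are all illustrative assumptions.

```python
# A tiny neural network trained with backpropagation, using only NumPy.
# Toy problem and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR, which a single layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 nodes feeding a single output node.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: each layer weights its inputs and "fires" via the sigmoid.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backpropagation: push the output error back through the layers
    # and nudge every weight in the direction that reduces the error.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

print(np.round(output, 2))  # should approach [0, 1, 1, 0] after many loops
```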
Machine learning is the process by which a computer system is trained (in either a supervised or an unsupervised manner) to recognize specific conditions or items. Training the system in a supervised manner means the computer is taught from a specific set of examples where the target variable is known and the observations are annotated with the correct classification. This allows the computer to learn the pattern between predictors and targets.
A common example of supervised machine learning is facial recognition in smartphones, which matches and groups similar-looking faces. The phones learn to correct their errors through repeated human interaction, such as when we are asked to confirm additional photos, thereby training the machine to recognize individual faces. Unsupervised learning, on the other hand, is where a computer is fed massive amounts of data without any explicit target, and the algorithm identifies patterns on its own to group “similar” records together.
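The contrast can be shown in a few lines of code. This sketch uses scikit-learn with synthetic data; the data set and model choices are illustrative assumptions rather than anything drawn from a real health care system.

```python
# Supervised vs. unsupervised learning in scikit-learn on synthetic data.
# The data set and model choices are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic records: 300 observations, 2 predictors, 3 underlying groups.
X, labels = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the target is known, so the model learns the pattern
# between predictors and targets.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("Supervised accuracy:", round(clf.score(X, labels), 3))

# Unsupervised: the same records with no target at all; the algorithm
# groups "similar" records together on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```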
Natural language processing (NLP), the process by which computers interact with humans through language, is emerging as a core function of AI. AI is driving the ability of computers to understand and respond appropriately in multiple languages, a now commonplace phenomenon. For example, Google’s translation service is capable of real-time language translation. NLP can work from written text or the spoken word.
The next frontier for NLP in health care is the digitization and processing of electronic health records (EHRs). The doctor’s notes in an EHR contain a wealth of clinical information describing a patient’s current disease state and possible future risk factors. The problem is translating that information from unstructured notes into a machine-readable format. State-of-the-art deep learning algorithms that process written words can identify the connection between coughing and shortness of breath in the phrase “patient coughs, wakes up frequently during the night and complains of shortness of breath.” They can interpret the implied severity of the condition and suggest appropriate diagnosis codes and evidence-based courses of treatment. The output can guide a doctor’s expert opinion in a clinical setting by putting exponentially more information at the clinician’s fingertips.
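Production systems rely on deep learning, but a deliberately simplified sketch can show what “machine-readable” means here. The keyword patterns and symptom list below are illustrative assumptions, not a clinical NLP model.

```python
# A deliberately simplified stand-in for clinical NLP: turning an
# unstructured note into machine-readable symptom flags with keyword
# patterns. The patterns and symptom list are illustrative assumptions.
import re

SYMPTOM_PATTERNS = {
    "cough": r"\bcough(s|ing)?\b",
    "shortness_of_breath": r"shortness of breath|\bdyspnea\b",
    "sleep_disturbance": r"wakes up frequently|\binsomnia\b",
}

def extract_symptoms(note: str) -> dict:
    """Return a structured record of which symptoms the note mentions."""
    text = note.lower()
    return {name: bool(re.search(pattern, text))
            for name, pattern in SYMPTOM_PATTERNS.items()}

note = ("Patient coughs, wakes up frequently during the night "
        "and complains of shortness of breath.")
print(extract_symptoms(note))
# {'cough': True, 'shortness_of_breath': True, 'sleep_disturbance': True}
```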
The increased digitization and interoperability enabled by new standards such as Fast Healthcare Interoperability Resources (FHIR) will enhance the capabilities of EHR information around diagnosis, treatment, prescription and social determinants of health. Records can be mined to create patient profiles, predict the course of disease and recommend potential treatments. As this information flows more freely among insurance companies, hospitals and doctors’ offices, it allows for automated preauthorization of services, detection of potentially harmful drug interactions and better holistic patient care. Patient clinical profiles also can be studied to ensure the diagnoses reported to the Centers for Medicare & Medicaid Services (CMS) are consistent with the true disease profile of the patient. Fully integrated EHRs will be a new frontier for predictive models that provide new applications for the parsing and interpretation of natural language.
AI’s Impact and Future Potential
AI is already impacting health care in the United States, but its future impact will likely be much greater. All those who participate in delivering, receiving and paying for care will experience changes. AI will impact health care in the areas of clinical practice, operations and clinical decision support that helps to drive patient treatments.
The operational side focuses on payment, processing and interactions for care recipients who deal with the payers. Call centers are one area where investments in AI are being made to enhance interactions with patients. While today’s interactive voice response (IVR) systems can respond to questions and follow simple decision trees, an AI-driven call may be a very different experience. Multiple patents have been filed by organizations like Google to drive more AI into the IVR system. The intent is for AI systems to read not just the words the caller speaks, but also the caller’s tone and stress.
With the help of other data sources, such as customer relationship management (CRM) data, customer segmentation and health records, AI can infer the intent of the customer. AI also can authenticate the voice, so there is no need for prompts to provide personal information that may annoy callers.
Imagine a call center interaction where an upset customer is calling for a status update on an appeal. AI allows the phone system to discern both the subject of the call and the tone of the caller. With this information, organizations immediately can escalate the call to a supervisor in appeals, who is alerted to the caller’s mood and topic before saying hello. This sort of quick routing to the right person at the right time drives down costs and simplifies the process, making for happier customers.
This technology is not just limited to problem resolution—it also may be incorporated into other technologies like telehealth. Having an AI interaction discern a customer’s tone and mood helps prioritize the urgency of a response and engage the right specialist upfront, before the human interaction begins.
Operational usage of AI also has promise in the areas of health care fraud, waste and abuse. According to the Federal Bureau of Investigation (FBI), fraud, waste and abuse cost the U.S. health care system tens of billions of dollars each year. Health care transactions are becoming more complex, and the volume of data is increasing. Humans simply cannot match machines in reviewing the massive amounts of data being generated to identify fraud.
When it comes to analyzing large data sets, machines outperform humans, especially where computation at scale makes a difference; human grandmasters in chess, for example, can no longer beat a computer. Machine learning, in both a supervised and an unsupervised manner, can search through large stores of claims data to find and flag unusual patterns. In the supervised approach, AI is taught how known fraud, waste and abuse schemes are perpetrated, so it can efficiently monitor and analyze big data sets. However, those wanting to cheat the system adapt once their tactics stop working. That is where the unsupervised side of machine learning comes into play, identifying potential red flags that require further investigation even when the scheme has not been seen before.
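As a simple illustration of the unsupervised side, the sketch below flags outlying claims with an isolation forest from scikit-learn. The synthetic claim features and contamination rate are assumptions made for illustration, not a production fraud model.

```python
# Unsupervised anomaly detection on synthetic claims with an isolation
# forest. The claim features and contamination rate are illustrative
# assumptions, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic claims: [billed amount, procedures per claim, days since last visit].
normal_claims = rng.normal(loc=[200, 2, 30], scale=[50, 1, 10], size=(1000, 3))
odd_claims = rng.normal(loc=[5000, 15, 1], scale=[500, 3, 1], size=(10, 3))
claims = np.vstack([normal_claims, odd_claims])

# Fit with no labels; the model isolates records that look unlike the rest.
model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 marks a claim flagged for investigation

print("Claims flagged for review:", np.where(flags == -1)[0])
```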
One promising AI algorithm at the frontier of fraud detection is the generative adversarial network (GAN). This technique pits two neural networks, a generator and a discriminator, against each other. The generator tries to create the most realistic possible fraudulent claim, and the discriminator receives a stream of actual claims interspersed with the fraudulent ones created by the generator. The learning algorithm trains the discriminator to detect which claims are fraudulent while the generator learns to evade detection. Because each network iteratively learns from the other, over the course of thousands of rounds they become able to produce (and detect) very accurate “counterfeits.” These models also have been applied to facial imagery, producing surprisingly accurate false profiles that can fool even the human eye.
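A compact sketch of the GAN idea applied to tabular “claims” follows. The PyTorch architecture, synthetic data and training length are illustrative assumptions; a real fraud-detection GAN would be considerably more elaborate.

```python
# A compact generative adversarial network (GAN) on synthetic tabular
# "claims" in PyTorch. Architecture, data and training length are
# illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_FEATURES, NOISE_DIM, BATCH = 4, 8, 64

# Generator: random noise -> a synthetic claim record.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, N_FEATURES)
)
# Discriminator: a claim record -> probability that it is genuine.
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

# Stand-in for a batch of genuine claims (normally drawn from real data).
real_claims = torch.randn(BATCH, N_FEATURES) * 0.5 + 1.0

for step in range(2000):
    # Train the discriminator to separate genuine claims from generated ones.
    fake_claims = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (bce(discriminator(real_claims), torch.ones(BATCH, 1))
              + bce(discriminator(fake_claims), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce claims the discriminator accepts as genuine.
    fake_claims = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(discriminator(fake_claims), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```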
AI already is augmenting physicians with real-time clinical decision support capabilities. Perhaps the most important and prevalent use of AI in health care today is analyzing the images produced from radiology. A study from New York University in 2011 concluded that AI could identify nodules between 62 percent and 97 percent faster than a human radiologist. Another clinical study determined that when AI and pathologists individually judged whether a lymph node contained cancer cells, there were error rates of 7.5 percent versus 3.5 percent, respectively. But when AI was used to augment the pathologist’s decision process, the error rate went down to just 0.5 percent, vastly improving diagnosis accuracy.
Imaging equipment has surpassed the human eye’s ability to see the detailed patterns that indicate abnormality. But available and emerging technology doesn’t replace human doctors; it helps them diagnose patients quickly and accurately. Areas where the volume of data is beyond a human’s ability to process are strong candidates for AI, such as predicting the likelihood that a specific cancer treatment, like immunotherapy, will succeed by using genetic markers.3 The idea is to use AI to analyze genetic markers, tumor traits and other data to determine the likely result of a specific treatment. If successful, this type of AI will save money that would otherwise be spent on ineffective treatments and, more important, help direct the best course of care for the patient.
The pharmaceutical industry is embracing predictive modeling techniques to make clinical trials more efficient and effective, and to guide basic research. Drug companies are hiring data science teams and instructing them to look at “real-world evidence” data in addition to large databases of information on genes, biomarkers and their interactions to leverage new predictive modeling tools and techniques.
With respect to drug discovery, pharmaceutical companies have learned they can spend less money screening out ineffective or potentially harmful compounds by using predictive models and large databases of basic scientific research combined with insurance industry claims and disease profiles. Such algorithms also can help identify potential new compounds based on their genetic signature. These databases can identify potential new uses for already approved drugs by using knowledge of biological systems and how they interact. The pharmaceutical industry also is using this data to weed out potentially harmful drug interactions for populations that are at high risk. These critical applications explain why pharmaceutical companies are hiring data scientists at a rapid rate.
CMS also is incorporating predictive modeling into its assessment of risk for individuals who are eligible for Medicare, as well as individuals and small groups enrolling in health plans under the Affordable Care Act (ACA). CMS has a strong interest in modeling both the prospective and concurrent morbidity risk of individuals, as it must ensure that Medicare Advantage and ACA insurance plans are compensated properly for the risk they insure, and that insurance companies are discouraged from designing plans that only attract healthy members. An accurate assessment of health risk is essential to that purpose, and CMS has shown a willingness to embrace new statistical methods to measure that risk. At present, CMS is using standard “least squares” regression models, but it is likely that accelerating costs and rapid change in the market will drive the adoption of newer modeling techniques like random forests and gradient boosted trees to better assess comorbidities, capture behavioral characteristics driving plan selection and increase accuracy in relative risk prediction.
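As a rough illustration of why tree-based methods can help, the sketch below compares a least squares regression with a gradient boosted tree model on synthetic member data containing a comorbidity interaction. The features, cost formula and data are assumptions made for the example, not CMS methodology.

```python
# A least squares regression versus a gradient boosted tree model on
# synthetic member data. Features, cost formula and data are assumptions
# made for illustration, not CMS methodology.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Synthetic members: age plus two condition flags.
age = rng.integers(18, 90, n)
diabetes = rng.integers(0, 2, n)
heart_failure = rng.integers(0, 2, n)
X = np.column_stack([age, diabetes, heart_failure])

# Assumed annual cost with a comorbidity interaction a purely additive
# least squares model cannot capture.
cost = (50 * age + 4000 * diabetes + 7000 * heart_failure
        + 6000 * diabetes * heart_failure + rng.normal(0, 2000, n))

X_train, X_test, y_train, y_test = train_test_split(X, cost, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("Least squares R^2:", round(linear.score(X_test, y_test), 3))
print("Boosted trees R^2:", round(boosted.score(X_test, y_test), 3))
```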
Another emerging application of AI is in predicting the readmission risk of patients to the hospital within 30 days of discharge. Particularly to protect Medicare members’ long-term health outcomes, CMS has put financial incentives in place to deter unnecessary readmissions.
There are many potential drivers of readmission risk, such as comorbidities, the prescribed drug regimen and past inpatient and outpatient facility claims. It is critical to identify at discharge which patients have the highest risk of readmission, so the proper level of nursing care can intervene to avert these adverse outcomes. New predictive models forecast readmission risk accurately by identifying the driving factors, and the interactions among those factors, that lead to higher risk, such as whether the patient is discharged to a skilled nursing facility or to home health care, and which drugs are prescribed upon discharge. Individuals identified as high risk can be stratified in terms of risk and potential adverse outcome and then targeted for a manual intervention such as a phone call, home visit or telehealth consultation.
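A minimal sketch of such a readmission model appears below. The features, the synthetic relationship between drivers and readmission, and the outreach cutoff are all illustrative assumptions.

```python
# A 30-day readmission risk model on synthetic discharges. The features,
# the assumed relationship to readmission and the outreach cutoff are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 4000

# Features: comorbidities, discharge drug count, prior admissions,
# discharged to a skilled nursing facility (0/1).
X = np.column_stack([
    rng.poisson(2, n),
    rng.poisson(5, n),
    rng.poisson(1, n),
    rng.integers(0, 2, n),
])

# Assumed readmission probability rises with each driver.
logit = -3 + 0.4 * X[:, 0] + 0.15 * X[:, 1] + 0.5 * X[:, 2] + 0.6 * X[:, 3]
readmitted = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier(random_state=0).fit(X, readmitted)

# Stratify patients at discharge; the highest-risk group is targeted for
# a phone call, home visit or telehealth consultation.
risk = model.predict_proba(X)[:, 1]
outreach_list = np.argsort(risk)[-100:]
print("Patients flagged for outreach:", len(outreach_list))
```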
The financing of health care also is being revolutionized by new predictive models of group risk that allow for more accurate stratification of groups and pricing of health insurance policies. State-of-the-art predictive models, as well as ensemble learning methods (which combine, either sequentially or simultaneously, a collection of weaker models to form a strong predictor), allow insurers to distinguish among subpopulations and price groups according to the true health claims risk they bring to the block. Furthermore, a more precise understanding of group risk allows the provision for adverse deviation in the premium to be allocated in proportion to each group’s inherent volatility.
Predictive models enable the construction of new estimates, which can be combined with traditional ones based on prior claim history, block-level results and underwriting judgment. AI meta-models also can help balance and weight these estimates of prospective risk according to their expected accuracy and variability. More accurate claims forecasting will make the market for health insurance more efficient, and competition will push all insurers to improve their morbidity prediction algorithms so as not to be selected against in the market.
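The following sketch shows one way a meta-model could blend several estimates of prospective group risk, weighting each by its accuracy. The base estimates, their error levels and the use of ridge regression are illustrative assumptions, not a production rating approach.

```python
# A meta-model that blends several estimates of prospective group risk.
# The base estimates, their error levels and the ridge blender are
# illustrative assumptions, not a production rating approach.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_groups = 500

true_risk = rng.gamma(shape=2.0, scale=1.0, size=n_groups)

# Three imperfect views of the same risk: prior claim history,
# block-level results and a predictive model score.
prior_claims = true_risk + rng.normal(0, 0.6, n_groups)
block_results = true_risk + rng.normal(0, 0.9, n_groups)
model_score = true_risk + rng.normal(0, 0.3, n_groups)
estimates = np.column_stack([prior_claims, block_results, model_score])

# The meta-model learns how much weight each estimate deserves,
# reflecting its accuracy and variability.
meta = Ridge(alpha=1.0).fit(estimates, true_risk)
print("Learned weights:", np.round(meta.coef_, 2))
print("Blended R^2:", round(meta.score(estimates, true_risk), 3))
```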
Other areas where AI can make a difference are in remote patient monitoring (RPM) and surgical procedures. AI sensors can detect patient movements to ensure nursing care patients are getting enough exercise, or they can identify if a patient has fallen and needs help (and can go even further to help determine the root cause to prevent future falls). These sensors also can detect whether a patient has taken their medication as prescribed, and future developments will monitor the concentration of needed pharmaceutical agents in the bloodstream.
AI Can Solve Pressing Problems
The COVID-19 pandemic has accelerated the use of predictive models in health care. Because cellphones with GPS capabilities are nearly ubiquitous, companies that aggregate this location information have sprung up, allowing health authorities to track and model the spread of infection. This mobile data represents truly “big data,” as it longitudinally identifies the locations of nearly every person in the United States. Powerful AI algorithms are needed to find patterns, observe how clusters of people change over time and forecast the spread of infection.
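As a toy example of finding such clusters, the sketch below applies density-based clustering (DBSCAN) to synthetic location pings. The coordinates, radius and minimum cluster size are illustrative assumptions, and the data are entirely synthetic.

```python
# Density-based clustering (DBSCAN) of synthetic location pings.
# Coordinates, radius and minimum cluster size are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)

# Synthetic pings (latitude, longitude) around two gathering spots,
# plus scattered background noise.
spot_a = rng.normal([41.88, -87.63], 0.001, size=(200, 2))
spot_b = rng.normal([41.90, -87.65], 0.001, size=(150, 2))
noise = rng.uniform([41.85, -87.70], [41.95, -87.60], size=(100, 2))
pings = np.vstack([spot_a, spot_b, noise])

# Group pings that sit within roughly a few hundred meters of many others.
labels = DBSCAN(eps=0.003, min_samples=20).fit_predict(pings)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("Detected gathering clusters:", n_clusters)
```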
Predictive models also are being employed to forecast scenarios for the “second wave” of the pandemic and assist hospitals with resource allocation, volume planning and ventilator allocation. The insurance industry is utilizing predictive models to forecast unemployment scenarios and, therefore, the populations that will shift among Medicaid, ACA and commercial health plans.
AI has enabled doctors to expedite COVID-19 patient triage, helping hospitals manage overwhelming patient loads. With traditional methods, it is hard to know quickly whether to quarantine a patient: lab testing to detect infection may take days, and a patient may wait several hours for a specialist to review an X-ray. But using deep learning models, lung screening products developed before the pandemic were retooled to detect COVID-19 infections in mere minutes.
The pandemic has been a disruptive force for many hospitals, accelerating the adoption of new technologies to help manage the treatment of patients. Many AI vendors initially are providing this technology free of charge during a trial period, expecting a later sale as the benefits of using AI become apparent.
Future Applications and Growth
While AI is still evolving as a mainstream technology, it already is affecting multiple facets of our society. AI currently has many practical applications in health care, such as diagnosis, monitoring disease, patient wellness and new devices. For example, machines are just starting to assist doctors in imaging, but in the near future one could imagine automated interpretation of images without human involvement.
The application of AI continues to expand at an exponential rate, bringing advances that further the welfare of humankind. Soon these advances will pose new ethical questions about the social and moral implications of allowing technology to progress without human oversight. It is a paradox that this technology promises accurate forecasts, yet it still cannot foretell the future. As Yogi Berra famously quipped, “It’s tough to make predictions, especially about the future.”
References:
- 1. Dictionary.com (accessed September 22, 2020).
- 2. Thieullent, Anne-Laure, Ashwin Yardi, Fabian Schladitz, Jerome Buvat, Ramya Krishna Puttur, Marie-Caroline Baerd, Ron Tolido, Jerry Kurtz, Subrahmanyam KJV, and Gaurav Aggarwal. The AI-powered Enterprise: Unlocking the Potential of AI at Scale. Capgemini Research Institute, July 2020 (accessed September 22, 2020).
- 3. Eisenstein, Michael. AI Brings Precision to Cancer Immunotherapy. Genetic Engineering & Biotechnology News, April 1, 2020 (accessed September 22, 2020).
Copyright © 2020 by the Society of Actuaries, Chicago, Illinois.