AI: Augmented Intelligence or Electric Sheep?

by Barry P Chaiken, MD

Visionary Elon Musk fears it. Astrophysicist Stephen Hawking worried about it. Microsoft’s Bill Gates embraces it. Science fiction writer Philip K. Dick wrote about androids having the capacity to dream because of it. At HIMSS 2019, everyone talked about it.

So what is artificial intelligence (AI)? Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions to maximize its chance of achieving its goals. Researchers Andreas Kaplan and Michael Haenlein (2019) define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” Finally, most experts, at least those who are not computer scientists, equate AI with machine learning and include natural language processing as a tool used within AI research.

At HIMSS 2019, I assembled a group of industry experts – physicians, CIOs, data scientists, public health experts, and informaticists – to debate their views on AI and its promised impact on patient care. To prepare attendees for the meeting, I shared several academic papers on AI taken from the Journal of the American Medical Association and the British Medical Journal.

Although individuals on the panel expressed slightly differing views, overall they agreed that AI in healthcare is an overhyped label, inappropriately attached to programs that do not fit any reasonable definition of an AI tool. They described many instances where clinical or operational decision support tools touted as AI were actually expert systems driven by human-built algorithms.

Attendees also worried about “black box” AI: decision support tools that deliver results through opaque processes hidden from users. Without transparency into those processes, organizations using the tools cannot evaluate the quality and reliability of these “AI” systems, nor determine whether the results rest on AI principles or on simpler, static, rule-based algorithms.

The data science behind AI

Contrary to portrayals in the movies or in the non-industry press, AI is not currently a pure, natural intelligence like that of humans. Today’s AI systems do not think or act independently, and they are a far cry from the androids with positronic brains you might have seen on Star Trek: The Next Generation. Instead, data models form the basis of all AI and drive the results that we interpret to be intelligence.

To create an AI system, data scientists must first carefully circumscribe the system’s use case. This activity defines the large data set needed to train the AI’s “intelligence,” which really refers to its capability to reach reasonable conclusions when presented with a broad set of possible inputs. The data must contain meaningful, relevant relationships among data points so that the conclusions reached by the AI will make sense.

For example, developing AI for the treatment of breast cancer requires a data set rich in clinical, social, and demographic information about female cancer patients. Choosing too broad a data set, such as one that includes men or children, may reduce the accuracy of the resulting model; choosing one too narrow may deliver no valuable results.
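As a concrete illustration, the following minimal Python/pandas sketch circumscribes such a cohort; the file name, column names, and outcome label are all hypothetical placeholders, not a real registry schema.

    import pandas as pd

    # Hypothetical patient-record extract; every column name here is
    # an illustrative placeholder.
    records = pd.read_csv("cancer_registry.csv")

    # Circumscribe the use case: adult female breast cancer patients.
    cohort = records[
        (records["diagnosis"] == "breast_cancer")
        & (records["sex"] == "F")
        & (records["age"] >= 18)
    ]

    # Keep fields expected to hold meaningful, relevant relationships
    # to the question the AI is meant to answer.
    features = cohort[["age", "tumor_stage", "receptor_status",
                       "comorbidity_count", "income_bracket"]]
    labels = cohort["responded_to_treatment"]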

After choosing the data set, data scientists apply specific machine-learning models to create the AI. These models are tested against expected results reviewed by domain experts. Several iterations of model choices and data sets may be necessary to arrive at the proper AI building blocks.
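Continuing the hypothetical cohort above, a minimal scikit-learn sketch of this fit-and-test loop might look as follows; the choice of model and the evaluation split are assumptions made only for illustration.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # One-hot encode categorical fields so the model can consume them.
    X = pd.get_dummies(features)
    y = labels

    # Hold out cases whose expected results domain experts have reviewed.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Fit one candidate model; in practice, several model/data-set
    # combinations are tried before settling on the building blocks.
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Compare predictions against the expert-validated outcomes.
    print(classification_report(y_test, model.predict(X_test)))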

While expert systems can mimic some of the tasks completed by AI, their value is limited by the rigid structure built into their algorithms. In contrast, AI that uses well-fitting models can evolve as better data is fed into the models. This explains the value of big data in AI: the larger the high-quality data set, the better the AI performs in the real world.
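The difference is easy to see in code. In the hypothetical sketch below, the expert system encodes one fixed, human-authored rule, while the learned model from the previous sketch changes its behavior simply by being retrained on better data.

    def expert_system_risk(patient):
        # Expert system: a rigid, human-built rule that only changes
        # when a developer rewrites it.
        if patient["tumor_stage"] >= 3 and patient["comorbidity_count"] > 2:
            return "high"
        return "low"

    # A learned model, by contrast, re-derives its decision boundaries
    # from data; the same line of code yields evolving behavior as the
    # quality data set grows:
    # model.fit(X_bigger, y_bigger)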

AI augments care

Rather than seeking to use AI to deliver care, my group of industry experts believes that IA, or “information augmentation,” is the proper first step in using emerging AI capabilities. Variations on that term include “augmented information” and “augmented intelligence.” Using AI-driven applications, administrators, clinical staff, and other decision-makers can access critical information when it is needed, presented in a format that is easy to digest. For clinicians, AI offers an enhanced view of a patient’s condition by using both the patient’s own data and the data of similarly ill patients.

Organizations already use dashboards to monitor clinical and administrative processes. AI can enhance these dashboards with point-of-service, real-time insights that explain the results each dashboard displays. The AI system can apply its data model to the underlying data set to identify the factors, variables, or combinations of variables that might be driving the displayed values. Each of these factors receives a probability; together, these probabilities offer the end user augmented information for decision-making. In this use case, AI strengthens the “signal” in the data and focuses the human expert on the information that is truly relevant and requires the most attention, leading to insights that help clinicians work as effectively as possible.
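One simple stand-in for this idea, continuing the hypothetical model above, is to surface the model’s normalized feature importances next to a dashboard value. These weights are not true probabilities, but they play the same signal-focusing role.

    import pandas as pd

    # Rank the factors most likely driving the displayed dashboard value.
    importances = pd.Series(model.feature_importances_, index=X.columns)
    top_factors = importances.sort_values(ascending=False).head(5)

    # Annotate the dashboard so the clinician's attention goes first
    # to the strongest "signals" in the underlying data.
    for factor, weight in top_factors.items():
        print(f"{factor}: relative contribution {weight:.0%}")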

Informaticists have much work to do to understand how AI can be optimally applied at the point of care in the clinical setting. Nevertheless, AI can now be used to narrow the data presented to clinicians and enable them to focus on the most important patient information. That said, we are still a ways off from the vision of AI in which digital caregivers can, as Dick (1968) put it, “dream of electric sheep.”

References

  1. Beam, A. L., & Kohane, I. S. (2018). Big data and machine learning in health care. JAMA, 319(13), 1317–1318.
  2. Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety [Epub ahead of print]. https://doi.org/10.1136/bmjqs-2018-008370
  3. Dick, P. K. (1968). Do androids dream of electric sheep? Garden City, NY: Doubleday.
  4. Kaplan, A., & Haenlein, M. (2019). Siri, Siri in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. Retrieved from https://www.sciencedirect.com/science/article/pii/S0007681318301393
  5. Maddox, T. M., Rumsfeld, J. S., & Payne, P. R. O. (2019). Questions for artificial intelligence in health care. JAMA, 321(1), 31–32.

Excerpts from “AI: Augmented Intelligence or Electric Sheep?” published in Patient Safety and Quality Healthcare
