Dual Edges of Healthcare AI: Innovation vs. Patient Safety

by Barry P. Chaiken, MD

As the use of artificial intelligence (AI) in healthcare delivery accelerates, the imperative to prioritize patient safety cannot be overstated. Integrating AI into clinical settings presents a revolutionary opportunity to enhance the quality of care, patient safety, and access to healthcare while managing costs. However, it also introduces complex challenges that require rigorous scrutiny to ensure these technologies do not inadvertently perpetuate biases or compromise patient well-being.

In an interview with JAMA Editor-in-Chief Kirsten Bibbins-Domingo, Marzyeh Ghassemi, Ph.D., an assistant professor in the Massachusetts Institute of Technology (MIT) Department of Electrical Engineering and Computer Science, sheds light on the critical considerations for developing and deploying AI in healthcare. Ghassemi’s work at MIT, which focuses on creating “healthy” machine learning (ML) models that are robust, private, and fair, underscores the importance of designing AI applications that function effectively across diverse settings and populations. This approach is vital to mitigating the risks associated with AI-generated clinical advice and to ensuring that such advice does not harm patients.

Ghassemi argues that designing AI applications necessitates a deep understanding of their potential impact on clinical practice and patient outcomes. Presenting AI-generated clinical advice to physicians carries real risks, which means developers bear a significant responsibility to clinicians and patients.

Ethical Machine Learning

This responsibility makes ethical machine learning a critical concept that technologists must consider during product development. Such a framework involves recognizing biases in AI models and striving to mitigate them, ensuring that models perform equitably across different groups. Ghassemi points out that biases in problem selection, data collection, and algorithm development can lead to disparities in AI’s effectiveness across diverse populations.

The dialogue between Ghassemi and Bibbins-Domingo also highlights the necessity for AI tools to be subject to human review before physicians act on any recommendations. This human oversight is crucial to safeguard against automation bias and algorithmic overreliance, which can lead clinicians to accept AI recommendations without sufficient critical evaluation. Moreover, establishing clinical workflows that incorporate AI tools in a manner that enhances, rather than undermines, the decision-making capabilities of healthcare professionals is essential to leveraging the full potential of AI in improving patient outcomes.

Multi-Arm Regulatory System

Healthcare AI applications need proper regulation to ensure safety and efficacy, much like the multi-arm regulatory system that governs aviation. The aviation model combines safety and training standards with rigorous oversight to ensure the responsible integration of technology. Such a framework would promote safety and foster a culture of continuous learning and improvement in the use of AI in clinical settings.

In conclusion, integrating AI into healthcare represents a significant advancement with the potential to transform patient care. However, navigating this transition requires a steadfast commitment to patient safety, ethical considerations, and clinical workflows that ensure human oversight of AI recommendations. By adopting a cautious and informed approach to developing and deploying AI tools, we can harness their capabilities to enhance healthcare outcomes while safeguarding against potential risks.

Source: “AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says,” JAMA, February 7, 2024.


I look forward to your thoughts, so please share them in the comments on this post and subscribe to my video series on my Dr Barry Speaks channel on YouTube.
