Setting the Standard: The Critical Role of Outcome-Centric Healthcare AI Regulation

by Barry P Chaiken, MD

As artificial intelligence (AI) rapidly evolves from a futuristic vision to a tangible reality, we must prioritize patient outcomes in developing AI-driven care tools. A recent article in JAMA underscores this view by exploring how harnessing AI can genuinely enhance healthcare delivery, ensuring that technology serves as a boon rather than a bane to patient care.

The authors argue for a regulatory framework anchored in outcome-centric evaluations of healthcare AI technologies. This perspective challenges traditional process-centric regulations, which, while necessary, may not sufficiently address AI’s unique complexities and rapid advancements. The authors elucidate AI’s potential to significantly improve patient care while emphasizing the need for empirical evidence demonstrating that AI tools lead to clinically meaningful improvements in patient outcomes.

Pitfalls of Untested AI

As evidence of the potential pitfalls of deploying AI without proper outcomes evaluation, the authors cite a study of the Epic Sepsis Model (ESM). Among 2552 hospitalized patients who developed sepsis, the ESM flagged only 7% of cases that clinicians had not already recognized and treated early. Moreover, the model failed entirely to identify 67% of the patients who developed sepsis.
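To make those figures concrete, here is a minimal sketch that recomputes the headline metrics from the percentages reported above. The counts are back-calculated from the published percentages, not taken from the underlying dataset, so treat them as illustrative.

```python
# Illustrative only: recomputing the headline Epic Sepsis Model metrics.
# Counts are derived from the article's percentages, not the raw data.

sepsis_patients = 2552       # hospitalized patients who developed sepsis
missed_fraction = 0.67       # share of sepsis cases the ESM never flagged

detected = round(sepsis_patients * (1 - missed_fraction))
sensitivity = detected / sepsis_patients
print(f"Sensitivity: {sensitivity:.0%} ({detected} of {sepsis_patients} cases flagged)")

# Only ~7% of all sepsis cases were flagged by the ESM before clinicians
# had already recognized and treated the patient -- the model's added value.
newly_flagged = round(sepsis_patients * 0.07)
print(f"Cases flagged ahead of clinicians: {newly_flagged} (~7%)")
```

A model that misses two-thirds of cases while adding early warning for only 7% shows why process-centric clearance alone cannot establish clinical value.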

The authors propose an outcome-centric regulatory strategy for AI, requiring companies to demonstrate that AI tools produce clinically meaningful differences in patient outcomes before being offered to customers. Regulating healthcare AI using outcome measures presents distinct challenges compared to process measures due to AI’s inherent complexities and “black-box” nature.

The rapid pace of AI development often outstrips the ability of regulatory frameworks to keep up. This pace leaves regulators with fewer opportunities to learn from experience with these tools, which is critical for developing process-centric regulations. The novelty and complexity of AI applications make it challenging to establish standardized outcome measures universally applicable across different technologies, diseases, and patient populations.

Measurement of Outcomes Difficult

Outcome measures also require empirical evidence demonstrating that AI applications lead to a net clinically meaningful improvement in patient outcomes compared to existing standards of care or placebo. This necessitates rigorous, long-term studies, such as randomized clinical trials, to establish the efficacy and safety of AI tools. Such studies are time-consuming and resource-intensive and may need to be continuously updated to account for AI advancements, retraining of models on different data sets, and changes in clinical practice.
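To illustrate the scale involved, the sketch below estimates how many patients a two-arm trial would need to detect a modest but clinically meaningful improvement in an outcome rate, using a standard two-proportion sample size approximation. The 10% and 8% rates are hypothetical, chosen only to show why such trials are resource-intensive.

```python
# A minimal sketch of trial scale: approximate patients per arm needed to
# detect an absolute improvement in an outcome rate (two-sided test).
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate patients needed per arm for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return ceil(numerator / (p_control - p_treatment) ** 2)

# Hypothetical example: an AI tool expected to cut an adverse-outcome rate
# from 10% to 8% -- a clinically meaningful but modest absolute difference.
print(per_arm_sample_size(0.10, 0.08))  # roughly 3,200 patients per arm
```

Even this simplified calculation implies thousands of enrolled patients per arm, before accounting for dropout, subgroup analyses, or model retraining mid-study.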

Furthermore, outcome-centric regulation requires the establishment of clear, measurable endpoints that accurately reflect improvements in patient health. These endpoints must be clinically meaningful, directly tied to patient well-being, and sensitive enough to capture the effects of AI interventions. The complexity of healthcare and the variability in patient responses add layers of difficulty in defining and measuring these outcomes.

In addition, outcome measures must consider the broader impacts of AI on the healthcare ecosystem, including effects on clinical staff workload, patient-clinician interactions, and the potential for ethical dilemmas. These factors are less tangible than process measures and require a holistic evaluation of AI technologies’ integration into healthcare workflows.

As the authors point out, process-centric regulations, while easier to define and enforce, may not adequately prevent harm or ensure improvements in patient outcomes when applied to AI. Process measures focus on the steps taken to develop and deploy AI technologies rather than on their actual impact on patient health.

The advent of Electronic Medical Records (EMRs) has provided a wealth of data that researchers can leverage to test the impact and outcomes of various AI clinical tools. Organizations like the Mayo Clinic are at the forefront of building comprehensive databases that can be a bedrock for evaluating AI technologies in clinical settings. These databases offer a unique opportunity to rigorously assess the effectiveness of AI applications in improving patient outcomes, thereby ensuring that the integration of AI in healthcare is both evidence-based and outcome-driven.
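As a rough illustration of the kind of retrospective analysis such databases enable, the sketch below compares outcome rates between encounters where a hypothetical AI alert fired and those where it did not. The table and column names are invented for illustration; a real study would require risk adjustment, matching, and far larger cohorts before drawing any conclusion.

```python
# A minimal sketch, using a hypothetical EMR extract, of a retrospective
# outcome comparison: adverse-outcome rates for encounters where an AI
# alert fired versus those where it did not. All data here is invented.
import pandas as pd

emr = pd.DataFrame({
    "encounter_id": range(1, 9),
    "ai_alert_fired": [1, 1, 1, 1, 0, 0, 0, 0],
    "adverse_outcome": [0, 0, 1, 0, 1, 0, 1, 1],
})

# Crude, unadjusted comparison of outcome rates by alert status.
rates = emr.groupby("ai_alert_fired")["adverse_outcome"].mean()
print(rates)
```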

Need for Regulating Agency

One cannot overlook AI’s potential to revolutionize patient care through predictive analytics, personalized treatment plans, and advanced diagnostic tools. Yet, the allure of these advancements should not overshadow the necessity for these technologies to demonstrate a tangible, positive impact on patient outcomes. It is here that the Agency for Healthcare Research and Quality (AHRQ), a component of the US Department of Health and Human Services (HHS), can play a pivotal role in setting the standards for AI use in healthcare.

AHRQ’s mission centers on producing evidence to make health care safer, higher quality, more accessible, equitable, and affordable, and on working within HHS and with other partners to ensure that the evidence is understood and used. Through research, data, and analytics, AHRQ informs health policy and practice, aiming to improve the outcomes and quality of healthcare services. The agency’s work encompasses a wide range of healthcare issues, including patient safety, healthcare improvement, health information technology, and access to healthcare services. By generating rigorous and relevant evidence, AHRQ supports healthcare professionals and policymakers in making informed decisions that improve healthcare delivery and patient outcomes. AHRQ also possesses the skilled staff needed to undertake AI evaluation and propose regulations.

Beyond Outcomes

The integration of AI in healthcare extends beyond patient outcomes. Researchers must consider the impact of AI tools on clinical staff during their development. The deployment of AI must support clinical staff, reducing their workload and minimizing burnout without introducing ethical dilemmas or compromising their “duty of care” responsibilities. This dual focus—on both patient outcomes and the well-being of clinical staff—is crucial for the ethical and effective implementation of AI in healthcare.

Conclusion

Regulating AI in healthcare using outcome measures is more challenging than using process measures due to the need for empirical evidence of clinical benefit, the complexities of defining and measuring meaningful patient outcomes, and the rapid evolution of AI technologies. These challenges necessitate a nuanced, flexible, and patient-centered approach to regulation that can adapt to the fast-paced advancements in AI, ensuring that these technologies truly enhance patient care.

Source: Regulate Artificial Intelligence in Health Care by Prioritizing Patient Outcomes, JAMA, February 27, 2024

Source: Agency for Healthcare Research and Quality


I look forward to your thoughts, so please submit your comments in this post and subscribe to my bi-weekly newsletter Future-Primed Healthcare on LinkedIn.
