Artificial Intelligence March 20, 2024

Protect Patients from AI-Driven Healthcare Misinformation

by Barry P Chaiken, MD

The proliferation of health misinformation, a complex and formidable issue, was underscored by a recent Supreme Court case involving the Biden administration’s battle against false COVID-19 vaccine claims on social media. As a healthcare information technology and public health expert, I am deeply alarmed by the potential dangers of medical misinformation, particularly as artificial intelligence (AI) becomes increasingly integrated into patient care and threatens to exacerbate the problem.

A recent New York Times article by Dani Blum offers valuable insights into the evolving nature of health misinformation and how to recognize it. Blum points out that unsubstantiated health hacks, cures, and quick fixes have spread widely on social media, while conspiracy theories that fueled vaccine hesitancy during the COVID-19 pandemic are now undermining trust in vaccines against other diseases. Recent outbreaks of measles, a disease previously declared eliminated in the U.S., show the impact of declining childhood vaccination rates fostered by misinformation. Rapid developments in AI have made it even harder for people to distinguish between true and false information online.

Test AI-Generated Content

As AI is integrated into patient care, it is imperative that organizations rigorously test AI output for accuracy and regularly monitor it to prevent the dissemination of potentially harmful misinformation. Equally crucial is educating doctors, nurses, other clinicians, and patients about the risks of healthcare AI misinformation and how to identify it. A primary threat is the abundance of unverified, untrustworthy healthcare websites that mimic reputable institutions and can quickly disseminate AI-generated misinformation to patients.
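To make the testing-and-monitoring recommendation concrete, here is a minimal sketch of one piece of such a pipeline: screening AI-generated patient content against a list of known-false claims before publication. Everything here is a hypothetical illustration, not a production safeguard; a real review pipeline would rely on clinician-curated knowledge bases and human review, not a hard-coded list.

```python
# Minimal sketch (hypothetical names and rules): flag AI-generated patient
# content that contains known-false health claims before it is published.

# In practice this list would come from a clinician-curated knowledge base.
KNOWN_FALSE_CLAIMS = [
    "vaccines cause autism",
    "measles is harmless",
    "covid-19 vaccines alter your dna",
]

def review_ai_output(text: str) -> dict:
    """Return which known-false claims appear in a piece of AI-generated text."""
    lowered = text.lower()
    matches = [claim for claim in KNOWN_FALSE_CLAIMS if claim in lowered]
    return {"flagged": bool(matches), "matched_claims": matches}

# Example: screening a draft patient FAQ answer generated by an AI tool
draft = "There is no evidence that measles is harmless; vaccination is vital."
print(review_ai_output(draft))
```

Note the design limitation: naive substring matching flags even sentences that refute a false claim (as in the example above), which is one reason automated screening can only triage content for human clinical review rather than replace it.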

Identify Misinformation

Blum’s article provides valuable tips for recognizing misinformation, such as looking out for unsubstantiated claims, emotional appeals, and “fake experts” who lack relevant medical credentials or expertise. It also recommends validating claims with multiple trusted sources, such as health agency websites, and tracking down the original source of information to check for omitted or altered details.

I fear that AI-generated misinformation will be used to support political agendas, such as those proposed by anti-vaccination supporters who reject the proven science of the value of vaccinations. Additionally, unscrupulous drug or supplement manufacturers may offer unsubstantiated information about their products, prioritizing profit over patient health and safety.

As Blum’s article rightly points out, addressing misinformation within personal circles necessitates empathy and patience. Using phrases like “I understand” and “it’s challenging to discern who to trust” can help maintain relationships while guiding individuals toward reliable resources. Local public health sites and university websites may prove more effective for those who distrust national agencies.

Duty to Call Out Misinformation

As healthcare professionals and informed citizens, we must remain vigilant in identifying and addressing health misinformation, particularly as AI advances and complicates the information landscape. By educating ourselves and others about the risks of misinformation, validating claims with trusted sources, and engaging in empathetic dialogue, we can work together to protect patient health and safety in the face of this growing threat.

Source: “Health Misinformation Is Evolving. Here’s How to Spot It,” The New York Times, March 16, 2024


I look forward to your thoughts, so please submit your comments in this post and subscribe to my bi-weekly newsletter Future-Primed Healthcare on LinkedIn.


