Artificial Intelligence | December 8, 2023

Don’t Believe Everything You Read

by Barry P Chaiken, MD
AI produces hallucinations, so start with distrust and verify.

When we look to buy something online, we first scan the reviews to see what others say about the product. Over the years, however, we have learned that some reviews are more valuable than others, because fake reviews are rampant across the internet. Although the author Anthony Trollope first used the phrase "Don't believe everything you read" in a book he wrote in 1864, his advice still rings true nearly 160 years later.

Hallucinations are a byproduct of generative artificial intelligence. In AI, a hallucination is an output that sounds plausible but is factually incorrect or unrelated to the given context. In other words, misinformation. This flaw has many causes, including improperly trained models, biased data sets, and poorly constructed prompts.

While hallucinations are damaging in any setting, they pose a particular risk in healthcare. AI-generated medical content that is untrue could lead to poor outcomes and even death. How AI is implemented therefore matters as much as the responses it generates. Well-designed workflows build in stops, points where a clinician reviews and verifies the output, so that AI augments clinical decision-making instead of prescribing care.
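To make the idea of workflow "stops" concrete, here is a minimal sketch in Python of a human-in-the-loop gate. The function names (generate_suggestion, clinician_review) are hypothetical and the model call is a placeholder; the point is only to illustrate the pattern, not to represent any particular product.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """An AI-generated draft recommendation awaiting clinician review."""
    text: str
    source: str = "ai_model"

def generate_suggestion(prompt: str) -> Suggestion:
    # Placeholder for a call to a generative model; the output is a draft only.
    return Suggestion(text=f"Draft recommendation for: {prompt}")

def clinician_review(suggestion: Suggestion) -> bool:
    # Hard stop: a human clinician must explicitly approve the draft.
    answer = input(f"Approve this draft? (y/n)\n{suggestion.text}\n> ")
    return answer.strip().lower() == "y"

def decision_support_workflow(prompt: str) -> str | None:
    """AI augments the decision; only an approved draft moves forward."""
    suggestion = generate_suggestion(prompt)
    if clinician_review(suggestion):
        return suggestion.text  # clinician accepted the draft
    return None                 # rejected drafts never reach the chart

if __name__ == "__main__":
    result = decision_support_workflow("medication options for patient X")
    print("Accepted:" if result else "Rejected; no AI content recorded.", result or "")
```

The essential design choice in this sketch is that rejection is the default path: the AI can only suggest, and nothing it produces moves forward without a clinician's explicit approval.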

When practice guidelines were first introduced, many physicians rejected them as cookbook medicine that did not consider each patient's unique needs. Over time, practice guidelines evolved, and they now constitute a significant source of dependable clinical decision support. I suspect the introduction of AI into clinical care will go through a similar evolution before it becomes a trusted tool for clinicians.

I look forward to your thoughts, so please submit your comments on this post and subscribe to my weekly newsletter, "What's Your Take?" on DocsNetwork.com.
