March 26, 2011

Seeing Is Not Believing

by Barry P Chaiken, MD

Consider this scenario. An adventure traveler sets out for a remote village in the Andes. Upon arriving at the airport, he rents a car and begins his journey along winding roads to the village. After 90 minutes of driving, he reaches an intersection with a traffic light. Seeing the bottom of the light glowing brightly, he continues through the intersection.

Suddenly, his car is knocked sideways by an automobile that crashes into his front passenger-side door. No one is injured, but both cars are severely damaged. Figuring his “attacker” ran a red light, as his own light was surely green, he jumps out to accuse the other driver of reckless driving. Upon further investigation, our traveler learns that in this part of the country, traffic lights are constructed differently than in the United States. Although a red light still means stop and a green light means go, green lights are placed at the top of the traffic light and red lights at the bottom, the complete opposite of the convention followed in the U.S. and most of the world.

The light was green

Who is at fault here? I am pretty sure our traveler saw the bottom light as red, but his brain processed it as green, meaning go. In every other situation this traveler had encountered, a glowing light at the bottom of a traffic light was green, and it meant “go.” To navigate the world efficiently, we human beings generalize our surroundings.

Analyzing each situation from scratch would require too much brain processing and would cripple our ability to act. Therefore, when we encounter familiar situations, we infer much of what is going on, using only a limited slice of reality as a template for what we are seeing and experiencing. Only when we encounter completely novel situations do we dial back our inference and concentrate on the activities in front of us. Yet even then, we do a significant amount of inferring to make our interpretation of the situation efficient.

Joseph Hallinan is a Pulitzer Prize-winning investigative reporter and the author of the book Why We Make Mistakes (2009). In March 2011, he wrote an op-ed for The New York Times on the recent financial crisis in which he connected the causes of mistakes made in the financial industry to the causes of misplayed classical music by accomplished musicians. Hallinan related the musical errors this way:

[Boris] Goldovsky, who died in 2001, was a legend in opera circles, best remembered for his commentary during the Saturday matinee radio broadcasts of the Metropolitan Opera. But he was also a piano teacher. And it is as a teacher that he made a lasting—albeit unintentional—contribution to our understanding of why seemingly obvious errors go undetected for so long.

One day, a student of his was practicing a piece by Brahms when Goldovsky heard something wrong. He stopped her and told her to fix her mistake. The student looked confused; she said she had played the notes as they were written. Goldovsky looked at the music and, to his surprise, the girl had indeed played the printed notes correctly—but there was an apparent misprint in the music.

At first, the student and the teacher thought this misprint was confined to their edition of the sheet music alone. But further checking revealed that all other editions contained the same incorrect note. Why, wondered Goldovsky, had no one—the composer, the publisher, the proofreader, scores of accomplished pianists—noticed the error? How could so many experts have missed something that was so obvious to a novice?

This paradox intrigued Goldovsky. So over the years he gave the piece to a number of musicians who were skilled sight readers of music—which is to say they had the ability to play from a printed score for the first time without practicing. He told them there was a misprint somewhere in the score, and asked them to find it. He allowed them to play the piece as many times as they liked and in any way that they liked. But not one musician ever found the error. Only when Goldovsky told his subjects which bar, or measure, the mistake was in did most of them spot it. (For music fans, the piece is Brahms’s Opus 76, No. 2, and the mistake occurs 42 measures from the end.)

Goldovsky’s experiment yielded a key insight into human error: not only had the experts misread the music—they had misread it in the same way. In a subsequent study, Goldovsky’s nephew, Thomas Wolf, discovered that good sight readers report that they do not read music note by note; instead, they rely on their recognition of familiar patterns and on their ability to organize the music into those patterns and dependable cues.

In short, they don’t read; they infer. Moreover, this trait is not unique to musicians: pattern recognition is a hallmark of expertise in any number of fields; it is what allows experts to do quickly what amateurs do slowly.

As experts in medical care, physicians and nurses act just the way great musicians do. By inferring, they generalize each medical situation to more efficiently address the circumstances presented. With the deployment of electronic medical records and other health information technologies, organizations are redesigning processes and workflows to leverage the capabilities of these electronic tools. Although this redesign offers great promise to improve care, it also presents a risk of inferior care with dangerous outcomes.

Processing our environment

Workflow and process redesign must consider not only the existing patterns of care delivery and the ways to make them better, but also the inherent way human beings process their environment. As noted above, inference is critical to how we maneuver through our daily lives. A workflow that does not account for the impact of inference on the actions of human experts can easily lead to medical errors.

Even the simple act of signing on and off a workstation has its risks. For example, let us assume that a physician signs on to a workstation to chart a patient, Mrs. Jones. After a few minutes of using the workstation, that physician walks away to speak with a consulting physician a few feet away.

With the workstation unoccupied, a second physician ends the first physician’s session and signs on, creating a new session, so he can write orders for his own patient. This second physician, having finished his work, leaves the workstation without signing off.

The first physician, now finished with his conversation, returns to the workstation to complete his patient orders. He assumes that the workstation was not used during his brief time away and infers that the patient order entry screen he sees on the monitor is for the patient under his care, Mrs. Jones. He writes the orders and also walks away without signing off. Anyone who has worked in a busy clinic, emergency room, or patient ward knows this happens frequently enough to present a measurable risk to patients.
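For readers who design or configure these systems, a minimal sketch of one way to design around this failure mode appears below. The class names, the two-minute timeout, and the specific checks are hypothetical assumptions for illustration only, not features of the article or of any particular electronic medical record; the idea is simply to pair an idle-session auto-logoff with an explicit patient-context confirmation before any order is accepted.

from dataclasses import dataclass, field
from typing import Optional
import time

# Hypothetical sketch: names, timeout value, and checks are illustrative
# assumptions, not taken from the article or any real EHR product.
IDLE_TIMEOUT_SECONDS = 120  # auto-logoff after two minutes of inactivity

@dataclass
class Session:
    clinician: str
    patient: str
    last_activity: float = field(default_factory=time.monotonic)

class Workstation:
    def __init__(self) -> None:
        self.session: Optional[Session] = None

    def sign_on(self, clinician: str, patient: str) -> None:
        # Signing on always starts a fresh session; any prior session is ended.
        self.session = Session(clinician, patient)

    def _expire_if_idle(self) -> None:
        if self.session and time.monotonic() - self.session.last_activity > IDLE_TIMEOUT_SECONDS:
            self.session = None  # auto-logoff: a returning clinician must sign on again

    def enter_order(self, clinician: str, patient: str, order: str) -> str:
        self._expire_if_idle()
        if self.session is None:
            return "BLOCKED: no active session; please sign on"
        # Force an explicit patient-context confirmation instead of letting the
        # clinician infer that the patient on screen is still "their" patient.
        if clinician != self.session.clinician or patient != self.session.patient:
            return (f"BLOCKED: active session is {self.session.clinician} / "
                    f"{self.session.patient}; confirm patient before ordering")
        self.session.last_activity = time.monotonic()
        return f"Order for {patient} accepted: {order}"

# Replaying the scenario above: the returning first physician's order is
# blocked because the active session belongs to the second physician.
ws = Workstation()
ws.sign_on("Dr. First", "Mrs. Jones")
ws.sign_on("Dr. Second", "Mr. Smith")                     # second physician takes over
print(ws.enter_order("Dr. First", "Mrs. Jones", "CBC"))   # blocked, not silently misfiled

Even a crude safeguard like this converts a silent inference into a deliberate check, which is the point of designing workflows around human limitations.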

As organizations deploy health information technology and deliver clinical transformation through redesigned workflows, they need to recognize the basis for many of the errors we, as human beings, make in our everyday lives. By recognizing our limitations and designing around them, we can fully reap the safety benefits of health information technology in our delivery of patient care.

References

  1. Hallinan, J. (2011, March 5). The young and the perceptive. The New York Times. Available at http://www.nytimes.com/2011/03/06/opinion/06hallinan.html
  2. Hallinan, J. (2009). Why we make mistakes. New York: Broadway Books.

Excerpts from “Seeing Is Not Believing” published in Patient Safety and Quality Healthcare
