Understanding “Human Error”

Humans make mistakes. Any system that depends on perfect human performance is doomed to fail. In fact, the risk of an accident is more a function of the complexity of the system than of the people involved. Humans are not the weak link in a process; we are a source of resilience, able to respond to unpredictable inputs and variability in the system. This post is based on the work of Sidney Dekker in his book “The Field Guide to Understanding Human Error.”

Professor Dekker is a pilot and human factors engineer. Much of his work comes from analyzing industrial accidents and plane crashes. One such crash was the Tenerife disaster of March 1977, in which one jet rammed into another, killing 583 people.

Now, we could blame the pilot for the crash. Had the pilot performed better, this accident could have been avoided. Remove such bad apples, and the system works fine.

However, on deeper inspection there were multiple causes (non-standardized language, bad weather, an overcrowded airport, equipment issues, etc.). It was not simply “human error” that caused this crash but a series of problems. Understanding all these causes reveals that nearly any pilot could have made this mistake. The system needs to change to promote pilot success.

Casting blame makes us feel like we’ve offered an appropriate response to a terrible event. However, blaming does not improve the system so that the next person doesn’t make the same mistake. To learn from our mistakes, we need to understand why they happened.

Local Rationality and Just Culture

No one comes to work wanting to do a bad job. 

Sidney Dekker

The local rationality principle asks us to understand why an individual’s action made sense at the time. “The point is not to see where people went wrong, but why what they did made sense [to them].” We need to understand the entire situation exactly as they did at the time, not through the benefit of retrospection. 

We must balance holding people accountable with acknowledging that most adverse events are not due to “human error.” We emphasize learning from mistakes over blaming individuals. We need zero tolerance for blameworthy events like recklessness or sabotage while not unfairly blaming individuals for system problems.

Just Culture Algorithm

The Just Culture algorithm asks a series of questions to determine the cause of an adverse event and offers an appropriate response. If the act was deliberate sabotage, then severe sanctions are necessary. If reckless behavior led to the adverse outcome, the individual should be held accountable. However, if any individual’s actions in the same context could have led to the same result, then it is hardly fair to blame that person.

  1. Did the individual intend to cause harm? Did they come to work in some way impaired? This is sabotage.
  2. Did the individual do something they knew was unsafe? This is reckless behavior.
  3. Does the individual have a history of similar events with similar root cause? This person is not learning from prior mistakes.
  4. Would three peers have made the same mistake in similar circumstances? This passes the substitution test; it is a no-blame error.
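The four questions above can be sketched as a simple decision procedure. This is a hypothetical illustration only; the function name and the response labels are mine, not Dekker’s.

```python
# Hypothetical sketch of the Just Culture questions, asked in order.
# The labels returned are illustrative summaries of the responses above.

def just_culture_response(intended_harm: bool,
                          knowingly_unsafe: bool,
                          repeat_offense: bool,
                          peers_would_err: bool) -> str:
    """Walk the four questions in order and suggest a response."""
    if intended_harm:
        return "sabotage: severe sanctions"
    if knowingly_unsafe:
        return "reckless behavior: hold the individual accountable"
    if repeat_offense:
        return "not learning from prior mistakes: remediation"
    if peers_would_err:
        return "no-blame error: fix the system"
    return "needs further review"

# Example: an error any peer might have made in the same circumstances
print(just_culture_response(False, False, False, True))
# -> no-blame error: fix the system
```

Note the ordering matters: sabotage and recklessness are ruled out before the substitution test is applied.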

Analyzing Adverse Events

The single greatest impediment to error prevention in the medical industry is that we punish people for making mistakes.

Dr. Lucian Leape

The old-school format of Morbidity and Mortality conferences pitted the person who made the error against a room full of experts with the benefit of hindsight. This adversarial arrangement encouraged people to hide their mistakes. A new approach was needed to encourage bringing errors into the light for analysis so we can learn from them. Dekker describes six steps.

Step One: Assemble A Diverse Team

The team should include as many stakeholder perspectives as are pertinent. In medicine, we would include physicians, nurses, technicians, patients and others. This team needs to have expertise in patient care (subject matter expertise) and in quality review. The one group not included is those who were directly involved in the adverse event. Their perspective will be incorporated through interviews, but they do not participate in the analysis.

Step Two: Build a Thin Timeline

In airplane crashes, investigators recover the flight recorder (black box) to create a timeline of events during the flight and conversations between parties. In medicine, we look at the chart to understand what happened and when. This is a starting point, but excludes the context needed to understand local rationality. 

Step Three: Collect Human Factors Data

Interview the people directly involved in the adverse event to understand what happened from their point of view. This is best done as early as possible, as memory degrades with time. Understand what was happening in the room, why they made the choices they did, and what their understanding of the situation was.

George Douros presents a series of questions on the EMCrit Podcast to guide the collection of this human factors data.

Collecting Human Factors Data (George Douros)

Step Four: Build a Thick Timeline

With the human factors data in hand, overlay it on the thin timeline to build a thick timeline. This presents the events as they occurred within the context under which the providers were working. You may need to go back and interview providers again until you understand what happened as they understood it at the time. Then we achieve local rationality.

Step Five: Construct Causes

We don’t find causes; we construct causes from the evidence we collect. The causes of an error are complex and not readily available to be discovered. We must work to understand and propose possible causes. One method of organizing the causes is an Ishikawa (or fishbone) diagram.

Ishikawa (fishbone) diagram to analyze potential causes of adverse events. The adverse event is placed at the fish’s head on the right. Off the spine are potential areas where errors may arise. From each rib, place the potential error and supporting details. 

Step Six: Make Recommendations

Brainstorm potential solutions that would prevent others from having the same outcome. Ideally, recommendations are worded so they are specific, measurable, achievable, relevant and time-bound (SMART).

Final Thoughts

Remember that this information is protected. It includes patient data and as such is protected under HIPAA. Do not put it on publicly available platforms such as Google Slides or Zoom.

Additionally, the entire quality improvement process should be a safe space to encourage providers to examine their errors. As such, it is protected under the Patient Safety and Quality Improvement Act of 2005 (Public Law 109-41), signed into law on July 29, 2005. Use an approved slide template which includes the appropriate language, for example: 

This document is privileged and confidential under the Illinois Medical Studies Act and should not be shared or distributed other than through the Quality Assurance Committee structure.

Dr. Douros recommends the following agenda for a 30-minute M&M case:

  • Introduction: Remind the group that this is about learning and identifying systemic problems, not about blame & shame.
  • Present the thin and thick timelines: this should take about 10 minutes, excluding extraneous information. It can be presented by a junior resident, but they would need the support of a senior facilitator to keep the discussion on track. 
  • Discuss the case: identify potential causes with the group, possibly using a fishbone diagram. This should also last only about 10 minutes.
  • Look for systemic problems and solutions: the goal of the exercise is to identify potential solutions that would prevent a similar mistake from happening again. The bulk of the time should be spent in this section: 10 to 15 minutes


  1. Sidney Dekker’s “Field Guide to Understanding Human Error”
  2. Angels of the Sky: Dorothy Kelly and the Tenerife Disaster
  3. EMCrit 249 – You Can Either Learn or You Can Blame – Fixing the Morbidity and Mortality Conference with George Douros
  4. The Patient Safety and Quality Improvement Act of 2005


What is Shock?

Remember that Oxygen Delivery is composed of two parts:

[Oxygen Delivery] = [Oxygen Content] × [Cardiac Output]

In the first video, let’s go over problems with that second part: cardiac output.

How can cardiac output go wrong? All of these can lead to decreased cardiac output.

  • Cardiac: problems with the PUMP. The heart won’t push blood forward.
  • Blood vessels: problems with the PIPES. The blood vessels are causing either obstruction to flow or are so massively dilated that blood just pools within or leaks out.
  • Fluid volume: problems with the TANK. There’s not enough fluid to pump around.

The commonly taught categories of shock (cardiogenic, obstructive, distributive and hypovolemic) fit into the three physiologic groups above.
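The oxygen delivery relationship can be worked through numerically. A minimal Python sketch using the standard arterial oxygen content formula (1.34 × Hb × SaO2 + 0.003 × PaO2); the patient values below are illustrative, not from any real case:

```python
# Illustrative oxygen delivery calculation with made-up normal values.

def oxygen_content(hgb_g_dl: float, sao2: float, pao2_mmhg: float) -> float:
    """Arterial O2 content, CaO2 (mL O2/dL): hemoglobin-bound plus dissolved."""
    return 1.34 * hgb_g_dl * sao2 + 0.003 * pao2_mmhg

def oxygen_delivery(cardiac_output_l_min: float, cao2_ml_dl: float) -> float:
    """DO2 (mL O2/min) = cardiac output x content; the x10 converts dL to L."""
    return cardiac_output_l_min * cao2_ml_dl * 10

cao2 = oxygen_content(hgb_g_dl=15, sao2=0.98, pao2_mmhg=95)      # ~20 mL/dL
do2 = oxygen_delivery(cardiac_output_l_min=5, cao2_ml_dl=cao2)   # ~1000 mL/min
```

Either factor failing (low content from anemia or hypoxemia, or low cardiac output from any of the pump/pipes/tank problems above) drops oxygen delivery.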

How do you diagnose shock?

You can recognize shock by hypoperfusion of organ systems. So you’ll find measured blood pressure is low. Also, decreased blood flow to the

  • kidneys leads to decreased urine output
  • brain leads to altered mental status
  • skin leads to cyanosis.

Remember that H&P are the best diagnostic tools we have. So search for potential signs and symptoms for diseases of the pump, pipes or tank. Ultrasound (the RUSH protocol) is very helpful as well. Treatment depends on identifying the cause.

How do you treat shock?

Treatment depends on the cause of hypoperfusion.

  • PUMP problem? Maybe you need an inotrope or other cardiac support
  • TANK problem? Then fill up the tank. Use whatever fluid you need, but remember crystalloid doesn’t carry oxygen.
  • PIPE problem? Then, assuming you have a full tank, you need a pressor.
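The treatment mapping above can be sketched as a simple lookup. This is a memory aid mirroring the list, not a clinical algorithm; the category names and simplified treatment strings are illustrative.

```python
# Simplified pump/pipes/tank mnemonic as a lookup table (illustrative only).

def shock_treatment(problem: str) -> str:
    """Map the physiologic problem to the first-line therapy named above."""
    treatments = {
        "pump": "inotrope or other cardiac support",
        "tank": "volume resuscitation (crystalloid does not carry oxygen)",
        "pipes": "vasopressor, once the tank is full",
    }
    return treatments[problem]

print(shock_treatment("pipes"))  # -> vasopressor, once the tank is full
```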

EMRA + CDEM Patient Presentations Video

SAEM’s Clerkship Directors in Emergency Medicine (CDEM) and EMRA released a training video for medical students that demonstrates how to tell a compelling story when presenting a patient’s case. This brief video offers handy do’s and don’ts that will help medical students understand how best to efficiently and effectively communicate in the ED.

The ten-minute video features EMRA resident and student members and CDEM leaders: Aditi Mitra, Michael Yip, Zach Jarou, David Gordon with help from Cathey Wise (EMRA) and Melissa McMillan (SAEM), with yours truly playing Mr. Ferguson.


SMACC Workshop

On Tuesday, 8 am in Chicago, Stella Yiu, Rob Cooney, Andrew Petrosoniak, Doug Schiller, Jen Leppard and I will be presenting a Flipped Classroom workshop. The goal is to take you from idea to completed product in the span of four hours (with a much needed coffee break in the middle).

Here’s our worksheet.

SMACC FC Worksheet

Look forward to seeing you in Chicago.

Odds vs Risk Ratios

Odds ratios and risk ratios always confused me. I never really understood the reason behind having an odds ratio. It is so unintuitive to me, even still.

There’s a great article from the Southern Medical Journal that explains it all! Watch the video then read the article.

Viera AJ. Odds ratios and risk ratios: what’s the difference and why does it matter? South Med J. 2008 Jul;101(7):730-4. PMID: 18580722
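A worked example makes the difference concrete. Using a hypothetical 2×2 table (the counts are made up for illustration):

```python
# Hypothetical 2x2 table:
#                outcome   no outcome
# exposed           a=20        b=80
# unexposed         c=10        d=90

a, b, c, d = 20, 80, 10, 90

risk_exposed = a / (a + b)                    # 0.20
risk_unexposed = c / (c + d)                  # 0.10
risk_ratio = risk_exposed / risk_unexposed    # 2.0

odds_exposed = a / b                          # 0.25
odds_unexposed = c / d                        # ~0.111
odds_ratio = odds_exposed / odds_unexposed    # 2.25, i.e. (a*d)/(b*c)

print(risk_ratio, odds_ratio)
```

Here the risk ratio is 2.0 but the odds ratio is 2.25: the odds ratio overstates the risk ratio, and the gap widens as the outcome becomes more common. When the outcome is rare, the two converge.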

Systematic Reviews and Meta-Analyses

Systematic reviews sit atop the evidence-based medicine pyramid as the strongest form of evidence we have, because they incorporate more data than individual studies. To avoid bias, the authors need to follow a systematic process. In this video we look at the process authors follow, which you should keep in mind when reading such reviews.