Being Careful Isn’t Enough, Particularly in Pathology

One of the core ideas of the Toyota Production System is the recognition that people are indeed human and are therefore fallible (part of the “respect for humanity” tenet of TPS). No amount of “be careful” warnings will prevent all mistakes in a process. As Deming said, a bad system will defeat good people. In medicine, in particular, “good” doctors might make a medical error because they are working in a poorly designed or dysfunctional system. This is well documented, particularly in the books Wall of Silence and Complications.

With TPS or lean, because we realize that good and careful people are bound to make mistakes, we need to error-proof our processes (the principle of “poka yoke”). I've heard stories of how Shingo used to use a phrase that meant “idiot proof” until it made a woman cry. You are error-proofing a process to ensure that smart people don't make an accidental mistake (that's a redundant phrase, accidental mistake).

“Shigeo, who had high regard for workers and would never cause one to lose face, quickly changed the name to focus on the real culprit, the process, and not the person.”

I saw a process last week where tissue sample slides are made for pathologists to review. There are far too many “medical mistake” cases (such as the ones highlighted recently in the WSJ) where tissues are mixed up and mistaken identities and mistaken diagnoses occur.

In any process, batching can lead to mistakes. In a hospital last week (as well as other hospitals I've worked with previously), I saw a technician labeling a number of slides in advance of placing tissue samples on them. The tech had tissue samples for multiple patients that arrived in a batch. The tech, I'm sure in an effort to be “efficient” with her labeling activity, labeled about 10 slides all in a row, representing three different patients. The tech then was going to make the slides and match up the tissue sample to the correct pre-made slide. How easy would it be to have your attention slip and to grab the wrong slide?

Another tech was doing things differently (because we don't yet have standard work). She was labeling ONE slide right as she was putting the matching tissue sample on the slide (the sample is in a labeled holder until this point). One-piece flow. Sure, she was picking up the pencil more times (10 times instead of once), but she was reducing the opportunities for putting the wrong tissue on the wrong slide. In this case, the time savings, minuscule as they are, are NOT a good tradeoff for the potential errors that could occur. She had error-proofed her process, at least in this one regard.

I brought this issue up with the manager. Her response was dismissive, saying basically, “well, I have people who manage to not make mistakes while batching and some who DO make mistakes doing them one at a time.” She didn't immediately see the need to change. That reminded me of the mass production thinking I used to see in the factories I worked with.

That's faulty reasoning, though, to assume that you shouldn't change your process to single-piece flow for all operators (error-proofing through standard work). Those people who haven't made an error with the batch process may have been lucky so far, or maybe the hospital just never identified an error that was already made. If you mix up two patients, neither of whom has cancer, then each patient still ends up with the proper diagnosis and you might never know that a process error occurred. DNA testing at each step of the process would be time-consuming, expensive, and impractical… plus it would be “inspection.” Better to prevent tissue mixups in the first place.

For the sake of those good, careful employees, you have to error-proof your processes. You have to reduce or eliminate potential failure modes in the process, particularly those errors that might be tough to catch or have catastrophic results. That's classic “FMEA” thinking: prioritizing errors (failure modes) by 1) how likely they are to occur, 2) how hard they are to catch, and 3) what the impact of the error would be.
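A common way to combine those three factors in FMEA is the Risk Priority Number (RPN): multiply the occurrence, detection-difficulty, and severity scores (often on 1–10 scales) and work on the highest-RPN failure modes first. Here's a minimal sketch with hypothetical scores for the slide-labeling example (the failure modes and numbers are illustrative, not from an actual FMEA):

```python
# FMEA-style prioritization sketch.
# RPN = occurrence x detection difficulty x severity (1-10 scales).
# Scores below are made-up examples, not real clinical data.

failure_modes = [
    # (description, occurrence, detection difficulty, severity)
    ("Tissue placed on wrong pre-labeled slide", 4, 9, 10),
    ("Slide label smudged and unreadable",       3, 2, 4),
    ("Sample delayed in transit",                5, 3, 3),
]

def rpn(occurrence, detection, severity):
    """Risk Priority Number: higher means fix it first."""
    return occurrence * detection * severity

# Rank failure modes from highest to lowest risk.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)

for desc, occ, det, sev in ranked:
    print(f"RPN {rpn(occ, det, sev):>3}: {desc}")
```

Note how the tissue mixup dominates even though it isn't the most frequent failure mode: it's both hard to catch and catastrophic, which is exactly why it deserves error-proofing rather than a “be careful” reminder.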

Given the systemic opportunities for error, there's good reason to get a second opinion. A “clean bill of health” might mean someone looked at another patient's tissue sample by mistake. We need to fix this, and the good news is that lean can help.

By the way, this was taking place at an award-winning hospital. This wasn't Doctor Nick's clinic from The Simpsons. Medical errors (or the potential for errors) are everywhere.


Mark Graban
Mark Graban is an internationally-recognized consultant, author, and professional speaker, and podcaster with experience in healthcare, manufacturing, and startups. Mark's new book is The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. He is also the author of Measures of Success: React Less, Lead Better, Improve More, the Shingo Award-winning books Lean Hospitals and Healthcare Kaizen, and the anthology Practicing Lean. Mark is also a Senior Advisor to the technology company KaiNexus.

4 COMMENTS

  1. The response from the supervisor reminds me of a saying that I believe came from Boeing (can anyone out there confirm this?!):

    “In God we trust, but everyone else had better bring data.”

    People can often get annoyed when you ask them to back up statements like “I have people who manage not to make mistakes while batching and some who DO make mistakes doing them one at a time”, but the data (generally) won’t lie!

    I worked on a similar situation to this about 10 years ago for a computer network hardware manufacturer. Each of the network cards had to have its own unique address burned into its EPROM. The addresses were generated from a database and printed onto a barcode label. This was stuck to the network card and scanned during functional test to load the EPROM. To increase ‘efficiency’, people tended to print out several labels at a time, scan the numbers from the label sheet, and then stick the labels onto the products after test.

    This caused major problems for our hardware users, as conflicting device addresses made the network crash. But until we’d had a problem with a big enough customer, which was almost fatal to our business, everyone was convinced that mistakes just didn’t happen.

    It actually took three attempts to finally solve this problem. The reason for this is, as the late Douglas Adams observed,

    “When designing things to be completely foolproof, most people underestimate the ingenuity of complete fools…”

    Karl.
