Professor John Grout, A Deep Dive On Mistake Proofing And Lean


My guest for Episode #462 of the Lean Blog Interviews Podcast is Professor John Grout, the former dean of the Campbell School of Business at Berry College in Rome, Georgia. 

He was recently a guest on “My Favorite Mistake” — Episode 186, so I encourage you to check that out.

He's the current Chair of the Technology, Entrepreneurship, and Data Analytics Department and the David C. Garrett Jr. Professor of Business Administration. John has overseen the development, approval, and implementation of Berry College's Creative Technologies program and Berry's makerspace, HackBerry Lab. 

Dr. Grout has researched mistake-proofing extensively and published numerous articles on mistake-proofing. In 2004, John received the Shingo Prize for his paper, “The Human Side of Mistake-Proofing” with Douglas Stewart. John has also consulted with a large variety of firms to mistake-proof their processes.

He's also published “Mistake-Proofing the Design of Health Care Processes,” a book that's freely available online.

His website: https://mistakeproofing.com/

Today, We Discuss Topics And Questions Including:

  • Your origin story – how did you first get introduced to TPS, Lean, etc? Context of discovering mistake proofing?
  • Shingo's book on Poka Yoke
  • “Shingo was not kind to Statistical Quality Control”… use SQC and/or mistake proofing?
  • Acceptance sampling… keeps bad product out… maybe?
  • Field goals — Conformity to specs vs. closer to center?
  • Successive checks and self-checks
  • Source inspections – Shingo's gold standard
  • Why should you react when a part's out of control but still in spec??
  • Do you HAVE to stop the line? Don't be dogmatic??
  • Statistics don't do well with rare events
  • Do we have data on how universal the “universal protocol” is?
  • Doctor signing vs. you signing the surgical site?
  • ZERO – “the only way to go” in terms of goals
  • The goal of “zero defects” can be controversial… is it possible? Motivating? Demoralizing?
  • Possible research – optimal time to stop doing final inspection??
  • Why is it easier to error proof now? Technology
  • “People don't like to own up to mistakes”
  • Naida Grunden episode on aviation safety
  • Can't error proof everything??
  • Preventing execution errors is easier than preventing decision errors
  • The balance and benefits of examples to copy vs. developing thinking?? “Catalog or catalyst”?? BOTH

The podcast is sponsored by Stiles Associates, now in their 30th year of business. They are the go-to Lean recruiting firm serving the manufacturing, private equity, and healthcare industries. Learn more.

This podcast is part of the #LeanCommunicators network



Watch the episode here


Listen to the podcast here


Professor John Grout, A Deep Dive On Mistake Proofing And Lean


Our guest is John Grout. You'll learn more about him in this episode. He is a professor and an expert on mistakes and mistake-proofing. That's the deep dive that we're taking here now. We'll talk about what kinds of mistakes are easier to mistake-proof and which are more difficult. We're going to talk about how to create a culture of admitting mistakes, detecting mistakes, and learning from mistakes. We'll talk about that and more.

Our guest is Professor John Grout. He is the former dean of the Campbell School of Business at Berry College in Rome, Georgia. He was a guest on My Favorite Mistake. That was episode 186. I encourage you to go check that out. You can hear John's favorite mistake story and our conversation there. You can find it at MarkGraban.com/Mistake186. If you're not already tuning into that, find My Favorite Mistake wherever you're tuning into this show.

I think it's going to be a different conversation here than what we had in the last episode. There is a little bit of overlap, but a different conversation for a different audience. John and I collaborated on a Lean Enterprise Institute webinar that I was the host for in 2010. I'm still trying to find out if that is online or if the recording of that is still available. There are some broken links that haven't been mistake-proofed. Before I tell you a little bit more about John, let me say thank you for joining us. How are you?

Mark, I'm doing great. I hope you're doing well also.

I'm excited about the conversation here. Everything from that webinar in 2010, I'm sure, holds up, even if you or I, or both of us, barely remember it. It was so long ago.

It's more relevant now and more importantly, easier to implement now than it was back then.

Let's leave that as a teaser. We will come back to the question of whether mistake-proofing concepts are easier to implement now and how so. Keep tuning in for that. A little bit more about John Grout. He's the Chair of the Technology, Entrepreneurship, and Data Analytics Department and the David C. Garrett Jr. Professor of Business Administration. He's overseen the development, approval, and implementation of Berry College's Creative Technologies program and their makerspace called the HackBerry Lab.

John has researched mistake-proofing extensively and published numerous articles on mistake-proofing. In 2004, he received the Shingo Prize for his paper, The Human Side of Mistake-Proofing, which he co-authored with Douglas Stewart. He has also consulted with a large variety of businesses to help mistake-proof their processes. There's a free eBook that's available online through the AHRQ. It's Mistake-Proofing the Design of Health Care Processes. I encourage you, whether you're working in healthcare or otherwise, to go grab a copy of that. That's still available online. John's website is MistakeProofing.com. That healthcare eBook is still available to your knowledge, right?

Yes, it is absolutely still out on the web. I also have a PDF copy on my hard drive if anyone can't get it any other way.

I hope people will go check that out. There's probably a lot to offer for a reader that's not working in healthcare in terms of looking for different ideas that help them think about their own mistake-proofing. Would you agree with that?

I would agree with that. The framework is set up to use a whole bunch of existing quality management and reliability modeling tools to help you think through how to create mistake-proofing devices in environments where you're not sure exactly what you should be doing. The saying is, “A problem well-defined is half solved.” That's probably true, but there are those cases where you see something happening and you don't have a good vocabulary of how to fix it. This book will provide that vocabulary.

I hope people will check that out. You can find it on John's website, MistakeProofing.com. John, there's one question it's become a habit to ask different guests: their origin story. How did you first learn about mistake-proofing? Was it in a broader context of learning about Lean or the Toyota production system? I'm curious what your initial introduction was and what sparked a deep focus on this.

Back in 1991, I was teaching at Southern Methodist University, and they were in the middle of the TQM fervor. They asked me to teach a quality course. I had lots of statistical quality control training, both in my undergrad and grad programs, but I wanted to do a thorough read-through of everything that was out there. As I was looking through it, it seemed like Schonberger, one of the very early writers on the Lean side, had said something. He had mentioned poka-yoke. I wanted to know what it was so I could decide whether it should go in the course or not.

I found it very difficult to find anything about poka-yoke. I finally stumbled on Shingo's book. I ordered it. I had it sent in and read it. It is not kind to statistical quality control. He says, “Statistical quality control has all these problems and here are the issues.” I was about to teach this stuff so I had to sort out who was right. I have done that to my satisfaction and have published academic journal articles based on that. I now do a lot of mistake-proofing, but I still think Shingo might have gotten a few things wrong.

I was going to ask you to elaborate on that. First off, for people who might not know the terminology, how would you summarize statistical quality control?

My view of statistical quality control is you have acceptance sampling on one side. I think by and large, people have figured out that it doesn't help the core of the problem. All it does is keep bad product out of your factory, maybe. The other is statistical process control, which involves control charts and three-standard-deviation limits above and below the mean. When you have unusual products or things that go outside those limits, you need to find out what's going on and take action. Now, in terms of Shingo's mistake, I think his main mistake was that he inferred that every time you had a defect, you were out of control, or that remedial action was required every time you have a defect.

The whole notion of process capability is the idea that if you can't consistently kick your field goal through the goalposts, you have to work on the variance. You can't take one-off things and adjust based on individual defects. You have to manage the entire process. If you have a process that's in control and you make adjustments based on individual defects, you may be making the process worse.
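To make that distinction concrete, here is a minimal C++ sketch (an editor's illustration, not something from the episode; all numbers are invented) of an individuals (XmR) control chart. The control limits come from the process data itself, which is why a part can be a signal on the chart while still measuring inside the engineering spec:

```cpp
// Minimal XmR (individuals) chart sketch: control limits come from the
// process, not from the spec. All data and spec limits are invented.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Baseline measurements (mm) from a hypothetical machining process.
    std::vector<double> baseline = {10.02, 9.98, 10.01, 9.99,
                                    10.03, 10.00, 9.97, 10.02};
    int n = static_cast<int>(baseline.size());

    double mean = 0.0;
    for (double v : baseline) mean += v;
    mean /= n;

    // Estimate sigma from the average moving range, the usual approach
    // for an individuals chart; d2 = 1.128 for subgroups of size 2.
    double mrSum = 0.0;
    for (int i = 1; i < n; ++i) mrSum += std::fabs(baseline[i] - baseline[i - 1]);
    double sigma = (mrSum / (n - 1)) / 1.128;

    double ucl = mean + 3.0 * sigma;   // ~10.09 for this data
    double lcl = mean - 3.0 * sigma;   // ~9.91

    // Engineering spec limits, set independently of the process:
    double usl = 10.25, lsl = 9.75;

    double newPart = 10.15;  // inside the spec, but a signal on the chart
    std::printf("UCL=%.3f LCL=%.3f measured=%.2f\n", ucl, lcl, newPart);
    if (newPart > ucl || newPart < lcl)
        std::printf("out of control: investigate the special cause\n");
    if (newPart <= usl && newPart >= lsl)
        std::printf("...even though the part is still within spec\n");
    return 0;
}
```

Note the structure: the limits are computed from a stable baseline and then used to judge new points, rather than being recalculated to absorb every new outlier.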


Tampering was the term. In other words, it's over-adjusting a process.

Shingo didn't focus on that aspect. He blurred control limits and tolerance limits or specification limits. He was conflating the engineering limit of, “This will function, this won't,” with the statistical limit of, “This is usual, this is unusual.”

We can keep diving into this topic: is mistake-proofing a binary in-spec or out-of-spec determination, versus the idea, which I think would come from SQC and SPC, that being closer to the center line of your specifications and tightening the variation would be better? Think of a physical mistake-proofing device for a product flowing down a line: if the product is too big, it bumps a barrier and gets kicked to the side. It's a go-no-go gauge in that case, but that might not be the best we can do from a quality perspective.

It's worthwhile to have defects not make it to the customer because the cost of that is almost always way higher. Shingo had a variety of different inspection techniques. He talked about successive checks and self-checks. Successive checks are when a downstream operation runs your product through a go-no-go gauge. Self-checks are when you do it yourself, and it's always better if you do it than if someone else does.

However, he had something called source inspections. I think that's the gold standard concept, which is that you need to inspect the conditions that will lead to high quality and make sure those exist before you produce the product. It also fits very nicely with statistical process control because if you have an out-of-control point, it's saying something unusual has happened, and your job is to go into the process, understand it, explore it, do experiments, or whatever is required to find out why you had that out-of-control point. Once you find out why, you need to take that pre-condition for a good product and test it, and a source inspection is how you do that.


Source inspection would be, in the case of a manufacturer, back at the supplier or on receipt into your factory.

I think you're focusing on the product, and what I recommend is that we focus on the process. We're going to look at a process and say, “Is the temperature right? Is the machine set up correctly? If something comes out of calibration, is there a way for the device itself to say, ‘I'm out of calibration?'” We want the process to talk to us and source inspection is how we get it to do that. Typically, it's done with very simple kinds of means to make it obvious that something is going wrong before it causes a problem.

You're dredging up memories from my first year at General Motors in 1995. Executives confused the control limits on the SPC chart, which are calculated from the process and how much variation it has (part of the input into the process capability calculation), with specification limits. It was a great detriment.

Here's a very real scenario, and I'm curious how you would react or coach somebody on it. You've got somebody on the front line. A production worker is doing their SPC checks, and they notice, “Something has drifted above the upper limit. We should stop and do something.” The management says, “It's in spec. The part is good. Why the hell would we stop production?”

The answer there isn't necessarily that you would stop production. I am not dogmatic about stopping production. What I think you would do is figure out what's going on with the machine, why it went above the limits, and try to resolve that issue without ever making a defect. At some level, maybe the orthodox answer is that you stop the machine, but if you can figure it out without stopping the machine, then it's like internal and external setup.

You would love to figure it out while the machine is running, but if you can't, I think that there's a benefit to be had by stopping the machine long enough to see if you can figure out what that special cause is and then work out a system for inspecting for that special cause as a source inspection using poka-yoke.

Thank you. That's a good nuance there: the point is protecting the customer and doing that investigation, versus not doing the investigation. The process we were working on, and I'm thinking of a machining line, is where my memories and the technical details are either fuzzy or I never fully understood them. However, a lot of times the debate was framed in terms of, “Do we keep producing or not?” I don't think the middle ground was being explored necessarily, and that could be a different problem.

Having knee-jerk reactions is so easy in life, and in some ways, that's what statistical process control is all about. Don't adjust the process unless you have something that's out of control, and we all want to say, “That's a defect. Let's fix it.” What SPC says is that if it's generic, normal process variation, fixing it does not involve adjusting based on a single defect.
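A quick simulation makes the over-adjustment point vivid. In this hedged sketch (an editor's illustration, not discussed in the episode), a process that is already in control gets “helped” by compensating for each deviation from target, and its variance roughly doubles:

```cpp
// Simulation of "tampering": adjusting an in-control process after every
// result roughly doubles the variance compared with leaving it alone.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 1.0);  // common-cause variation
    const int n = 100000;

    double sumSq1 = 0.0, sumSq2 = 0.0, adjustment = 0.0;
    for (int i = 0; i < n; ++i) {
        double e = noise(rng);

        // Rule 1: hands off; the deviation from target is just the noise.
        sumSq1 += e * e;

        // Rule 2 (tampering): after each part, shift the setting to
        // compensate for the last deviation from target.
        double result = adjustment + e;
        sumSq2 += result * result;
        adjustment -= result;
    }
    std::printf("variance, hands off: %.3f\n", sumSq1 / n);
    std::printf("variance, adjusting every time: %.3f\n", sumSq2 / n);
    // Expect roughly 1.0 vs 2.0: the adjustments themselves add variation.
    return 0;
}
```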


This goes back to Deming and the funnel experiment: saying, “This hole in the engine block is larger than the center line of the spec, so let's tweak the machine and try to make the next one a little bit smaller.” That amplifies variation in a bad way. Back to field goals for a minute. A friend of mine who does a lot of Lean Six Sigma work, Brion Hurley, was a guest on My Favorite Mistake, and I could bring him into this at some point. He was a placekicker at the University of Iowa on their football team.

He understands process capability very well.

This is why I would be curious to pull him into the discussion, because a field goal anywhere between those goalposts, even if it hits the goalpost and still goes through, is worth the same three points. There's that go-no-go, good-or-not-good binary. The result of the kick was in spec or not. But back to your point about source inspection, I'd be curious to see if this analogy is a way of thinking through it. Source inspection on the kicker might look at their mechanics and the leg angle, or different things, saying, “If the leg angle is more consistent, then the kicks are more likely to be good.”

Some of the things that you would call a source inspection don't look a lot like an inspection. For example, if you have a kicker and he has practiced and practiced, what he knows is that he lines up his shot, he steps back three steps, but then he'll step sideways two steps. The two steps sideways are his effort to get the right angle of approach to get the ball to go where he wants it to go. He's built into the process a way of determining where the starting place ought to be.

From my perspective, that borders on a source inspection. If they said, “That's a source inspection,” I would be hard-pressed to say, “No, it's not.” I think it's close. With source inspection, sometimes inspection's not a great word for it because in some ways, it's just good management, and it doesn't have to be sophisticated. It doesn't have to be based on some computer aid or anything. It could be always taking two steps to the right. When I go to the airport, I put my parking ticket in my wallet. The reason I do that is it means I can't leave my car without knowing I've got my wallet in my pocket. If you get into the airport and you don't have your wallet, you're not going anywhere. It's procedural, so calling it an inspection is probably stretching things a little. I think that's what Shingo had in mind.

It sounds more like, in TPS-speak, going to the Gemba, understanding your process, and the connection between process and results.

Also, mistake-proofing and particularly source inspection tend to be very idiosyncratic. It's precise to your particular process, which is why when people would pay me money to come and talk to them about mistake-proofing, they would say, “How do we mistake-proof this?” I'd say, “I don't know, because I don't have any of the detailed knowledge that's necessary to design a good mistake-proofing device.” That's why I wrote the book Mistake-Proofing the Design of Health Care Processes. I wanted to give what I knew to them because they had all the detailed knowledge to implement it.


Let me come back to healthcare in a minute because that's where the more meaningful, life-changing implications and applications of this are. To think about kicking for a minute: with those two steps, there's this question of how consistent those steps are. If the two steps are not always the same, how do you train yourself so that even if the wind is blowing hard, those two steps are the same size? That would probably be an indicator of quality.

That's a meta question to me. It's a question above the question. In their case, they don't get to take a measuring tape out on the field, although I don't know that that's ever been tested.

There was something that happened in an NFL game. What was illegal, and drew a penalty, was this: it was a wet field, and they brought a towel out, and the holder was trying to dry the part of the field where the kick was going to be placed. That was penalized. The towel was interfering with the field. A tape measure isn't changing the field.

That's conjecture. I'm not sure if it's ever been tried or not. I think every time he practices, one of the things he's practicing is how wide those steps are. Hopefully, he gets to the place where it's muscle memory. As long as he can get the game circumstances out of his head, which is the hard part of kicking anyway, he can take those two steps, he's ready to go, and he kicks it. Also, it may be that the variability of his steps is small enough that it doesn't impact his Cpk. His process capability is fine even if he takes a big step versus a small step because the goalposts are quite a ways apart.
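For readers who want the formula behind that Cpk aside: the capability index compares the distance from the process center to the nearest spec limit against the process spread, Cpk = min(USL − μ, μ − LSL) / 3σ. A toy calculation (my numbers, not John's) shows why wide “goalposts” forgive a somewhat sloppy step:

```cpp
// Toy Cpk calculation with invented numbers. Units could be feet of
// lateral error at the goalposts (NFL posts are 18.5 ft apart).
#include <algorithm>
#include <cstdio>

int main() {
    const double lsl = -9.25, usl = 9.25;  // "spec": half the post width each way
    const double mu = 0.5;                 // average kick slightly right of center
    const double sigma = 2.0;              // kick-to-kick standard deviation

    double cpk = std::min(usl - mu, mu - lsl) / (3.0 * sigma);
    std::printf("Cpk = %.2f\n", cpk);      // ~1.46: capable despite the variation
    // Wider "spec" limits or less variation both raise Cpk.
    return 0;
}
```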

You talk about muscle memory. I've probably only tried kicking a football once, and it was not in any game setting. It was being out on the field during a marching band rehearsal, because I was in the marching band for eight years in high school and college. There's a point here about the distance of your steps. It's 8 steps every 5 yards. That's 22.5 inches per step. You also want each of those steps to be the same because of that dimension of quality: how the band looks when you're marching, or how straight your lines are.

The ideal would be that every step is 22.5 inches so a line continually looks perfectly aligned. The quality check is that you can peek down, although you're not supposed to move your head down. There's another visual thing, but if the ball of your foot is hitting that yard line, which is thankfully marked off every 5 yards, you can gauge how you're doing. Now, you don't want to take a bunch of small steps and make up for it with one big step at the end, but that muscle memory becomes quite good, to where you can test it by closing your eyes and marching 10 yards. If your right foot ends up right on that yard line, then you've got it dialed in.

It's remarkable what the human body can do very consistently, and yet that's also the cause of the problem with mistake-proofing. One of the reasons mistake-proofing is hard is because people don't make mistakes very often, but when they do, it can be catastrophic. That puts it outside the realm of normal statistics because statistics are not good at rare events, and human mistakes are all about rare events. Human beings as a process are very reliable; we might err only 1 in 1,000 or 1 in 10,000 times. We get things right all the time. It's just that the 1 in 1,000 times is a problem. That's why when you design your mistake-proofing device, you're thinking about those kinds of issues, triggering when they have happened, and stopping the process right then.


I'm going to put the football field behind us now. Let's think a little bit more about the operating room, even before we get into definitions here. I think about healthcare rare events. Let's say, you're the patient and the surgeon operates on the wrong side or the wrong site. I don't know the exact numbers, but that happens very rarely and we're not going to fall back on, “On average, they're getting it right,” because the impact when they don't is huge.

Nor is it subject to any kind of normal statistical process control.

There's not a bell curve of how wrong are they. It's either it's right or wrong. Do you have thoughts or experiences around this? I realize this is probably a very complex thing to try to error-proof, but if the problem statement is that rare mistake of the wrong side or wrong site, what would part of your thought process be about how to try to error-proof that?

I think a lot has been done using something they call the universal protocol, which is to sign your site. If you're having surgery on your left arm, they should be explaining the procedure to you well enough and then asking you to write your name where the cut is going to occur. I've also heard people say that they write “no” on the other arm. That doesn't help at all, because then if they drape you for an appendectomy, you're out of luck. It presumes the surgery is on one of the two arms, and sometimes the wrong site is an entirely wrong surgery.

It's not bilateral. It's different. It's the wrong surgery.

Over time, if the surgeon knows, “I'm only going to cut if I'm cutting through the person's signature,” that's a pretty decent signal. When you add to that the timeout where the entire surgical team stops for a minute and says, “Let's all agree about what we're doing here. We're operating on the left arm below the elbow and we're doing this procedure. Here's how it's going to work. Here's about how long it's going to last.” Even things like, do they introduce each other around the room so that everybody knows everybody else or at least has some sense of who they are. They then all look at the site and there's a signature there. That's a step in the right direction. Is it a perfect mistake-proofing device? I have yet to figure out a perfect mistake-proofing device for that particular application.

It seems like a lot of that is dependent on our ability or our willingness to follow the “universal” protocol. Do we have data on how often that's occurring or not occurring?

There probably is data, but I don't have that data.

My little bit of exposure to operating rooms leaves me with a question: where's that data coming from? It's certainly not being universally tracked in a reliable way.

My wife was in for some surgery, and I asked the nurse about the timeout procedure, and she said, “They always clean the room thoroughly between operations.” I'm like, “That's terrible.” That is not the answer. In this particular case, it was internal surgery, a gynecological procedure, so there was no external incision. I asked the anesthesiologist, “How does the universal protocol work in a case like this?” I forget what he said, but he totally whiffed on the answer.

He came nowhere close to understanding that I was asking, “In a case where the site isn't an external incision, what do you do to make sure you're doing the right operation?” He had no idea. One of my students in an evening MBA class was the chief operating officer of this hospital. I sat down in the waiting room, typed out an email, and sent it to him, saying, “I want this date- and time-stamped: you've got these issues, and it's making me nervous.”

The surgery went off fine. There were no problems, but my view was that I wanted to be on record that this was not what I had in mind. I ended up on the quality board for that hospital years later. I recounted the story, and they said, “You were the guy.” To their credit, their quality culture was such that this story still remained in active memory for the people I was talking to 5 or 10 years later, and that I admire.

You touch on culture and behavior. I'm sure there are elements of psychology. This becomes a fairly complex thing as opposed to the physical size of a part that we were cutting back in a factory. To recap, I've seen it go both ways. A friend of mine, somebody I used to work with who was a Six Sigma Black Belt, sent a picture. Her husband had a broken collarbone, and they had written “no” on the unbroken collarbone.

It was obviously painful to him; you could visually see which one was broken. They thought, “Thank you for following the protocol,” because you don't want to be in a situation of, “We don't need it this time.” But to your point, you could write “no” on an almost infinite number of body parts instead of marking the one spot.

At the start of all of this, 1 in 4 orthopedic surgeons would have a wrong site surgery during their career. That's rare, but not nearly rare enough.

I think of it as the so-called universal protocol. Universal is ideal.

It ought to be done every time.

Hopefully, it's the right form of it because there would be variation around you signing the spot versus the doctor signing it. Now, if I'm having back surgery, I probably can't write on my lower back, but I can see where it would introduce opportunities for error if the doctor is signing it. Let's say you've already been given some medication that makes you drowsy. If the X-ray has been flipped and they marked the left arm and you're barely aware of it, it could lead to a mistake.

That would not be the universal protocol. In some ways, it's far better to have a loved one. It's like you would say to your wife, “I can point to it, I just can't write my name there. Put it right here.” To me, that would be better.

The universal protocol is supposed to help prevent one form of this. There's this terminology that, again, is ideal or optimistic: never events. I refer to these as so-called never events because that's the goal. It's not the reality.

It's like zero defects.

I'm wearing a hat that says “Zero,” which is a reference to zero harm from the Patient Safety Movement Foundation. John noticed this when we got on and I was going to ask your thoughts on this idea of aiming for zero and talking about zero.

It's the only way to go. It makes perfect sense to aim for zero in all of these cases. If a customer gets a defect, that's a problem. One defect is a problem so zero should be the target. Getting there, it all depends on what environment you're working in and what kind of knobs, tools, and dials you can turn to make things better. Back in the old days, I guess it was Crosby who said that quality is free. His argument was that the cost of preventing defects was always less than the cost of the defects. Through the years, people have stopped arguing about it.


Some of my classmates in grad school at MIT had backgrounds in manufacturing, and we ended up arguing with a microeconomics professor who was still teaching this idea of optimal quality levels. The assumed trade-off is that better quality costs more, and at some point, it's not worth it. We were like, “That trade-off isn't true.” I remember he got to a point where he was like, “You all need to shut up so I can finish my lecture.” It was a little more polite than that. We did talk about it after the fact because we were hoping to educate him about what we were seeing in industry, and he was still pretty upset that we were derailing his lecture.

The issue is that if you look at it from a statistical perspective, if you are on a normal curve with infinite tails, it's hard to get to zero. It's also hard if that infinite tail is driving the prevention and appraisal parts of the old concept of the cost of quality. But then you look at mistake-proofing and you say, “Can we virtually eliminate a mistake at a finite cost that's reasonable and not too expensive?”

The answer is, of course, you can, and we've got lots of examples where people have done things and the errors have gone away. Until you have a different process, that error is not going to occur again. It's done using a pin, a piece of steel sticking up, or a little sensor. With the sensor, you do add some appraisal costs because you've got to keep it calibrated, but as long as you keep it calibrated, it's going to do what it's supposed to do. I'm of a mixed mind on zero defects as a theoretical matter, but as a practical everyday matter, it's clearly the right target.

In relatively simple applications, you really can reach zero defects with a simple go-no-go gauge. As parts flow down a line, you could say that unless the error-proofing device failed somehow, it would be perfect mistake-proofing. However, you get into more complex systems. Coming back to Toyota: as much as they talk about quality at the source, going back to Shingo and a long practice of mistake-proofing, when I last had the opportunity to visit a Toyota plant in Japan in 2019, they had a final inspection. That might be surprising to some people: “I thought inspection was waste.” That's if we're going to be dogmatic, and Toyota is decidedly not dogmatic on things like this.

I've often thought that it would be an interesting research paper to write, which is, when is the optimal time to stop inspecting? It seems to be a challenging question to answer. You have to make some presumptions about the process that are difficult to make.

It's because you could keep doing the final inspection well beyond the time it's necessary. This is a story I've told before in another episode. In 1995 at General Motors, there was a design intent for the engine factory that quality was going to be built in. They used to do a hot test at the end of engine assembly: they fire up the engine and make sure it's working fine.

There was this dogmatic view of, “We don't need that anymore because we're building quality in.” Unfortunately, the first hot test was now happening after the engine was installed in a Cadillac. At the end of the Cadillac assembly line, they would fire up the engine to drive it off the line, and if some hole back in engine block machining had been bored too big, that engine would literally have black smoke coming out of it. All the potentially defective parts between there and machining could have been 2,000 bad parts, plus hundreds of assembled engines already installed in cars. That was the mistake of ending the inspection too soon.

It's a presumption that entropy doesn't happen. Entropy is one of those laws you can't get away from. I've got another example of that. This would've been back in the '90s, so it's a long time ago, but Frito-Lay had their tortilla chip production down to a science. They were doing everything associated with the science of getting that chip right, and they started losing blind taste tests to Eagle, which was a brand back then.


The Eagle brand was winning the blind taste tests. Now, if it was branded, people still liked Frito-Lay better, but if it was unbranded and they couldn't tell, they liked the Eagle brand better. Frito-Lay, to their credit, said, “We can't have this. This is not acceptable.” They realized that no one was tasting the chips, and they found out that you need to have a person on the line producing chips and tasting the chips. It's that final inspection.

They hired an artist, and they made a model chip that was a little raw, one that was just right, and one that was cooked a little too much. They were interested in the amount of blister on the chip. It was amazing stuff. We would never care about that, but they cared about it, and that's why the chips turned out good. They would hold the chip up, look at it, and say, “Which model does it look like?”

They would eat the chip, and then they had a bag of gold-standard chips. They would eat the gold-standard chip and say, “Does it taste like that?” It's like, “Those taste about the same.” They had a whole procedure to keep a gold-standard bag in stock because you had to rotate it through. You couldn't let it sit there for two weeks because then it wasn't a gold-standard chip anymore.

It's stale now. That raises the question of deciding what the gold standard is, and how accurate they are in deciding that.

Through time as that gold standard gets messed up, you could have problems, but they did their best. At some level, it still depends on someone eating the chip and going, “That tastes okay.” I'm sure there are people with more refined palates than mine who could do that well.

That's where it comes back to how we define quality. We could be looking at process measures or variables that don't matter to the customer. We might think we're dialing it in on quality in a way that is pretty meaningless to the business.

Frito-Lay wanted 15% broken chips or less. Once my kid picks up the bag off the shelf and jams it in the cart, that metric is out the window.

It sounds like the old story that may or may not be true of the Japanese supplier getting an order from the American automaker. The punchline of the story in this case would be, “Here's your bags full of broken chips that you wanted.”

We separated out the 15%. The rest of them are fine. That goes back to the name Taguchi and the whole idea that any amount of variance should be reduced. I haven't heard his name in forever, but I think his idea is still true: if you can get variance as low as possible, you're better off, even if the product functions just as well with higher variance.
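Taguchi's idea is usually written as the quadratic loss function L(y) = k(y − m)², where m is the target and k scales the cost. Here's a toy illustration (invented numbers, not from the episode) of why a barely-in-spec part carries a very different cost than a centered one:

```cpp
// Toy Taguchi loss function: loss grows with the square of the distance
// from target, so "barely in spec" is far worse than "centered."
#include <cstdio>

// L(y) = k * (y - target)^2, with k a hypothetical cost-scaling constant.
double taguchiLoss(double y, double target, double k) {
    double d = y - target;
    return k * d * d;
}

int main() {
    const double target = 10.00;   // nominal dimension (mm), invented
    const double k = 2000.0;       // $ per mm^2, invented for scale
    std::printf("centered (10.00):       $%.2f\n", taguchiLoss(10.00, target, k));
    std::printf("off-center (10.05):     $%.2f\n", taguchiLoss(10.05, target, k));
    std::printf("barely in spec (10.10): $%.2f\n", taguchiLoss(10.10, target, k));
    // Go/no-go thinking would call all three of these parts "good."
    return 0;
}
```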

This is the Taguchi loss function. I drew a chart like this once where a Taguchi football game nowadays would have some laser measurement, and the points on a field goal would be determined by how close it was to the center of the goalposts. You would get maybe four points for a kick that was perfectly centered, all the way down to a half-point if you just barely made it inside the upright.

You can have a little post sticking up and whichever one you knocked off as it went through.

I jumped to technology. There could be a bunch of pool noodles or something going across.

This is instructive in terms of mistake-proofing because it's so easy to think you've got to have a laser to do it, and you don't. Sometimes it's a pin on a die that keeps you from putting the part in backward. Some of the best mistake-proofing devices I've ever seen cost $25 or less. In Shingo's book, about a quarter of his examples were $25 or less, which would be maybe $50 nowadays because of inflation, and fully half of them were $100 or less back then. That would be $200 or $250 now, but would anybody pay $250 to eliminate a mode of defect in their factory? I'm guessing lots of people would sign up for that.

Let's come back to the thing that we teased early on where you're saying that compared to 2010, it's easier to mistake-proof things now. Is that because of technology? Why is that?

It's absolutely because of technology, and I was thinking the heyday for mistake-proofing was '95 to 2000. That's when there were conferences about poka-yoke. There's no conference about poka-yoke anymore, and yet now, more than ever, is the time to do it. With the maker movement came this thing called an Arduino. It's a little programmable logic controller that costs $35. Once you've got it sorted out, you can buy an Arduino mini for $10. You can hook it to a limit switch, a light sensor, or any number of different kinds of sensors, like a Hall effect sensor that detects magnetic fields.

I've got undergrads in their first semester doing prototypes where they can do basic programmable-logic-controller mistake-proofing. You couldn't do that in 2000. Back then, you had to figure out how to do it on an ABB, a Square D, or one of these industrial programmable logic controllers that cost hundreds of dollars. Now, it's a $10 thing. It's so easy.
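To show how little code such a device takes, here is a hypothetical Arduino sketch in the spirit of what John describes: a limit switch acts as the source inspection, and the cycle can't start unless the part is seated. The pin numbers and wiring are assumptions for illustration, not a device from the episode:

```cpp
// Hypothetical poka-yoke interlock: the machine cycle is blocked unless a
// limit switch confirms the part is seated. Pin assignments are invented.
const int LIMIT_SWITCH_PIN = 2;   // closes (reads LOW) when the part is seated
const int CYCLE_START_PIN  = 3;   // operator's start button (LOW when pressed)
const int BUZZER_PIN       = 5;   // alerts the operator to a bad setup
const int START_OUTPUT_PIN = 6;   // signals the machine that it may run

void setup() {
  pinMode(LIMIT_SWITCH_PIN, INPUT_PULLUP);
  pinMode(CYCLE_START_PIN, INPUT_PULLUP);
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(START_OUTPUT_PIN, OUTPUT);
}

void loop() {
  bool partSeated   = (digitalRead(LIMIT_SWITCH_PIN) == LOW);
  bool startPressed = (digitalRead(CYCLE_START_PIN) == LOW);

  if (startPressed && !partSeated) {
    // Source inspection failed: warn the operator and block the cycle.
    digitalWrite(BUZZER_PIN, HIGH);
    digitalWrite(START_OUTPUT_PIN, LOW);
  } else {
    digitalWrite(BUZZER_PIN, LOW);
    // The machine can only run when the part is seated correctly.
    digitalWrite(START_OUTPUT_PIN, (startPressed && partSeated) ? HIGH : LOW);
  }
}
```

Like the devices John mentions, the interlock is invisible when everything is right and only speaks up when the pre-condition for a good part is missing.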

It makes me wonder. Back in 1999 and 2000, when I was working at Dell Computer, one part of the process was what they call a pick-to-light line. Based on the order for those computers, people would pick the different parts, like the hard drive, fan, and different things that would go into a kit that was then sent down to the assembly station. I'm trying to remember exactly how it worked. One version of pick-to-light would be lights that say, “Here are the ones you grab.” I'm pretty sure they had some error-proofing where if you tried to grab the wrong one, there would be some indicator, a light, a buzzer, or something.

There was a company called SpeasTech back in the day that had a light curtain. It would cycle around, and if you stuck your hand in the right spot, it would turn the light off. If you stuck your hand in anywhere else, it would say, “That's the wrong spot,” and a buzzer would go off. They had good luck with it. Their pick errors went from 200 parts per million down to two. By the way, that's the hardest mile in all of this: going from 200 parts per million, which is fantastic, to two, which is world-class.

I bet that technology would be a lot cheaper now, is what you're saying.

I've got undergrads who could do that all day long. To put this in perspective, I had an undergrad who, for his senior project, created a vision system to look at a box with fuses in it: automotive fuses with the number printed on top. He had five of those in a row that were going into a Kubota tractor. They had this wiring harness, and the fuses would get put into the wrong place.

He created, on his own, a vision system to look at those through optical character recognition and figure out whether they were in the right spot based on the spec, and to alert the operator if they were in the wrong place. It then took a picture so that if the company ever came back to the vendor and said, “The fuses were in the wrong spot,” they could go, “Here's the picture. They came out of the factory correct.” He had both quality at the source and nice audit documentation.

Going back to Dell in 1999: the impact was pretty trivial if there was a mistake in the pick-to-light line, because at the assembly station there was further error-proofing. If I remember right, pretty much everything was barcoded and scanned against the order. If there was a part missing or the wrong part, that would get caught. That would lead to a little bit of inefficiency, but it would probably protect the customer.

You would want to prevent the waste of having to swap out or chase down the right part. Compare that to the lack of error-proofing in a process as critical as gathering the right instruments for a surgery, where a problem is supposed to be detected in advance. The defect of, let's say, a missing instrument could be discovered at a point where it delays a surgical procedure, which could have an impact on the patient, depending on the situation. They could find the error once the patient is under anesthesia, which is bad because now they're under anesthesia longer, or the error could be found once they've already been cut open, and that's even worse. I hate to say it, but there's a lack of error-proofing in that operation.

All of a sudden, your inventory theory becomes part of your quality management system. The fix for that is to have pre-packaged kits.

Sometimes, you can do that depending on the procedure.

However, not all procedures are that way and not all doctors like what the kit has in it.

Even if the kit was perfect when it first came in, the challenge at the end of the procedure is, if you've opened up four trays, who's guaranteeing that the right instruments go back into the right tray? That's part of the waste in the work of processing and then sterilizing those instruments. It's sorting out the mess that was created back upstream in the operating room.

In many cases, they're designed with disposable products.

That's true sometimes too.

It's because then you've got the hazardous waste that comes from it. All of these are trade-offs.

There are some other things that are hard to error-proof in that context, where one defect is an instrument arriving back in the operating room with what they politely call bioburden. It's supposed to be inspected for. It's certainly not error-proofed. Back to your idea of source inspection, it's an interesting value stream in that it's circular, where you could go back and look at the process. I was part of a team that did this, working back from the operating room.

If the instruments aren't properly sprayed with foam, then the odds of properly cleaning and sterilizing them in the sterile processing department go down dramatically. It comes back to like, “Your process helps us help you, but how do you do those process checks rather than doing an inspection at the end?”

One of the things I want your readers to understand is that mistake-proofing is not easy. Designing good mistake-proofing devices is the pinnacle of design. The idea is that you make something that is essentially invisible until it's needed. Back in the day, when you had a 3.5-inch disk and you put it in the machine correctly, no problem. If you put it in upside down, it stopped you halfway in. It only notified you when something was wrong. The rest of the time, you may not even have known it was there. I have a table saw like that. It's a SawStop table saw.


Until I touch it with my finger, it's going to be just fine. I'm not going to know any different from any other saw. As soon as I touch the spinning blade with my finger, it snaps out of the way. I have a little cut on my finger; I put a Band-Aid on it, and I'm done. People said, “That's no good.” Other manufacturers didn't want to use the technology, and now it's the top-selling saw in the market.

I wonder if they're thinking, “Clearly, that's user error and you shouldn't put your finger or any body part anywhere near the saw.” It's back to some psychology of blaming the user or blaming them for being human because we all get inattentive or sloppy at times.

I think what was going on was that the owner of SawStop, the inventor, is a patent lawyer. He had patented it, and he was trying to license it. He didn't want to build saws at all, but the manufacturers could see that they were going to be paying him royalties for the rest of their lives, and they didn't want to go down that road. He was essentially forced to build saws, so he went and found a contract manufacturer. They built a saw. It sold like hotcakes, and to this day, all of the other power tool makers are like, “No, don't do that.”

It is a competitive disadvantage to them.

Because he knows how to play the game, he went to the Consumer Product Safety Commission and tried to get rulings that would force everyone else to buy his module for their saws. As far as I know, that didn't end up succeeding.

A quick detour. You mentioned the Consumer Product Safety Commission. We have a National Transportation Safety Board. There are people advocating that we need what they're calling a National Patient Safety Board. It's interesting that there's a societal or legal role in place for commissions or boards like that, but we don't have that in healthcare.

One of the things that we have found out about mistakes is that people don't like to own up to them. In particular, they don't like to own up to them if they're going to be fired as a result, sanctioned, punished, or what have you. There's this whole other realm called Just Culture. It is where you figure out a way to do a full and thorough investigation without people feeling threatened, and yet you want accountability. If someone has been egregious, those people should be held accountable.


Sorting out how to draw that line has been something there's been a lot of talk about. The National Transportation Safety Board does that: if you see another plane doing something unsafe, or you're a pilot in a plane that's doing something unsafe, you can report it. I'm trying to remember how that works, but there's an offshoot run within NASA. I think it may be connected to the NTSB.

There's no-fault reporting and there's fault reporting. The FAA will come after you, but the NTSB won't, or something like that. I'm not sure I've got the agencies correct there, but the idea is that you can get a full report of near misses and actual events and learn from those without having people take actions that are against their own self-interest.

Back in episode 112, I talked to Naida Grunden, who had done a lot of Lean work. She'd written a book about Lean in healthcare. Her husband was a pilot, and they were personal friends with Captain Sullenberger. She was a big advocate for the reporting mechanisms, learning, and safety culture that are sometimes brought over from aviation. If we think back to Sully in that plane, with his co-pilot and all the passengers, you can't really error-proof against a bird strike unless you somehow have a design that's magically robust against it.

The moral of the story there is that you cannot mistake-proof everything. That goes back to the psychology of errors. Things that are slips, where your intent is correct and your execution is flawed, you can mistake-proof against. However, for anything that takes deliberation or judgment, it becomes very difficult to mistake-proof. I'm constantly having people say, “We need to mistake-proof that,” and it's like, “I can't help you.”

When you talk about near misses and that opportunity: Paul O'Neill, when he was CEO at Alcoa, certainly advocated for zero employee harm. Part of that was recognizing and creating the culture that made it safe, if not required, for people to speak up about risks, near misses, and minor injuries. That was the pathway to preventing fatalities.

Those are pretty much all cultural things. What we see is that any place where you have that culture, it helps. All of the work on crew resource management in aviation has been very helpful in terms of reducing problems. Once a warning light went off in a plane or an alarm sounded in a cockpit, the chances of fixing it properly were about 50/50, or at least that's what it was when they started the process. A lot of times, everyone was so busy working on the problem that no one flew the plane.

There were cognitive and cultural issues there.

Nowadays, when you have a warning light that goes off, someone flies the plane, and that's all they do. The other people work on the issues. That's why Jeff Skiles was busy going through checklists on the miracle on the Hudson and Sully was flying the plane.

He was the co-pilot. I'm pretty sure in the audio recordings, you hear this procedural, “My airplane. Your airplane.” It comes back to culture. There are the technical aspects of mistake-proofing, but a lot of it is just culture. Just Culture is an amazing framework. I think of it as a healthcare framework that others could adopt in other industries. There are two different situations. There's the mistake of giving the patient the wrong medication or the wrong dose that leads to death.

Unfortunately, society still blames, punishes, or even sometimes prosecutes. Now, there are cases you clearly should prosecute, where people are intentionally murdering patients, supposedly putting them out of their misery. How would you mistake-proof against that, and what checks and balances could you have to make sure somebody is not subtly murdering patients in a way that's hard to detect?

I'm not going to be Perry Mason on this or Murder, She Wrote. I think that you use every tool in the toolbox to try and sort this stuff out and if sleuthing is required, that is something that we've done for a long time. If someone is doing that, they should be found and punished and perhaps prosecuted. The just culture stuff works for me in the sense that if you didn't intend to do harm, you look at it differently than if you had a reason to believe you're doing harm.

My example is if you drive to work, you're driving the speed limit, and you get in a wreck, at some level, it's not your fault. You can drive 10 or 15 miles over the speed limit. You know you're breaking the law, but you didn't think you were going to do harm. Probably understanding what was going on in your life that made you want to do that or the fact that everybody does that changes the equation for me. Now, if you drive 100 miles an hour to work, that's not a reasonable thing to do.

Also, if you know somebody is drunk and you let them drive anyway, or you're a bar that has overserved somebody knowing they're going to be driving, there are different levels of responsibility there. A lot of times, we're quick to oversimplify it and, in a lot of these healthcare error cases, find a scapegoat. We found the person who screwed up, and they've been fired. It's not a problem anymore.

Even then, an instance of intentional harm is pretty rare. The surgeon who's known to be so incompetent that they're harming people, while others aren't speaking up about it, becomes a cultural issue. Now there starts to be some culpability on the leaders of the organization. If you knew there was a problem and you didn't address it, or you stifled the communication channels that would've informed you, it becomes more than an individual problem.

It would be nice if we could talk to the local surgical nurses, because when they say, “Don't use that doctor,” they know what they're talking about. They're right there with them. They know what's going on. I tend to trust them a whole lot. It'd be great if, when all the surgical nurses say, “I'd rather not work with that doctor,” that useful information could get out. I don't know how you culturally allow that, but it'd be great if we could.

Sometimes it comes out after the fact and they are reporting about something. It was like, “It was known. We spoke up. You didn't listen.”

Often, organizations will need compelling evidence to end someone's career in their hospital. I understand that but in those cases where the evidence is all there and no one took action, it's troubling.

There's an expression that's been written about and even dramatized a little bit. It's Dr. HODAD. Have you heard of this slang?

I have not heard that.

HODAD is an acronym that stands for Hands Of Death And Destruction. There's this balance, and I think Just Culture helps us figure out where to draw the line. I'm all about saying most errors are caused by the system, but there are some outliers who are either just grossly incompetent or intentionally causing harm. There's a whole podcast and a TV series that I can't bring myself to watch or listen to, called Dr. Death, about a surgeon in Dallas who was maiming people, and it went on for a long time. If that's the rare exception, we need to make sure it's addressed differently than “systemic errors” that could happen to any surgeon.

It's important when you do root cause analysis and you're looking at what's going wrong, you almost never take a negative descriptor of a person as the root cause. I also think that when you've ruled everything else out and you've got the data to show it, it's unconscionable not to act on it.

One thing that frustrates me, and I'd love to hear your reaction to it, is that we might be making baby steps toward not blaming an individual and throwing them under the bus. However, I've heard this progression that I think only goes part way where an organization will say, “It was a systemic problem. We're not firing someone. It was a human error but there's nothing you can do about human error.” I'm like, “That shouldn't be the end of the story.”

You're talking to the wrong person about that. Mistake-proofing is all about that. If it's a judgment call, you should have other people looking at that judgment and you should be rewarding differing opinions. One of the problems is if everyone's responsible, nobody's responsible. The more people you have to look at it, the less responsibility any one of them takes.


What you need to do is see if you can figure out some structure that will keep the accountability on each and every individual. You can make it blind so they don't know what other people have said, and use all kinds of other things that will lead people to take it very seriously; that's worthwhile. Having five nurses check something is not a recipe for success, but with judgment, redundancy is the only answer because the normal mistake-proofing process is not very good at judgment.

There are two levels of possible mistakes, and I will point people back to episode 186 of My Favorite Mistake, where I do get John's thoughts on definitions of mistakes as bad decisions versus slip-ups. If I'm going in for lower back surgery and they cut in the wrong place, that would be a slip-up. They didn't intend to cut there. But then there are diagnostic decisions. A few years ago, I went to a surgeon when I had a badly extruded disc that was pressing against a nerve. It was awful. The first surgeon said, in his judgment and his professional experience, “You need surgery immediately, or your foot and leg are going to be numb for the rest of your life.”

It's not because I distrusted him, but I ended up going for a second opinion because, long story short, he said, “If you have the surgery, which you need, you can't travel for six weeks.” I'm like, “If I can't travel for six weeks, I need to have that surgery in Orlando, because I need to be there with my wife at the time if I can't travel for six weeks.”

The surgeon in Orlando had a different judgment. He was younger and had newer education. He showed me the journal articles that said, “Outcomes are better if you wait and let the body try to heal itself first.” There's a difference between a decision that, in another surgeon's judgment, isn't grounded in science, versus doing the right thing the wrong way. That was a long-winded way of trying to compare those.

Helping people not make errors in surgery is easier than diagnosing the right surgery.

Preventing execution errors is easier.

Execution errors are where mistake-proofing thrives. Deliberation and decisions in unstructured problems are where it's more difficult. We're seeing this move toward evidence-based medicine, and I think that's in the right direction. Even there, you're codifying statistics to have them supplant personal judgment. A lot of times, that makes sense in terms of mistake-proofing personal judgment, but you'll always have those counterexamples.


Even a few years ago, Dr. Brent James, who is considered one of the leaders in the modern quality and patient safety movement, said in a talk I saw, “For all the talk of evidence-based medicine, it probably applies in about 35% of cases across medicine.” There are some common, well-known situations, like a child with certain illness symptoms such as an ear infection, where there is an evidence-based best practice for treatment. Me having this weird, mysterious lower back pain? Not so straightforward.

Also, the Apgar score has changed medicine more than any other thing.

What is that score?

The Apgar score is the score you give a child when they're born. There's a rubric, a list of things where you look at different aspects of how the child was born, and they get a rating. The better the rating, in some sense, the healthier the child. The practice of giving birth and the medicine around it have changed dramatically since the '60s. It's all because we had good metrics of what the outcome looked like. It probably has some side effects; almost any mistake-proofing you do will have some side effects. For example, a side effect in childbirth is that there are so many more C-sections than there were before.

Some hospitals are working to reduce that.

However, that was driven by the fact that if you used a C-section, the Apgar score tended to be higher. Any chance that something was going to impact the baby negatively gave that much more impetus to not have a natural vaginal birth. This stuff is all mixed together. We're trying to port things over from the Toyota production system and Shigeo Shingo to an environment that in a lot of ways is very different.

Part of what you point to is the side effects of metrics, targets, and rewards. If cardiologists are being ranked, rated, and compensated based on post-op mortality rates, there's this dynamic where they might choose not to take on the sickest patients, and that's a bad distortion of what the care should have been.

This is why I'm a little skeptical. There are health grades and things like that where you look up your doctor and say, “Is this doctor great or not great?” Some of the best doctors may be those who only do the hardest cases; lots of their patients die, but far fewer than expected. I'm not sure that in trying to create transparency, we've got it right yet.

As another professor from the statistics and quality fields, Dr. Donald Wheeler, would say, “Statistics without context have no meaning.” If you look at mortality rates across hospitals, they could be very misleading. There are mortality ratios of actual versus expected deaths, based on smarter people than me figuring that out. We have to be careful with that.

As we wrap up here, again, we've been talking to John Grout from Berry College. You can check out his website, MistakeProofing.com. Again, that PDF book that you can either get online or contact John to get is Mistake-Proofing the Design of Health Care Processes. There is one last question I wanted to ask you, John. Looking at a book like that, or getting an education on mistake-proofing, what's the benefit of seeing examples to, in some cases, copy, versus developing a way of thinking and a process for developing mistake-proofing?

There was a guy at the VA who, when we were talking about this book, said, “Is it a catalog or is it a catalyst?” It's both. On the one hand, examples are a catalog where you say, “I'll take one of those.” But there are a lot of other cases where you'll say, “They did that in that industry. In my industry, it would look like this,” and it would be something entirely different. I hope that it is both a catalog and a catalyst, and that you design it carefully and vet it thoroughly. It may have side effects, but if it does, those side effects may be far less than the side effects of not improving the system.

That's very well said. We'll leave it at that as a final note. John, this has been a lot of fun. I hope people also enjoyed it. If you haven't already, go check out episode 186 of My Favorite Mistake. There's a lot to learn from John. I feel like we've only scratched the surface, but thank you for a great discussion and for sharing some of your knowledge here.

I've enjoyed it very much.

Thanks, again, to John Grout.



What do you think? Please scroll down (or click) to post a comment. Or please share the post with your thoughts on LinkedIn – and follow me or connect with me there.

Did you like this post? Make sure you don't miss a post or podcast — Subscribe to get notified about posts via email daily or weekly.


Check out my latest book, The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation:

Mark Graban
Mark Graban is an internationally-recognized consultant, author, and professional speaker, and podcaster with experience in healthcare, manufacturing, and startups. Mark's new book is The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. He is also the author of Measures of Success: React Less, Lead Better, Improve More, the Shingo Award-winning books Lean Hospitals and Healthcare Kaizen, and the anthology Practicing Lean. Mark is also a Senior Advisor to the technology company KaiNexus.
