“Bowling Charts” Are Not State-of-the-Art Management [LinkedIn Live Recording]


Here is a LinkedIn Live event I did on Thursday at 11 am ET, with my co-host, Elisabeth Swan.

“Bowling Charts” Are Not State-of-the-Art Management

YouTube Recording:


What are “bowling charts” in the context of “Lean Management” (or just management more broadly)? Why are “Process Behavior Charts” a superior method that 1) tells us more at a glance and 2) helps leaders make better decisions?

I will share my thoughts and experiences regarding these methods. I'm looking forward to a lively debate and discussion!

This will provide a glimpse into content in my book Measures of Success: React Less, Lead Better, Improve More.


And thanks to Ron Pereira for having me on the Gemba Academy podcast again to preview the workshop we had offered (but had to postpone because we didn't have enough registrations).


Summary of the Video's Content:

Reconsidering Your Approach to Performance Data
For years, organizations have relied on “bowling charts,” those static grids of monthly data points highlighted in red and green, hoping they would guide better decision-making. In practice, these charts often do more harm than good. They prompt hasty reactions, drive managers toward blaming individuals rather than understanding systems, and obscure meaningful trends hiding in plain sight.

Why Bowling Charts Fall Short

  • They reduce understanding to a simplistic red/green view, obscuring subtle shifts or patterns.
  • Leaders often waste time chasing random variation (“noise”) instead of seeking true “signals” that warrant action.
  • Without understanding the system's natural fluctuation, it's tough to differentiate normal ups and downs from meaningful changes in performance.

A Better Way: Process Behavior Charts
Instead of static grids, visualize data over time. Run charts and process behavior charts help teams see when a system genuinely changes, rather than just when it deviates slightly from an arbitrary target. With this approach, you stop asking, “Why is this month red?” and start asking, “Is this system stable or did something truly shift?”

What Makes This Approach Valuable?
Think of it as upgrading your organizational “sensors.” A process behavior chart not only flags true changes but also shows when the underlying system is just running as expected. You'll know when to investigate, when to adjust, and when to let stable processes be, ultimately saving time and improving outcomes.

Key Quotes

  • “Red and green don't tell the whole story.”
  • “Move beyond chasing noise; focus on what truly matters.”
  • “React less, lead better, improve more.”

Practical Steps to Get Started

  1. Begin by plotting your data as a simple run chart.
  2. Identify signals that matter: look for sustained shifts or points that break the expected pattern.
  3. Shift the conversation from blame and panic to learning and improvement.
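The first two steps above can be sketched in a few lines of Python. This is a minimal illustration with made-up monthly numbers; the median center line and the run-length heuristic are common run-chart conventions, not figures from the video:

```python
from statistics import median

# Twelve months of a metric, as they might appear in one row
# of a bowling chart (illustrative numbers, not real data).
monthly = [72, 68, 74, 70, 69, 75, 78, 80, 79, 82, 81, 84]

center = median(monthly)  # run charts typically use the median as the center line

# Mark each point as above (+), below (-), or on (0) the median.
sides = ['+' if v > center else '-' if v < center else '0' for v in monthly]

# A common run-chart heuristic: a "shift" is a run of several consecutive
# points (often 6-8, depending on the rule set) all on the same side.
def longest_run(sides):
    best = run = 1
    for prev, cur in zip(sides, sides[1:]):
        run = run + 1 if cur == prev and cur != '0' else 1
        best = max(best, run)
    return best

print(f"median = {center}, longest run on one side = {longest_run(sides)}")
```

With these numbers, the last six months all sit above the median, which under most rule sets is at or near the threshold for calling it a sustained shift rather than noise.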

Video Transcript:

Mark Graban:
I'm Mark Graban. I am the author, most recently, of a new book called The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. It's available now, but today we're going to talk about some things related to my previous book from four-ish years ago called Measures of Success: React Less, Lead Better, Improve More. Really happy to be joined by a good friend of mine and colleague in this world of continuous improvement, Elisabeth Swan. How are you?

Elisabeth Swan:
I am good, Mark Graban, and better for having been invited. Always a good time.

Mark Graban:
Yeah. So we're going to be talking about bowling charts and some alternatives that I think are closer to state-of-the-art, if not state-of-the-art. I don't know; these are empty phrases, "state-of-the-art," "world-class management practices." I don't know. I'm going to call it "better," and I'm going to try to demonstrate that in a couple of ways here today. And we're going to be able to do some chat and have some interaction. So hello, Hector from Mexico. If you want to put in the chat where you're coming from, that would be cool to see. And then we'll take your questions, and I'm going to ask your thoughts as we compare these methods for looking at performance measures and data and the decisions that we would make as leaders in the context of lean or Lean Six Sigma or what have you. But Elisabeth, tell everyone a little bit about yourself, your website, your podcast, your book.

Elisabeth Swan:
Well, that's a lot, Mark. Okay, so like Mark, I have decades in the business, over 30 years, and a lot of different industries, and I'm focused a lot now in healthcare, like Mark. I have been a coach and a consultant helping companies transform into problem-solving cultures. And my most recent book is Picture Yourself a Leader: Illustrated Micro Lessons for Navigating Change, looking at the people-centric side of problem solving. And, you know, the reality is that not all of us recognize that we are leaders, but people are watching us, and it might be not in our title but in the way people see us. And then the website that Mark is mentioning, the Just-in-Time Cafe, is a free learning nexus where we have regular podcasts and webinars, with the idea that a rising tide lifts all boats. Mark Graban has been a guest on both the webinars and the podcast. We just bring folks in to talk about the things, like Mark's book, that make a huge difference in problem solving.

Mark Graban:
Yeah. So I hope people go check out jitcafe.com and check out Elisabeth's book, Picture Yourself a Leader. We've got people here from England, Ohio, Indiana, Texas. We got a howdy from Houston, which seems like a totally appropriate way to send greetings. So I'm going to start with the meat of the content here in a minute, but I do want to mention my book Measures of Success: React Less, Lead Better, Improve More. As of yesterday, it is once again available on Apple Books, for those of you who prefer that ecosystem and format. It's available through Amazon as a paperback, hardcover, and Kindle book. And again, you can now get that. It's no longer an Amazon exclusive; you can get it again on Apple Books. All right. So, bowling charts. Elisabeth, are you a bowler? Would you call yourself that? Have you ever bowled?

Elisabeth Swan:
I have bowled. I would say that makes me a bowler. I would not say I'd put that in any kind of a, you know, resume. Yes.

Mark Graban:
Did you enjoy it? Or like, there's beer involved sometimes. So that may, or, you know, there.

Elisabeth Swan:
There is a place in New Orleans called Mid-City Rock and Bowl, and there are bands and people in costumes. And I would say those are some bowling highlights. Absolutely. I enjoyed Mid-City Rock and Bowl.

Mark Graban:
Wow. I grew up in Michigan, and bowling was, and maybe still is, popular. I haven't bowled much as an adult. I bowled in a league with some co-workers like 20 years ago, and I actually dug this out from the closet. I have a bowling ball. Look, it's multicolored, and I don't know if my thumb even fits. I don't know if I would want to use this today. But I have a bowling ball. I'm happy I didn't drop it on my toe. But, I mean, Elisabeth, you probably remember, when you bowl, maybe you don't want to pay attention to the score, but you keep score like it's a game, even if you're not being competitive, right?

Elisabeth Swan:
Yeah, absolutely. The score, whether you're super competitive or not, the, once there's something being captured, you're like, you pay some attention.

Mark Graban:
And, I mean, I'm old enough where we used to keep score by hand. You know, you had to actually draw on this thing that was projected like an overhead slide. Again, further dating myself. Nowadays the bowling alleys do this in a very automated way, usually. And actually, I dug up a picture of a time I went bowling, I don't know, seven or eight years ago. And I don't mean to brag, but this is one of the best games I ever bowled, a 235. For those of you who don't know bowling, the maximum score is a 300, a perfect game, if you roll twelve strikes. But this is the format for bowling now. We think about bowling charts in a workplace, and I'm going to share an example of that in a minute. For people who didn't know, I think that this is the origin and the parallel of seeing a grid of numbers on a wall. I'm sure you see this.

Elisabeth Swan:
Oh, yeah. The affectionate name I use is wall.

Mark Graban:
Of data, wall of data, wall of numbers. And like here, I mean, I went bowling by myself, which maybe sounds sad, maybe antisocial, but I had some free time. Usually, if you're bowling with four or five people, you would see a wall of data projected up on that screen of people's scores, and you can compare your progress and see how you're doing competing with each other. But, you know, we could maybe keep score like this in the context of lean management. I mean, I saw this the first time at General Motors in 1995, like an hour-by-hour chart for tracking production. And again, a list of numbers. We can look and see, well, the number on the right is lower than the number on the left. There was, like, no root cause to be found in this case of why there was a performance gap. But sometimes people show data this way, which might be helpful, maybe, for some kind of real-time problem identification. Have you seen this in use, Elisabeth?

Elisabeth Swan:
Oh, yeah, absolutely. I see this presented to me many times over the course of coaching somebody. I'm like, okay, these are good numbers to have.

Mark Graban:
They're good numbers. But then, like, what do we do with it, right? So, you know, it's in these numbers. They're visual. But then to make it visual management, I remember, and this was a problem at General Motors. Like, they had somebody hired in from a Toyota supplier who taught some of these methods. And GM management with that GM culture at the time, used these numbers to know who to yell and scream at. And like, that is nothing, not the intent of the Toyota production system and kind of a cautionary tale of using a quote, unquote lean tool in a different type of culture, yelling and blaming instead of constructive problem solving.

Elisabeth Swan:
Yeah, shame charts.

Mark Graban:
Shame charts. And then there's all kinds of stories, and I've blogged about this before, where leaders then feel pressured to fudge the numbers, which you could do if you were keeping score at the bowling alley by hand. The electronic scoring makes that a little harder to do. But if we think about different ways of visualizing performance, this kind of converted the grid of numbers into a chart. Like, maybe this is what an actual bowling chart would look like. The purple line is showing the ideal score frame by frame. And then you can see the yellow line. You know, I started off a little behind ideal, and then I was tracking pretty well, and then, you know, I trailed off. But I don't know how this would help me improve my bowling game. I think it's visual in a different way than the old grid of numbers would be.

Elisabeth Swan:
A little bit.

Mark Graban:
So when we're thinking about workplace metrics, corporate metrics, how do we make better decisions based on data? A list of numbers, a grid of numbers, or a chart? Grids of numbers kind of establish this approach from keeping score in a bowling match. I mean, I've seen a lot of metrics represented like this, like a list of numbers. It's sort of visual, but I don't know. Elisabeth, quick: what kind of trends do you see there?

Elisabeth Swan:
What's the list of numbers telling me? Numbers changing fast, Mark. And none of them are the same. I'd say, no, no numbers are the same.

Mark Graban:
Why was December lower than November?

Elisabeth Swan:
Quick quick.

Mark Graban:
I mean, that question gets asked a lot, you know, why was December lower than January? Do we have a quote-unquote trend? People might say those things, but I don't know if that's necessarily true. Sometimes we see something like this. Is this more like your wall of data? Is that what you called it?

Elisabeth Swan:
Yeah, wall of data.

Mark Graban:
Wall of data. And here's kind of an advanced format bowling chart where we see, in this case, like, six different metrics, we see a comparison between an actual and a target. And then for those who don't want to look and compare the number, there's some helpful color coding. We've got greens and we've got reds. How do a lot of workplaces react to the reds?

Elisabeth Swan:
Yeah, once again, it's a shame chart. So it's no bueno. You don't want the red. But this one, it's funny, because it has a pseudo visual management, just adding those colors. It's like it forgives the chart for being a wall of data.

Mark Graban:
Say more about that, forgiving the chart.

Elisabeth Swan:
When you think about visual management, we often are asking people to bring this into a visual, to give me some visual cues to what I'm saying. So with the red and green, they've said, here are some visual cues. But it's still a wall of data. It's still something you are going to struggle to figure out: what am I looking at? What does it mean?

Mark Graban:
There's a lot of management traps people fall into when they're using this bowling chart format. They might overreact to the difference between green and red. This bottom number here, patient experience scores: it goes from red to green and then red. Would we throw a pizza party because it was green, and now get upset at people, and maybe the shame and blame and punishment comes out because it's red? But look at the difference between those numbers, 77.6 and 77.2. That's not a statistically meaningful difference, but it just happens to be fluctuating around the red-green threshold. And I've seen organizations that will either react to every red, or they might say, if you have two red data points in a row, you have to start an A3, you have to do root cause problem solving. There's probably no root cause for such small fluctuations, or what you might call common cause variation.

Elisabeth Swan:
Or it equates things. Like, you know, in the patient experience, there's a 78% in red, there's an 80% in green, right? So, as you say, there's, like, a couple of points there. But all those reds are equated. You could go as low as 40%, still just red, like the 78, which was, you know, two marks off 80. So, yeah, not helpful.

Mark Graban:
Some reds are worse than others. Some reds might be more meaningful. Then there's a question of, like, I can only imagine how many hours might have been spent arguing over whether the goal should have been 77.5 or 78.3. It's just such a precise goal. If everything was green at a 77.5% patient experience hospital rating, we could still improve. And I think that's a different trap. I've heard people say, literally, oh, well, the metrics are green, so there's nothing to look at. There's nothing to look into.

Elisabeth Swan:
Yeah, we're hitting our targets and you're saying, are those good targets and can we do better?

Mark Graban:
Should we raise the target? But again, if there's that blame and fear and punishment culture, people don't want the red. They want to set easily achievable goals so they can be green, when our focus should be on improvement, I think, working toward perfection. If we're satisfied with hitting the target, that's not always the best thing for the business or for the health system.

Elisabeth Swan:
No.

Mark Graban:
All right, so now I'm going to ask everyone to be much more active through the comments. And if you've got questions or comments, I am seeing those here in the streamyard window. I'm going to put up four different metrics. Now we're going to kind of simulate, let's say we're in a management review meeting where we're being asked to make snap decisions, quick decisions. So I'm going to put up these four lines of data. There's some red and green. There's something labeled as trend. We can see the target, site one, site two, site three, site four. I'm going to intentionally not give you a lot of time to mull this over, but here's the question. Which of the four sites deserves the most attention? Right now we're looking at the December numbers. It's January of the year. Which of the four deserves the most attention and why? Please put that in as a comment. And while we're waiting for that, Tamara made a great comment here. We should also understand what we did to be green so we can learn. Right. So instead of, you know, the reds. I totally agree. The reds aren't the only opportunity to learn and improve if we're actually making progress in a good direction. We want to understand, is that because of something we're doing, or is it just happening? Is it because of outside factors? Trying to understand that cause and effect is really critical.

Elisabeth Swan:
Have you ever heard the term positive deviance?

Mark Graban:
Yeah, it's kind of, it's a weird jargony term.

Elisabeth Swan:
It's a weird jargony term. But I think "comparative problem solving" might be a less jargony one. But, like, you're saying, what's different about site three, right? If I look at the processes, I'm just building on what Tamara pointed out, and what you're saying: what's different about them from the other sites? Is there something different in their process?

Mark Graban:
Right. Right. Before we even get to that of, like, trying to understand why. I mean, I'm curious, again, what's drawing the most attention and why? So Denise said site two. I mean, that one's all red. Dan says it depends if there's clear rationale for sites one and two to have a higher target, you know? Right. So site one and site two have a target of 90. Sites three and four have a target of 70. Tamara says site four, because of the regular variance. Michael Sharkey says site three has gone down from eighties and nineties in the first quarter to seventies, and their target is low.

Elisabeth Swan:
Oh, right.

Mark Graban:
I mean, I've done this in workshops, and I've had people make a case for site one or two or three or four. There's rarely consensus. And think, well, then what happens as a management team? If we're trying to decide where attention is needed, to understand a place where performance has dropped, do we need to go and understand why? Do we need to start some root cause problem solving? Do we need to go do a site visit? Do we need to understand a site that's performing really well, like a positive outlier, a positive deviant, because they're deviating from standard practice? But, you know, I think this is where bowling charts, to me, are not as helpful as the alternative that I'm going to share next, called the process behavior chart. And so another comment: site two, there's a consistent downward trend month on month. And site two, because there are some issues with some months; issues are reoccurring, maybe. So that's a potential trap, though, right? And I think looking at this list of numbers makes it really hard, if not impossible, to try to separate what Dr. Deming and Don Wheeler, a statistician who's still writing and working today, would point out: the difference between common cause variation and special cause variation. Or you might call that the difference between signal and noise. Somebody made a comment: I can't really make sense of this without a run chart, and it's not a dyslexia-friendly display. It might not be a color-blindness-friendly display, either, if we can't tell the difference between red and green. It's funny that that's the color coding that most often gets used, and I think that is a very common form of color blindness. But it's funny, this is called a bowling chart. I mean, it's not really a chart. Someone made the comment here about a run chart. I think a run chart is a more helpful visual, and a process behavior chart builds upon a run chart. So let's take a look at that. 
All right, so first I'm going to put up here in the lower left a process behavior chart. And there's a deeper dive that we could take some other day about how to create the charts and how to calculate what we call the lower and upper limits. That's in my book, Measures of Success. It starts off with a run chart, so that's in orange. And now, to me, it's just so much clearer that January was really, really low, and there's maybe generally this upward trend throughout the year. To me, that's just so much easier to see in a run chart, aka line chart, compared to a list of numbers. Now, maybe that January, that four, being a single digit, jumps out more than a 27 would have. But to me, there's just so much more advantage to visualizing it this way. Now, the green line here is the average for the year; that's calculated. This red line here and the other red line, we call these the lower and upper limits of the chart. And we'll see some other examples where the context of that might be helpful. But I think what's interesting about this process behavior chart for site one, well, okay, before I do the color overlay, I'm actually going to pull up the chart for site two. So site one and site two jump out to people sometimes because they're all red. But I think there's a clear difference, where site one is showing some significant improvement, where site two is maybe just kind of fluctuating around that average. And we see, for site two, because all of those data points are within those calculated lower and upper limits, we would describe all of those data points as being common cause variation. There's no root cause for any of those data points or what's different. Whereas site one, there's certainly something to be understood about what drove that improvement. 
Site two, we might not be happy with the performance because it's always in the red, but, you know, we need to improve the system instead of fixating on the most recent data point. There we go. So now if we bring up that red-green overlay, that sort of illustrates, I think, a little bit differently what's seen in the bowling chart. So we have another comment here: there's always room for improvement with all the sites. That's true, but for priority's sake, okay, they gave us a ranking here: site two, site four, site one, and site three. But to me, again, I think there's a different thought process around improving site one. We want to understand what happened over the course of the year, and if performance is settling in around this level, how do we make sure we're sustaining that? Then, if it stabilizes, we can step back and ask, how do we improve the system in a way that boosts performance upward? Site two, I think we would have that conversation around saying, well, performance is relatively stable or predictable, but we don't like the level of performance. So I think I've tried to make this case in the book and different things I've taught: understanding when we should be reactive and when we should be a little bit more systematic and look at the bigger picture. So I think, to me, there's hopefully a clear difference between site one and site two. We see a list of numbers; it's all red. The trend arrow could be misleading. A lot of times this quote-unquote trend, all it's doing is comparing the last two data points. For site one, I would say, sure, over the course of twelve months, there's a trend. Site two, there's not really a trend; there's just fluctuation. So we have, for site one, a signal: both these data points at the end that are really close to the upper limit, and this data point at the beginning that's below the lower limit. Clearly this is not just fluctuating around an average, where site two is doing that. 
So we would describe all of those data points as noise. And again, we would improve in a different manner, a different style of improvement: less reactive, more systematic. Now, if we look at sites three and four, here's the process behavior chart for site three, which was always green. But Elisabeth, what's your reaction to this chart?
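The limit calculation Mark refers to (covered in depth in Measures of Success, following Don Wheeler's XmR method) can be sketched in a few lines. The data below is made up for illustration, with a single-digit January like the site-one example; 2.66 is the standard scaling factor for an individuals (X) chart:

```python
# Sketch of Wheeler-style XmR (process behavior chart) limits.
# Data is illustrative, not taken from the video.
values = [4, 27, 33, 30, 41, 38, 45, 40, 52, 48, 55, 50]

mean = sum(values) / len(values)

# Moving ranges: absolute differences between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for an individuals (X) chart.
upper_limit = mean + 2.66 * mr_bar
lower_limit = mean - 2.66 * mr_bar

print(f"average = {mean:.1f}")
print(f"limits  = [{lower_limit:.1f}, {upper_limit:.1f}]")
```

With these made-up numbers, the January value of 4 falls below the calculated lower limit, which is exactly the kind of point a process behavior chart flags as a signal rather than routine fluctuation.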

Elisabeth Swan:
Yeah, so there's special cause, right. You've got the spike in the beginning and then you've got steadily performing below average based on these limits. But yeah, there's trouble. There's trouble here.

Mark Graban:
Yeah. And this could be a spike in these beginning data points, the first three data points. Or, if we actually had the previous year's data, maybe it was fluctuating pretty consistently around an average, and then something happened right about here, March, April, that then caused the performance to shift. So this is an example where you could actually calculate, probably, the most helpful average by calculating the average for those nine data points, because it now seems to be fluctuating around a new average. So I think we want to be careful of avoiding the trap of, well, site three is all green, and people might say nothing to see here, but they've got a relatively low target. There's clearly been something that we would want to go investigate. Vivian posts a question here: what's better, improving or staying within the average? I'd say, well, improving, for sure. And we can use process behavior charts to see the difference between continued fluctuation and a statistically meaningful improvement. And I think, you know, it's very visual. And there are a couple of rules that we could use, including looking for any data point that's above the upper limit or below the lower limit, or seeing eight or more consecutive data points that are all on the same side of the average. In this case, performance has gotten worse. If we sort of flipped this chart, we would say, well, there's evidence of some improvement that has been sustained, and we would make sure that we understand why. Great comment here, and I think this is some important nuance to Vivian's question: it depends on what we're trying to achieve. Sometimes it's stability, and sometimes it's increasing or decreasing an indicator. So that's a good point. And then Sandro says it depends on the intrinsic randomness of the measurement. 
We could call that the level of common cause variation. The boundaries, which I would call the lower and upper limits, help avoid kind of short-term overreaction. I'm kind of paraphrasing; I think Sandro and I are on the same page, just with slightly different words to describe that same thought process. So the one that's all green is actually hiding an important piece of information we need to understand. And then here's the one that was going between the red and green, another chart that's just fluctuating around an average. And this happens a lot when an organization sets their target to be something suspiciously close to last year's average. This is just fluctuating, and it will continue to do so. And I think there's a lesson here about not overreacting to, let's say, this November data point that was red. The same system that's generating this red data point also generated the green data points. I don't think it would be a good use of time to try to do a root cause analysis about why November's number was 67, because the process behavior chart shows us it's just part of the noise.
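The two detection rules Mark describes (a point outside the limits, and eight or more consecutive points on one side of the average) are mechanical enough to express in code. A minimal sketch with made-up data; it recomputes Wheeler-style XmR limits (mean ± 2.66 × average moving range) and then applies both rules:

```python
def find_signals(values):
    """Flag signals per two common process-behavior-chart rules:
    Rule 1: a point outside the lower/upper limits.
    Rule 2: eight or more consecutive points on one side of the average."""
    mean = sum(values) / len(values)
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mrs) / len(mrs)
    lower, upper = mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

    signals = []
    # Rule 1: points beyond the natural process limits.
    for i, v in enumerate(values):
        if v < lower or v > upper:
            signals.append((i, "outside limits"))

    # Rule 2: scan for runs of 8+ on the same side of the average.
    run_side, run_len = 0, 0
    for i, v in enumerate(values):
        side = (v > mean) - (v < mean)   # +1, -1, or 0 (on the average)
        if side != 0 and side == run_side:
            run_len += 1
        else:
            run_side, run_len = side, 1 if side != 0 else 0
        if run_len == 8:
            signals.append((i, "8 in a row on one side"))
    return signals

# Stable noise: everything within limits, no long runs -> no signals.
noisy = [67, 72, 70, 69, 74, 68, 71, 73, 66, 70, 72, 69]
print(find_signals(noisy))   # expect []

# A sustained downward shift like "site three": an early spike plus
# nine later points below the average, enough to trip the run rule.
shifted = [88, 90, 86, 74, 72, 71, 73, 70, 72, 71, 73, 72]
print(find_signals(shifted))
```

The point of the sketch is the contrast: the "red in November" kind of chart produces an empty signal list, while the site-three kind of pattern is flagged even though every month might have been colored green.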

Elisabeth Swan:
Yeah, and if you want better performance, then you're gonna have to look at the whole process.

Mark Graban:
Be less reactive, more systematic. Comparing the evaluation of those two charts: site three, there's a clear signal something happened. Can we figure out what happened nine months ago? Maybe we can. Maybe we don't know. Site four shows noise. Again, for all of these, even site three, that was always in the green, there is a lesson in this case to be learned from the chart. There is certainly room for improvement, as there is for all four of these sites. So back, I don't know, to answer my own question: if I was looking at the process behavior charts, which do I think requires the most attention? I would say, well, maybe site three, because there's been this downward shift in performance that seems to be pretty well sustained. Let's make sure we understand what happened. Is there something that happened that we can counteract, or is it too late? Was the signal, the shift in performance, caused by something? Market factors? Did we get a new competitor, and now this metric has dropped, and now we've got to figure out other ways to improve our system other than just undoing something that happened back in this March-April timeframe? And then I think, again, back to site one, if I can get it to go back here: we would want to understand how and why things got better over the course of the year. It's a different thought process. Instead of browbeating somebody for being red, we could celebrate the improvement and make sure we understand why the improvement was there, why it happened.

Elisabeth Swan:
Yeah, give them a pizza party, right?

Mark Graban:
Yeah. Red, green is not the only criteria that matters. And I think sometimes bowling charts, again, create a lot of those traps.

Elisabeth Swan:
Yeah. They drive a different emphasis, not helpful.

Mark Graban:
And it can be unhelpful. It can be counterproductive. We've got a little time here, and a deeper dive resource, of course: my book Measures of Success: React Less, Lead Better, Improve More. It's available now, not just through Amazon in Kindle, paperback, and hardcover formats; it's also available now through Apple Books as an ebook. So we'll see if there are any other comments. Elisabeth, let me turn it to you. Do you have any other reactions or tips or thoughts for the audience around this?

Elisabeth Swan:
Yeah, I mean, one thing that strikes me, because, like I said, I get bowling charts thrown at me all the time, and I'm dealing with people that are new to problem solving. You probably know the expression "meet people where they are." And so I want to get them closer, right? So I try to start out with a baby step, like a run chart, a line chart, something in Excel. If you have one of those charts you showed, you know all the dates and you know all the numbers on those dates; maybe it's weeks, maybe it's months. Hey, it's so easy to grab that and throw it into a line chart, and you just have a visual that tells you a story. And I feel like process behavior charts tell you a story. So I start out with a baby step, which is, hey, look at this line chart. Is that telling you a story? Yeah, that's how things were. Here's where we started. Then I made some changes. I'll say, when did you make your changes? We should see a shift, right? We should see your chart go up or down based on what you did. So, you know, that's part of your story. And I think that resonates for folks. And then I sort of pull them a little bit closer to, hey, the next step would be to create a process behavior chart.

Mark Graban:
Yeah, yeah. And we can do a baby-steps approach. This is a made-up number, but something like 85% of the benefit of a process behavior chart comes just from visualizing the data as a line chart, aka run chart. People then are maybe sometimes guessing about the difference between signal and noise. But I think, as humans, it's just easier to absorb quickly and take in, I think, better takeaways from a chart than from a list of numbers, which is not built that way.

Elisabeth Swan:
Yeah, yeah. And people also get very tied to averages, which can be, again, massively misleading. So just being able to see, like you said, just visualize over time, how does this play out? It shows you where, hey, if you just averaged all this together, you took away all of that variation. And I think, like you're pointing out, this just gives you this lovely image of the variation. Helpful ways to understand the process.

Mark Graban:
We got a comment and question here from Michael. First off, I was going to mention: if you want to learn more about the book, you can go to the Measures of Success site and download a free preview. I put that link in the comments. So Michael commented and then asked: "This is really great, but how do you convince people who have lived in organizations that have done red/green for their entire careers that this is a better method?" That is a great question. And I'll add a wrinkle: it's not even just the "entire careers" part. I've gone into organizations that had only recently, in the last two years, adopted bowling charts with red/green color coding and rules of thumb that I think are not helpful, like "if there are two reds in a row, do a root cause analysis." They'll say, oh, but my lean consultants taught me how to do that, and we just rolled this out. They don't want to, or can't, admit that there could be a better way. People get really anchored on "the way we've always been doing it," even if "always" is relatively new. But back to the question, Michael. I think doing what Elisabeth described: let's convert the bowling chart, or at least a couple of lines of it, to a run chart. Do we see something differently? Sometimes people can convince themselves, and maybe then we can take the next step: let's make it a process behavior chart. What does that teach us? You can try to bring people along like that. But there's a great story I always fall back on. A woman who took my workshop around 2017 or 2018 went back to her health system and tried talking about process behavior charts, and no one was interested. So you know what she did? She just started making some, putting them up on metrics boards or showing them to leaders, letting them see that visual comparison. Sort of like I tried to do here today.

Elisabeth Swan:
Yeah.

Mark Graban:
Which helps us see more, sometimes. You can just try to demonstrate it. And I appreciate her lesson: take initiative, make the charts. You're not going to get in trouble, even if someone says, oh, no, I'm fine with the grid of numbers. You're probably not going to get in trouble for trying to show the difference.

Elisabeth Swan:
Another great tactic I've seen in a couple of different venues, and I'll give you a story for one of them. I did a lot of continuous improvement in school systems, and two of the metrics they wanted to improve were test scores and attendance. Here's the story. I was doing this with a colleague, and this was, I'm going to go with, a third-grade class. Two young boys in the class missed the bus. And instead of going home to get parents, or trying to figure out how to get to school, or giving up, they ran all the way to school. In Texas. So, not entirely safe. They were pulled into the principal's office: why? What got into your brains to run all the way to school and not go find an adult? And they said, "We don't want to be the reason the chart goes down." Apparently they charted attendance every day; one of the kids would go up and dot the graph, doing it live. And they also had a separate chart for when they had a quiz, and that day there was going to be a quiz, so both attendance and the quiz were going to be impacted by their absence. They were determined, driven, by seeing those visuals. So I think, and I've seen this in the hospital groups I'm working with, daily huddle boards with a chart work the same way. You get to go and put the next dot on the chart.

Mark Graban:
Yeah. There could be fear of punishment, or there could be fear of losing a reward. There are times when organizations will offer people prizes for perfect attendance: if you have perfect attendance, you'll be entered to win, even, say, a car. Even before the pandemic, just think about flu season. If someone's sick, they should stay home, and hopefully their employer is giving them sick leave. Otherwise, they do the math of: I lose pay if I stay home, or I get paid to come in and get other people sick.

Elisabeth Swan:
Yeah, I just realized part of the lesson there was that they feared being the reason the chart went down. But the other point, and this certainly works well on the huddle boards in hospitals, for the different tiered performance boards, is that interaction: being part of creating that line chart together, seeing the trend you're creating together. I was thinking that might help convince people. If you have a role in creating that line chart, and it's live, that's something visceral that people engage with.

Mark Graban:
Such a good story. Good point. A comment, or question, here from Carlos: "Very good way to see the indicators to identify opportunities for improvement in the run charts. I saw the control limits of the process." And I'm going to interject and say again: those are calculated based on the data. The control limits are driven by the design and the performance of the system. We do not choose the lower and upper control limits the way we can choose a target, and we should be careful with targets sometimes. But here's the question: "Should we also add the specification limits or objectives to have a vision of the internal/external 'customer'?" And I think you're right to put "customer" in quotes, because sometimes a goal is not the voice of the customer; it's the voice of management, whoever said we should set a certain goal. Doing this red/green overlay is, to me, a way of showing a target, or a specification, if you will. With different types of control charts that are used for, let's say, manufacturing process control, the size of a hole being drilled in a piece of metal, you would probably have two-sided specification limits: an upper limit and a lower limit of what's in spec. But as we've learned through modern quality practices, the goal is not just to have every part in spec; we want the process to be in control and to be capable, to the point where the calculated control limits are actually far narrower than the specification limits. Does that check out? Did I say that right, Elisabeth? I haven't thought about the manufacturing specifications.
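The point Mark interjects here, that the limits are calculated from the data rather than chosen, follows the standard XmR (process behavior chart) formula popularized by Don Wheeler: the average of the values, plus or minus 2.66 times the average moving range. A minimal sketch in Python (the function name and the sample data are illustrative, not from the discussion):

```python
def xmr_limits(values):
    """Compute XmR (process behavior chart) natural process limits.

    The limits come from the data itself, not from a target:
    center line = mean of the values, and the upper/lower limits
    are the mean +/- 2.66 times the average moving range.
    """
    mean = sum(values) / len(values)
    # Moving range: absolute difference between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return {
        "center": mean,
        "lower": mean - 2.66 * avg_mr,
        "upper": mean + 2.66 * avg_mr,
    }


# Example: twelve months of a hypothetical metric (made-up numbers)
data = [92, 95, 91, 94, 93, 96, 90, 95, 93, 92, 94, 91]
limits = xmr_limits(data)
```

Points outside these calculated limits (or runs matching Wheeler's other signal rules) indicate a signal worth investigating; everything inside them is routine noise, which is exactly the distinction a red/green bowling chart cannot make.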

Elisabeth Swan:
Well, yeah, and I think the reality is most of us deal with a one-sided spec, right? Don't go any higher than this, or don't go any lower than that. And you're reminding me: hey, there are two-by-fours, no less, no more. Although they aren't two inches by four inches anymore, are they? But what I really like, and am taking away from this experience today, is using the vestige of the bowling chart, the red/green, as an overlay, so as not to add yet another line to a process behavior chart. I really like that. It's nice and clear, and it's telling a good story, a clear story. That clarifies it for me.

Mark Graban:
And hopefully an accurate story, because Don Wheeler says a process behavior chart illustrates the voice of the process. You'd say the voice of the system. Because there are things people do, looking at this list of numbers, drawing a linear trend line; there are all sorts of things you can do to massage or torture the data until it tells the story you want to tell. And if that story is inaccurate, it doesn't really do us any favors. It might keep you out of trouble in the short term, but it's really going to stifle real improvement.

Elisabeth Swan:
Yeah. It buries the trouble and stifles improvement.

Mark Graban:
Okay, well, I think we'll go ahead and wrap that up. I'm done showing weird or scary AI corporate bowling photos. But hopefully the more important point, the difference between a grid of numbers versus a line chart, or better yet, a process behavior chart, has opened some eyes or reinforced some things that people have learned before and will hopefully get to put into practice. So again, I would love to help you with that. You can send me a message through LinkedIn or via email at markgraban.com if you want to learn more about this methodology, the workshop, or other ways that I could help. Before we go, Elisabeth, remind everyone where they can find you online.

Elisabeth Swan:
You can find me at the Just-in-Time Cafe, JustInTimeCafe.com, or at ElisabethSwan on LinkedIn. And my name is with an "s," right? If you search for "Elizabeth Swan" with a "z," you'll get a million pictures of Keira Knightley from Pirates of the Caribbean. So don't do that.

Mark Graban:
Good tip. All right, Elisabeth, thank you for co-hosting and being part of this here today. Really appreciate it. Thanks. Thanks, everyone, for attending.

Elisabeth Swan:
Total pleasure.


What do you think? Please scroll down (or click) to post a comment. Or please share the post with your thoughts on LinkedIn – and follow me or connect with me there.

Did you like this post? Make sure you don't miss a post or podcast — Subscribe to get notified about posts via email daily or weekly.


Check out my latest book, The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation:

Mark Graban
Mark Graban is an internationally-recognized consultant, author, and professional speaker, and podcaster with experience in healthcare, manufacturing, and startups. Mark's new book is The Mistakes That Make Us: Cultivating a Culture of Learning and Innovation. He is also the author of Measures of Success: React Less, Lead Better, Improve More, the Shingo Award-winning books Lean Hospitals and Healthcare Kaizen, and the anthology Practicing Lean. Mark is also a Senior Advisor to the technology company KaiNexus.
