
A Statistician’s Favorite Answer: ‘It Depends,’ Part 2

Stop getting sucked into the swamp of calculation minutiae

When teaching the I-chart, I’m barely done describing the technique (never mind teaching it) when, as if on cue, someone will ask, “When and how often should I recalculate my limits?” I’m at the point where this triggers an internal “fingernails on the blackboard” reaction. So, I smile and once again say, “It depends.” By the way…

… Wrong question!

I made a point in Part 1 of this article that I feel is so important, I’m going to make it again: Do not bog down in calculation minutiae. If you feel the instinct to ask that question, pause and think about how you would answer these questions of mine instead:

1. Could you please show me the data (or describe an actual situation) that are making you ask me this question?

2. Please tell me why this situation is important.

3. Please show me a run chart of these data plotted over time.

4. What ultimate actions would you like to take with these data?

And since writing Part 1, I’ve thought of a fifth question I’d like to add:

5. What “big dot” in the board room are these data and chart going to affect? Or less tactfully,

5a. Who cares whether the limits are correct or not?

When you supply me with the answers to questions 1 and 2, then we can begin a dialogue, during the course of which I would be happy to answer your question about limits.

OK, I’ll answer the question now… sort of

The purpose of the limits is to give a reasonable range of expected performance due to common cause. For the I-chart, as long as the limits are computed correctly (via the moving range between consecutive observations in time order) and three-sigma limits are used, then they are “correct limits.” As Donald J. Wheeler likes to say, “Notice that the definite article is missing.” They are just “correct limits,” not “the correct limits.”
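For the curious, the calculation is simple enough to sketch in a few lines of Python. The data here are made up for illustration; the constant 2.66 is the standard I-chart scaling factor (3 divided by the bias-correction factor d2 = 1.128 for moving ranges of two consecutive points):

```python
def ichart_limits(data):
    """Return (center, lower limit, upper limit) for an I-chart."""
    # Moving ranges: absolute differences between consecutive
    # observations, taken in time order.
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)  # average moving range
    center = sum(data) / len(data)                    # process average
    # "Three sigma" for individuals: 2.66 * average moving range.
    sigma_limit = 2.66 * mr_bar
    return center, center - sigma_limit, center + sigma_limit

values = [42, 45, 39, 47, 44, 41, 48, 43, 40, 46]  # hypothetical data
center, lcl, ucl = ichart_limits(values)
```

Any correctly applied software package is doing essentially this arithmetic; the judgment lies in which data you feed it, not in the formula.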

Ready for a blinding flash of the obvious? The time to recompute the limits for your charts comes when, in your best judgment, they no longer adequately reflect your experience with the process. There are no hard and fast rules. It is mostly a matter of deep thought: analyzing the way the process behaves, the way the data are collected, and the chart’s purpose.

If the process has shifted to a new location, and you don’t think there will be a change in its common-cause variability, then you could use the former measure of variation in conjunction with the new measure of location to obtain temporarily useful limits. Meanwhile, it would probably be a good idea to keep track of the moving range on an MR-chart to note any obvious changes. There is no denying that you will need to ponder the issue of recalculating the limits. With today’s computers, as mentioned below, it’s less of an issue; however, it still requires good judgment.
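That idea can be sketched concretely with hypothetical numbers: keep the sigma derived from the old average moving range (on the assumption that common-cause variation is unchanged), and recenter on the average of the data collected since the shift.

```python
# Temporary limits after a known process shift, assuming common-cause
# variation is unchanged. All numbers here are hypothetical.

old_mr_bar = 4.9                 # average moving range before the shift
sigma_limit = 2.66 * old_mr_bar  # keep the former measure of variation

new_data = [55, 58, 53, 57, 56, 54]  # observations since the shift
new_center = sum(new_data) / len(new_data)

# New location, old variation: temporarily useful limits.
temp_lcl = new_center - sigma_limit
temp_ucl = new_center + sigma_limit
```

Once enough post-shift data accumulate (and the MR-chart shows no change in variation), the limits can be recomputed entirely from the new data.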

Wheeler wrote a column 15 years ago that is every bit as relevant today. So, let’s have him ask you three questions:

1. Do the limits need to be revised for you to take the proper action on the process?

2. Do the limits need to be revised to adequately reflect the voice of the process?

3. Were the current limits computed using the proper formulas?

Still not sure? Look at the chart and ask these additional questions Wheeler added from Perry Regier of Dow Chemical Co.:

1. Do the data display a distinctly different kind of behavior than in the past?

2. Is the reason for this change in behavior known?

3. Is the new process behavior desirable?

4. Is it intended and expected that the new behavior will continue?

If the answer to all four questions is yes, then it is appropriate to revise the limits based on data collected since the change in the process.

If the answer to question 1 is no, then there should be no need for new limits.

If the answer to question 2 is no, then you should look for the special cause instead of tinkering with the limits.

If the answer to question 3 is no, then why aren’t you working to remove the detrimental special cause instead of tinkering with the limits?

If the answer to question 4 is no, then you should again be looking for the special cause instead of tinkering with the limits.

The objective is to discover what the process can do or can be made to do.

Yes, indeed: It depends.

Wait for it…

Frustrated by my lack of a concise answer, and now trying to distract me from pressing for answers to all these questions, people then ask, “Well, even though I can’t think of a situation, how many data points are needed to compute accurate limits?”

I generally answer, “How much data have you got?” (It’s usually not very much.)

In my experience, useful limits may be computed with small amounts of data. Even as few as seven to 10 observations are sufficient to start computing limits, especially if, as frequently happens to me, it’s all you’ve got. What else are you going to do? I dare you to find a more accurate way to assess the situation. I chuckle when I think of how many times executives have told me, “Your way of doing things has too much uncertainty.” I’ve been so tempted to answer, “So exactly what are you going to do instead?”

The limits do begin to solidify when 15 to 20 individual values are used in the computation. To argue semantics, when fewer data are available, the limits can be considered “temporary limits,” subject to revision as additional data become available. When more than 50 data points are used in computing limits, there will be little point in revising the limits further.

However, who does charts by hand anymore? Given today’s computer packages, limits are automatically updated as new data are added, so what’s the problem? You might have to make a decision about what period to aggregate for the appropriate moving range statistic, but it’s a somewhat minor point. Frankly, it’s a question I rarely consider; I generally have far too many questions regarding the process being improved. After those are settled, the calculation process always somehow seems to sort itself out. Rest assured, by focusing on the process, you will get “correct limits.”

So stop getting sucked into the swamp of calculation minutiae. Instead, spend all that energy using your charts to understand and improve your processes. And the first time you say, “It depends” in answer to someone’s question, let me know, and we’ll both smile.


Copyright © 2023 Harmony Consulting, LLC. All rights reserved.