
What Did Deming Really Say?

“I really didn’t say everything I said.” — Yogi Berra

My March 30, 2011 article ended with wisdom from Yogi Berra as a warning to the quality profession. Some prickly reactions to it got me thinking about the last 30 years or so of quality improvement.

The 1980 NBC television show, “If Japan Can... Why Can’t We?” introduced the teachings of W. Edwards Deming to U.S. viewers and caused a quantum leap in awareness of the potential for quality improvement in industry. During the late 1980s, the movement also caught fire in health care. Those of you familiar with Deming’s funnel experiment (which shows that a process in control delivers the best results if left alone) will smile to realize that his rule No. 4—basing each next iteration on the previous one, also known as a “random walk”—has been in operation for the last 30 years.

Jeff Liker, professor of industrial and operations engineering at the University of Michigan, beautifully describes the random walks that have taken place within the time spans of Six Sigma and lean. In a private correspondence with leadership expert Jim Clemmer, Liker writes:

“Originally Six Sigma was derived from Total Quality Management (TQM) by Motorola to achieve six sigma levels of quality, and then through Allied Signal and GE it morphed to projects by Black Belts based on statistics to become a cost-reduction program—every project needs a clear ROI. In other words, we denigrated the program from a leadership philosophy to a bunch of one-off projects to cut costs. It was a complete bastardization of the original, and it rarely led to lasting, sustainable change because the leadership and culture were missing.

“A similar thing happened to lean when it got reduced to a toolkit (e.g., value-stream mapping, KPI boards, cells, kanban).

“Lean and Six Sigma in no way reflect the original thinking of excellent Japanese companies or their teachers like Deming.”

Clemmer also cites multiple studies from 1996–2007 concluding that about 18 to 24 months after these various quality systems are launched, 50–70 percent of them fail. Liker concurs and feels that the four key failure factors, in this order, are:

  • Leadership lacking deep understanding and commitment
  • Focus on tools and techniques without understanding the underlying cultural transformation required
  • Superficial program instead of deep development of processes that surface problems solved by thinking people
  • Isolated process improvements instead of creating integrated systems for exceptional customer value

Virtually everyone agrees that the No. 1 barrier to improvement is still top management’s inability to be visibly committed to quality. Is this the “elephant in the living room” or, as Clemmer calls it, “the moose on the table”? The longer I’m in improvement, the more I realize the wisdom of Deming’s statement, “If I could reduce my message to management to just a few words, I’d say it all has to do with reducing variation.” Why reduce variation? Because it affords better prediction. He said it so often: “Management is prediction!”

Deming also says in point No. 2 of his famous 14 Points: “Adopt the new philosophy.”

Unfortunately, Deming’s philosophy seems to have morphed into a training mill turning out “belts” by the thousands with statistical training that makes my palms sweat. I’ve said it before: People don’t need statistics; they need to know how to solve their problems. All that’s needed is a few simple tools and a working knowledge of variation to be able to distinguish between common and special causes. Only 1–2 percent of people need advanced statistical knowledge. Deming would roll over in his grave if he could see the statistical subculture of “hacks” (his term) that have been turned out in his name.
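As one illustration of what “a few simple tools” can look like, here is a minimal sketch of an individuals (XmR) chart in Python. The weekly values and the delay scenario are hypothetical, and 2.66 is the standard XmR constant for converting the average moving range into limits.

```python
# Minimal individuals (XmR) chart sketch -- the weekly delay values are hypothetical.
weekly_delays = [31, 29, 32, 30, 28, 33, 31, 29, 32, 30, 52, 29]  # e.g., minutes

mean_x = sum(weekly_delays) / len(weekly_delays)

# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(weekly_delays, weekly_delays[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: mean +/- 2.66 * average moving range
upper = mean_x + 2.66 * avg_mr
lower = mean_x - 2.66 * avg_mr

print(f"Average: {mean_x:.1f}   Limits: {lower:.1f} to {upper:.1f}")
for week, x in enumerate(weekly_delays, start=1):
    if x > upper or x < lower:
        print(f"Week {week:2d}: {x}  <-- special cause: investigate this point")
    else:
        print(f"Week {week:2d}: {x}      common cause: work on the process itself")
```

A point outside the limits signals a special cause worth investigating; everything else is common-cause variation, which only a change to the process itself will reduce.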

In Deming’s words

I think the best book on design of experiments (DOE) is Quality Improvement Through Planned Experimentation, by Ronald Moen, Thomas Nolan, and Lloyd Provost (McGraw-Hill Professional, 1999). It is the only book I’ve seen that uses a process-oriented approach, which is so sorely needed in the real world.

The foreword was written by none other than W. Edwards Deming, and in it he explains the approach to statistics needed:

“Prediction is the problem, whether we are talking about applied science, research and development, engineering, or management in industry, education, or government,” he says. “The question is, ‘What do the data tell us? How do they help us to predict?’

“Unfortunately, the statistical methods in textbooks and in the classroom do not tell the student that the problem in data use is prediction. What the student learns is how to calculate a variety of tests (t-test, F-test, chi-square, goodness of fit, etc.) in order to announce that the difference between the two methods or treatments is either significant or not significant. Unfortunately, such calculations are a mere formality. Significance or the lack of it provides no degree of belief—high, moderate, or low—about prediction of performance in the future, which is the only reason to carry out the comparison, test, or experiment in the first place.

“… [I]nterchange of any two numbers in the calculation of the mean of a set of numbers, their variance or their fourth moment does not change the mean, variance, or fourth moment.

“In contrast, interchange of two points in a plot of points may make a big difference in the message that the data are trying to convey for prediction.

“The plot of points conserves the information derived from the comparison or experiment.”
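To make Deming’s contrast concrete, here is a minimal Python sketch with hypothetical numbers, imagining a process that shifted upward halfway through the series: the mean and variance cannot tell the two orderings apart, but a time-ordered plot of the same values can.

```python
# The same ten numbers in two different orders: summary statistics cannot
# tell them apart, but a time-ordered plot can.
# (Hypothetical values -- imagine a process that shifted upward at point 6.)
import statistics

in_time_order = [20, 21, 19, 20, 22, 30, 31, 29, 32, 30]
reshuffled    = [30, 20, 31, 21, 29, 19, 32, 20, 30, 22]  # same numbers, new order

print(statistics.mean(in_time_order), statistics.mean(reshuffled))            # identical
print(statistics.pvariance(in_time_order), statistics.pvariance(reshuffled))  # identical

# A crude text "plot" of the time-ordered data makes the shift obvious.
for i, x in enumerate(in_time_order, start=1):
    print(f"{i:2d} | {'*' * x}")
```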

And, in addition to the process whose output is being measured, determining which sample to measure is its own separate process. In this context, the concepts of “randomness” and “sample size for significance” go out the window.

Deming coined the term “analytic” to describe studies to improve a product or process in the future:

  • Prediction is the aim.
  • There is a need to conduct multiple plan-do-study-act (PDSA) cycles over a wide range of conditions.
  • Commonly used statistical methods such as analysis of variance are limited in their ability to address the important sources of uncertainty in analytic studies.
  • Graphical methods of analysis are primary.

Confirmation of the results of exploratory analysis comes primarily from prediction rather than from using formal statistical methods such as confidence intervals. Satisfactory prediction of the results of future studies conducted over a wide range of conditions is the means to increase the degree of belief that the results provide a basis for action.

When planning to test a change, people are making a prediction that the change will be beneficial in the future. What people don’t realize is that a limited set of conditions will be present during the test; the conditions in the past, during the test, and in the future could all be different. Circumstances unforeseen or not present at the time of the test will arise in the future. Will the change still result in an improvement under these new, future conditions?

Knowledge about the change is based on the specific subject matter on which the change itself is based, as well as knowledge about the environment in which the change will be implemented. Extrapolating the test results to the future is the primary source of uncertainty when a change is tested. The question then becomes, “How does one randomly sample the future?” Easy: One can’t.

The connection between knowledge of the subject matter from which the change is developed and analysis of the data from a test of the change is essential to effective improvement. This cannot happen in a statistical vacuum.

Integrating statistics’ role into leadership philosophy

The fact that most leadership is clueless about the power of statistical thinking in everyday management certainly doesn’t help quality professionals’ efforts. That said, quality practitioners need to start by improving the process of teaching statistics, especially before they attempt to bring current seminars into the “C-suite.” Much of what is currently taught shouldn’t be applied to daily management—or to much of anything else (except maybe manufacturing product quality). The wrong things continue to be taught: p-values, confidence intervals, the normal distribution, sample size, and regression, to name a few.

I once gave a talk following an ASQ Fellow who tried to make a case for bringing a quincunx into the boardroom—and who passed out three pages of statistical definitions. I could feel the tension in the room rising. I then began my talk by saying, “If I brought a quincunx into a boardroom, they’d throw me out on my ear,” and the room erupted in laughter.

Where to start? Here is a quote from Dr. Donald Berwick, a pioneer in health care improvement:

“Plotting measurements over time turns out, in my view, to be one of the most powerful devices we have for systemic learning…. Several important things happen when you plot data over time. First, you have to ask what data to plot. In the exploration of the answer, you begin to clarify aims, and also to see the system from a wider viewpoint. Where are the data? What do they mean? To whom? Who should see them? Why? These are questions that integrate and clarify aims and systems all at once…. If you follow only one piece of advice from this lecture when you get home, pick a measurement you care about and begin to plot it regularly over time, you won’t be sorry.”
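For anyone who wants to act on that advice immediately, here is a minimal run-chart sketch in Python; the measurement, the values, and the matplotlib dependency are all assumptions for illustration.

```python
# A minimal run chart, per Berwick's advice: pick a measurement you care
# about and plot it regularly over time. The values are hypothetical and
# matplotlib is assumed to be installed.
import statistics
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
measurement = [14, 16, 15, 13, 17, 15, 14, 18, 16, 15, 14, 17]  # whatever you care about

plt.plot(weeks, measurement, marker="o")
plt.axhline(statistics.median(measurement), linestyle="--", label="median")
plt.xlabel("Week")
plt.ylabel("Measurement")
plt.title("Run chart")
plt.legend()
plt.savefig("run_chart.png")  # or plt.show()
```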

Until the culture at large appreciates the concept of “process” and eradicates blame, true improvement will not take place. To “solve” their problems, everyone in a culture truly committed to improvement must work from the perspectives of:

  • Customer orientation
  • Continuous improvement
  • Elimination of waste
  • Prevention, not detection
  • Reduction of variation
  • Statistical thinking and use of data
  • Adherence to best-known methods
  • Use of best available tools
  • Respect for people and their knowledge
  • Results-based personal feedback

Creating this culture is far, far more important than teaching a bunch of statistical techniques.
