Quality improvement people sure love those tools. A particular favorite, of course, is the control chart, of which, I think, seven are usually taught. Two questions I’m always asked are, “Which chart do I use for which situation?” and “When and how often should I recalculate my limits?”
Wrong questions!
Regarding the first (we’ll deal with the second question in part 2), I’ve seen many flowcharts in books to help you determine which chart to use for which situation. I find them far too confusing for the average user. (They even give me sweaty palms.) I don’t even teach chart selection in my work.
You know what the third-quarter review meeting means: a packet will be handed out with bar graphs and, no doubt, trend lines on each of about a zillion “key performance indicators” that show:
• This month vs. last month vs. 12 months ago (maybe year-to-date as well)
• The three months’ performance of the current quarter
• The first three quarters of the year
• This quarter vs. last quarter vs. third quarter a year ago
In 2006 I attended a talk by a world leader in quality that contained a bar-graph summary ranking 21 U.S. counties from best to worst (see figure 1). The counties were ranked from 1 to 21 on each of 10 different indicators, and these ranks were summed to get a total score for each county (possible minimum 21, maximum 210, average 110; a smaller score is better). Data presentations such as this usually result in discussions where terms like “above average,” “below average,” and “who is in what quartile” are bandied about. As W. Edwards Deming would say, “Simple… obvious… and wrong!” Any set of numbers needs a context of variation within which to be interpreted.
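For readers who want to see the mechanics, here is a minimal sketch of the rank-sum scoring described above. The county names and indicator values are my own placeholders, not figures from the talk; only the structure (21 counties, 10 indicators, ranks summed) comes from the example.

```python
import random

# Minimal sketch of the rank-sum scoring described above.
# County names and indicator values are hypothetical placeholders.
counties = [f"County {i + 1}" for i in range(21)]
num_indicators = 10

# Fabricated indicator values just to illustrate the mechanics
# (assume a lower value means better performance on every indicator).
random.seed(1)
data = {c: [random.random() for _ in range(num_indicators)] for c in counties}

# For each indicator, rank the 21 counties 1 (best) to 21 (worst),
# then sum each county's ten ranks into a single score.
scores = {c: 0 for c in counties}
for i in range(num_indicators):
    ordered = sorted(counties, key=lambda c: data[c][i])
    for rank, county in enumerate(ordered, start=1):
        scores[county] += rank

# Possible scores run from 21 (ranked first on every indicator)
# to 210 (ranked last on every indicator); the average is 110.
for county, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{county}: {score}")
```

Notice that this scheme guarantees a “best” and a “worst” county no matter what the data look like; nothing in the score itself tells you whether the differences between counties are anything more than ordinary variation.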
During my recent travels, I have noticed an increasing tendency toward formalizing organizational quality improvement (QI) efforts into a separate silo. Even more disturbing is an increasing (and excruciating) formality. Expressions such as “saving dark-green dollars” are creeping into justifications for such “programs,” usually referred to as Six Sigma, lean, or lean Six Sigma. As always, Jim Clemmer pinpoints this trend perfectly:
“The quality movement [has given] rise to a new breed of techno-manager—the qualicrat. These support professionals see the world strictly through data and analysis, and their quality improvement tools and techniques. While they work hard to quantify the ‘voice of the customer,’ the face of current customers (and especially potential new customers) is often lost. Having researched, consulted, and written extensively on quality improvement, I am a big convert to, and evangelist for, the cause. But some efforts are getting badly out of balance as customers, partners, and team members are reduced to numbers, charts, and graphs.”
Customer satisfaction data that feed various quality indexes abound. The airline industry is watched particularly closely. The April 10 Quality Digest Daily ran an article titled “Study: Airline Performance Improves,” with the subtitle “Better on-time performance, baggage handling, and customer complaints.”
The analysis method? In essence, a bunch of professors pored over some tables of data and concluded that some numbers were bigger than others…and gave profound explanations for the (alleged) differences. If I’m not mistaken, Deming called this “tampering”: they treated all differences (variation) as special cause.