SPC - "Breaking the Spell" of Statistical Quality Control
Shingo once said that it took him decades to "break away from the spell" of SPC. Today, I saw an ad on the back cover of an electrical components magazine that said,
"We think statistical sampling is just another way of saying UNTESTED."
They went on to claim that all products are 100% tested. I like the fact that a company publicly declares SPC inadequate in its quality plans. This is the first step to better quality. But they are still testing 100% of the time. All this does is prevent defects from being passed on to the customer. But what about internally? I wonder what is being done to catch errors before they turn into defects.
Labels: culture
2 Comments:
Hmmm...I don't understand why someone would want to reduce SPC and test everything.
If you test everything, and it always passes, isn't that overprocessing? Or maybe it doesn't always pass, but if that's the case, is testing really solving the root cause of the problem? Perhaps it provides the data to determine the root cause, and that's OK as a tactical action, but once the root cause is addressed with a reasonable countermeasure, you come back to the question of whether it's overprocessing.
My take on SPC is that most people don't understand it. They take a control chart, put the engineering specs in as the UCL and LCL, and just use it as a tool to see if something falls out of spec or not. That limited approach of charting to engineering specs is not SPC, and ignores the real power of the method.
Real SPC has limits calculated from the process itself, in the form of a carefully determined risk tolerance. This is commonly chosen to be a +/- 3 sigma limit, which if I recall corresponds to something like a 99.7% probability that a point outside the limits reflects a real, unique problem (i.e., not normal variability). Now the real test of a process comes when you compare that statistically-determined limit to an engineering spec.
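To make the limit calculation concrete, here is a rough sketch in Python (my own illustration, with made-up measurements; note that textbook SPC usually estimates sigma from subgroup or moving ranges rather than the overall sample standard deviation):

# Illustration only: +/- 3 sigma control limits from process data.
# The measurements below are invented for the example.
measurements = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]

n = len(measurements)
mean = sum(measurements) / n
sigma = (sum((x - mean) ** 2 for x in measurements) / (n - 1)) ** 0.5

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit
print(f"mean={mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")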
If the control limit meets or exceeds the spec, then you have a process that isn't capable and may be shipping junk (again, it depends on how much risk you can absorb, which is reflected in your choice of how much variability you are willing to call normal).
However, if the limits are inside the spec by a sufficient margin (commonly your 3 sigma limit is nominally half your spec, which means a Cp of 2), then SPC gives you a powerful indicator of continuous improvement.
That ratio, Cp (or Cpk if you account for centering), is called "process capability" for a good reason. It not only tells you if you have a quality problem, but also tells you if you're overcontrolling at excessive cost. It might also tell you something about the training people receive if you see a spread in results across operators.
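For what it's worth, the ratios work out roughly like this (illustrative Python, with spec limits and process numbers I made up just to show the arithmetic):

# Illustration only: Cp and Cpk with made-up spec limits and process stats.
usl, lsl = 10.30, 9.70      # engineering spec limits (upper/lower)
mean, sigma = 10.02, 0.05   # estimated process mean and sigma

cp = (usl - lsl) / (6 * sigma)                    # spec width vs. process spread
cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # also penalizes an off-center process
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")              # here Cp = 2.0, Cpk ~ 1.87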
Where the Cp is challenged (<2), a control chart is a powerful indicator of process improvement. Evidence of continuous improvement shows up as the process limits are recalculated and get tighter. Do this over time, and you see a gradual narrowing of the limits and improvement in quality. That's what Shewhart and Deming meant by continuous improvement.
Sure, you can get the same evidence by simply testing everything, but what's the cost? If you take the same resources you are using for testing and apply them to improving the root cause, you get the same benefit, plus you have developed someone who can go do the same thing on the next problem. If you just test more, you just spend more, and you don't develop as much improvement skill...
My 2 cents...or maybe a little more.
OK, if you think I'm long winded, check out the Wiki on control charts:
http://en.wikipedia.org/wiki/Control_chart