Ken K.
@kenk
Member since June 2, 2001

Forum Replies Created

November 30, 2001 at 11:23 pm #70305
After reading some of these responses I feel like I should have prefaced all my posts with . . .
“The following comments are my opinions and my opinions alone – they likely do not represent the only solution or truth. There is no intention to outdo, confuse, or offend anyone, regardless of education, occupation, experience, nationality, culture, or place of origin.”
I posted a few comments after reading this thread and found myself questioning how everything I wrote would be interpreted or misinterpreted.
At this point responding to questions or comments seems hardly worth the grief of those who are offended. I thought I was helping people, but now I see that I’m somehow offending people. That takes the fun out of it, so this is likely my last post.
November 30, 2001 at 10:37 pm #70304
The training you run really depends on what you want the Green Belts to do.
I tend to think that a VERY fundamental set of tools can take people a long way, but 4 days is pretty short.
Hmmm, let me think. I suppose you’ll need to focus on simpler techniques. First take a look at Roger Hoerl’s article in Journal of Quality Technology – it might give you some ideas:
http://www.asq.org/pub/jqt/
>Emphasize data collection and making decisions based on fact rather than assumptions.
>Describing processes, identifying important inputs & outputs. Simple process mapping.
>Graphical representations of distributions – histograms, box plots, dot plots, scatterplots. Good graphical analyses can go a very long way.
>Basic statistics – normal distributions (may not be time to cover percentiles & z calculations), means, standard deviations, maybe focus on using confidence intervals of means & standard deviations to compare groups rather than relying on tests (t, ANOVA, etc…I like Roger’s ideas here), correlation (not so much the coefficient as the idea – also discuss cause & effect). I’m not sure if there is time to cover tests of proportions too, but they are important for many situations. Make sure you provide good statistical software and use it in the training. Don’t make your people do all this by hand or with straight Excel if possible.
>Gage R&R – if your measurement systems are junk and you don’t know it, you’ll waste a lot of effort (I speak from experience here). Stick with the very basic methods – maybe even skip the reproducibility part and focus more on repeatability.
>Capability indices are useful, but with only 4 days to cover stuff I’d tend to teach people how to relate histograms to process specs rather than focus on indices.
>Factorial designs – they are handy – maybe just cover very simple designs and analysis methods. By this point you’ll be running out of time though. Maybe give the GB’s an overview of how DOE is used, and let your BB’s do the DOE’s themselves.
Are we out of time yet? These are just ideas. Talk to your current BB’s and see what they recommend. Most BB’s aren’t shy to give opinions.

November 30, 2001 at 9:46 pm #70301
Denton & Stan are right on both counts.
(WARNING – long answer ahead)
For many common statistical tests, such as z & t tests and F-tests in ANOVAs, the assumption is that the means are normally distributed. As Denton mentioned, the raw data don’t really have to be normally distributed (I had posted a question related to this some time ago).
The Central Limit Theorem (although it sounds “theoretical” it is important and pretty simple – I usually teach it to people) is the thing that makes these tests work regardless of the distribution of the raw data. It basically says that regardless of the distribution of the raw data, the mean of the data will tend toward a normal distribution as the sample size for the mean gets larger. For symmetric distributions this can happen with sample sizes as small as 5. For wilder, non-symmetric distributions, even a sample size as small as 50 will still result in quite normally distributed means.
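A quick way to see the Central Limit Theorem in action is to simulate it. Here is a sketch in plain Python (the exponential distribution is just an arbitrary heavily skewed example) comparing the skewness of the raw data to the skewness of sample means:

```python
import random
import statistics

def skewness(xs):
    """Simple moment-based skewness estimate (0 for a symmetric distribution)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

random.seed(1)
raw = [random.expovariate(1.0) for _ in range(50_000)]  # heavily right-skewed

# Means of samples of size 50 drawn from that same skewed distribution
means = [statistics.fmean(random.expovariate(1.0) for _ in range(50))
         for _ in range(2_000)]

print(round(skewness(raw), 2))    # close to 2, the exponential's skewness
print(round(skewness(means), 2))  # much closer to 0, i.e. nearly normal
```

The raw data stay just as skewed no matter how much you collect; it is the distribution of the *means* that pulls toward normal as the sample size grows.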
Many textbooks DO say that the raw data have to be normally distributed for t-tests, but to my knowledge that is not true. In my opinion you just need largish sample sizes, and the more non-normal the distribution of the raw data is, the larger sample sizes you’ll need.
Xbar control charts work the same way, so are much less affected by nonnormality. Individual charts, on the other hand, DO require normality of the raw data.
Other tests, such as tests of variances & standard deviations, tend to require a much larger sample size. In the past I’ve found that accurate estimation of the standard deviation can require sample sizes of, say, 100 or higher.
Other methods, such as process capability indices, rely on the distribution of the raw data rather than the means, so the Central Limit Theorem won’t help there. When working with process capability you really do need to assess the normality of the distribution. As Stan said, some characteristics are just not normally distributed – increased sample sizes do nothing to help that. That is where nonparametric or non-normal methods come into play. For example, MINITAB provides a Weibull distribution-based capability tool (the Weibull is pretty flexible and can model many symmetric and non-symmetric distributions). Other software uses Pearson Curves, which are a family of distributions that is also very flexible.
If you have large samples you can also use nonparametric percentiles to calculate capability indices, but that typically requires VERY big samples (certainly 10,000 and more).
Another place that distribution assumptions become important is when doing reliability analyses. Typically you’ll try to find a distribution (such as Weibull or Lognormal) and its parameter estimates that accurately model your failure data. One of the tasks is finding a distribution that models the data well.
Sorry this was so long – it’s late on Friday PM and I’m just having fun.

November 30, 2001 at 9:16 pm #70299
Although I can’t speak directly to quality levels for my business (that is sensitive information), for complicated products or processes I’d guess that Sigma levels that high are rare, but I suppose it’s possible.
Such levels are more possible for less complicated products or processes – I would think somewhere there are nearly perfect processes out there.

November 30, 2001 at 9:01 pm #70298
Well put.
November 30, 2001 at 2:25 pm #70272
No offense taken, but I don’t recall “attacking” anyone or being aggressive. That is just not in my nature.
All I can think of is that you are referring to my message to “MUHANNAD AL NABULSI” regarding some of his odd posts. My point was that they were quiz-type questions that didn’t seem real. Examples:
“Sketch a boxplot of the following statistics: smallest=60, median=100, largest=115, 1st quartile=80, 3rd quartile=110….There are no outliers. Comment on the shape of this distribution.”
“Obtain a newspaper with a classified section and create a stem-and-leaf plot of the asking prices of 20 automobiles, selected at random.”
…and I was frustrated by them. I suppose I should have just kept my thoughts to myself on that one, though if you’ve noticed, after my post, his messages become much better, asking genuine questions or making comments that have received responses.
My other point was that this very web site, and of course search engines, has answers to lots of questions, and that people should take at least a quick look for an answer on their own before posting a question here. For example, before asking how to find the website for MINITAB (an example only), try searching for “minitab” in a search engine (I like Google) before posting the question. That is just my opinion.
In general I don’t see much (any?!) flaming in this forum, which is nice. I do see some definite opinions, but that is part of the fun of this forum. If someone responds with an incorrect answer, then of course others should jump in and provide the correct answer. I’m sure no offense is implied, and hope no offense is taken.
PLEASE don’t hesitate to post messages. That is what this forum is for. It really is an excellent forum. Like you, I truly enjoy reading the messages and replying to those I feel I can contribute to.
We’re here to help, not to instill fear!! “United We Stand”, as they say.
Ken K.

November 29, 2001 at 9:05 pm #70264
I agree that statistics is just part of the BB “package”. That is why I specified “For statistics”.
For other topics we use a combination of sources – internal courses, external courses (such as Minitab Inc.’s, or Ford’s 8D problem solving), and, lately, CD-ROM-based training. We also do some (not much) self-learning activities.
We are lucky enough to have an excellent supply of talent for teaching internal courses.
The Minitab Inc. material is good from both perspectives – very good realistic examples and a good format.
The training material for each class is given to the student in a spiral-bound paper format which flips over to minimize the “footprint”. The booklets are created to be used in landscape position rather than portrait (flipped on its side). The pages tend to have text on one side and an example of output, graphics, or other information on the other side.
The booklets have well-labeled tabs on the bottom (the long side opposite the spiral) that help send the student directly to the topic of interest (great for post-class reference).
The first page of every section tab lists the Objectives for that section. The second page lists the examples and exercises and the purpose for each one (along with the respective page number). Each example or exercise has accompanying introductory information, including definitions, comments, suggestions, and specific step-by-step guides on how to generate specific graphs or analyses for that topic.
The format is particularly good for postclass reference.
During the class the student follows along with the booklet. Usually NONE of the reference material is projected onto a wall. Instead the instructor uses a combination of handwritten notes (we use a flip chart, but I suppose a whiteboard would do) AND a live projected MINITAB desktop display. We project the MINITAB display onto a white board so the instructor can write all over the current display – it is really quite effective and lively.
I should mention that the booklets also come with a diskette containing the data used in the examples. While it is useful to enter the data once or twice during the class, using the disk allows better use of time.
Another thing I should mention is how the classes are laid out. They tend to use the examples to build on capabilities. It’s kind of hard to explain. You will first learn how to do A, then you’ll learn how to do B, and in the example do A & B. Then you’ll learn C and do ABC, and so on. It is really effective. By the end, you become VERY good at doing the essentials – ABCD…
The examples look like they come from real life, but are likely doctored to protect their source. Minitab Inc. has recently begun offering courses in Service Quality (statistics) that tend to focus more on analysis of attribute data. The examples focus on non-engineering/manufacturing issues.

November 29, 2001 at 2:08 pm #70244
For statistics training we use Minitab Inc.’s trainers. They provide excellent classes, VERY skilled and experienced teachers, the training materials are very nice (not PowerPoint), and the courses are 100% hands-on. You can read about their training capabilities at
http://www.minitab.com/training/index.htm
As an example, this link takes you to a page with Jim Colton’s picture. He is one of our favorite instructors. He is a very good teacher, has lots of experience in quality and reliability (important for us), and is just an overall nice, very likable guy.
I couldn’t recommend them more.
By the way, I don’t work for Minitab, I’m just a very satisfied customer.

November 28, 2001 at 2:55 pm #70220
Very good advice Mike!
In addition, to focus on the methodology:
1. Understand your process very well. Stare at it (Mike’s comment). Ask operators about it. Map it. Look for deviations from the “normal” process. Map them.
2. Find the key qualitative & quantitative characteristics of the process – the things you think impact the outgoing quality.
3. Figure out ways to measure those characteristics (the inputs) and the quality of the process output. This is not always easy. Variables measures are MUCH preferred over attribute measures.
4. Understand the measurement system you will use to measure the process characteristics. Use gage R&R to measure variability. Make sure gages are calibrated. Understand how temperature, humidity, and other factors affect the gage accuracy & precision. The rest of your work will be dramatically impacted by the quality of your measurements.
5. Take data on your process. Is the process acceptable? Are there correlations between the characteristics you’ve measured and the defects? Use this data to help narrow down the list of characteristics that might be modified to improve the process.
6. Consider running designed experiments to learn how changes in the characteristics (inputs) affect the resulting quality (the outputs). Don’t put all your effort into one big experiment – plan for several experiments that incorporate things you’ve learned as you run through them. Focus on process optimization. Also do confirmation runs to make sure that the presumed optimum settings are truly optimal.
7. Once you’ve identified ways to improve the process, make sure you put controls in place to maintain the new settings. Keep measuring the inputs and outputs for a time to get a better understanding of the relationships. Make sure you document your efforts as much as possible.

November 28, 2001 at 2:41 pm #70219
It seems that there is not a clear understanding of the root cause for your customer’s defects.
You need to try to form a JOINT team with your customer to understand the customer defect data, determine the root cause of the defect, and then use that information to identify exactly where the focus of your investigation should, well, focus.
If your product is within spec leaving your factory, but the thing your customer is building is not within spec, it would seem they must know where the problem starts.
Another possibility is that the tolerances specified by the customer are not sufficient – that is, your stuff may be in spec, but that might not be good enough for your customer. They may need to pay you, the supplier, more to produce “better” stuff, or simply find another source of “better” stuff (not that you did anything wrong – you sold them what they asked for). If your customer is truly a Six Sigma customer, then they should be able to provide at least some help in improving your stuff – if that is what is needed.
At this point it sounds like they (your customer) need consulting help more than you do – they clearly can’t figure out what is wrong with their process.

November 28, 2001 at 2:26 pm #70217
I’ll take a crack at this too . . .
If you are familiar with means and standard deviations then you’ve got a good start.
First, let’s talk about means. Suppose you wanted to measure the amount of radiation each patient receives from a CAT scan from your model XY321 CAT scan equipment. Certainly there is some variation there, and I could see where the distribution might be normally distributed. For the rest of this discussion I am assuming normality.
So the POPULATION we are focusing on is “all people who receive CAT scans from model XY321 scanning equipment”. If we could gather the measurements from every patient who will ever use this equipment (impossible task) and average them, that value would be the POPULATION MEAN, represented by the Greek letter MU.
Since we can’t really gather every patient, instead we take a SAMPLE of patients – say we measure the exposure for 200 patients. We take those 200 values and calculate the sample’s mean. This mean is our best estimate of the true, but unknowable, population mean, mu.
Now, we can do the same thing with variation. If we had every patient, we could calculate the POPULATION VARIANCE using the sum of the squared deviations from the mean, divided by n (here n is the size of the population). The square root of the population variance is the POPULATION STANDARD DEVIATION, represented by the Greek letter SIGMA.
Since we can’t really gather every patient, instead we take that SAMPLE of patients – the 200 patients – and calculate the sample’s standard deviation (you know this formula – dividing by n-1 for a sample). This standard deviation is our best estimate of the true, but unknowable, population standard deviation, sigma.
Now, for a normal distribution, it turns out that the standard deviation has some pretty cool properties related to the width of the distribution:
1. The inflection points on either side of the peak of a normal distribution – the places where the slope changes from increasing to decreasing on the left side and vice versa on the right side – will always be exactly one standard deviation away from the mean. At least I think that is kind of nifty.
2. Regardless of the exact values of the mean and standard deviation, we can make some general statements about the probability of observing certain values from the distribution:
38.3% of the data will fall within 0.5 standard deviations of the mean (the range defined by (mu – 1/2 sigma) & (mu + 1/2 sigma))
68.3% of the data will fall within 1 standard deviation of the mean
95% of the data will fall within 1.96 standard deviations of the mean
95.4% . . . . . 2 standard deviations
99.7% . . . . . 3 standard deviations . . . etc…
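Those coverage figures fall straight out of the standard normal CDF; as a sketch, Python’s standard library can reproduce them:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mu = 0, sigma = 1

def coverage(k):
    """Probability of falling within +/- k standard deviations of the mean."""
    return nd.cdf(k) - nd.cdf(-k)

for k in (0.5, 1, 1.96, 2, 3):
    print(f"+/- {k} sigma: {coverage(k):.1%}")
# prints 38.3%, 68.3%, 95.0%, 95.4%, 99.7%
```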
In the Six Sigma world of quality, we also talk about a quality metric called “Sigma” (I tend to capitalize the term when referring to the metric and use lower case when referring to the population standard deviation). This is easiest to explain by giving a specific example:
Six Sigma: In this scenario the upper & lower spec limits are sitting precisely at (target + 6*sigma) and (target – 6*sigma). If our process is exactly centered on the target, we will only see 0.002PPM falling outside our spec limits (to get PPM we multiply the proportion of defects by one million, just to make these tiny numbers easier to work with).
But, if our process shifts by 1.5 sigma, our mean will be sitting 1.5 sigmas above or below the target. The spec limits don’t shift, so one spec limit will now be sitting at 7.5 standard deviations from the mean – we will see essentially zero defects that far out. The other spec limit will now be sitting at 4.5 standard deviations – we expect to see 3.4PPM beyond that spec limit.
So, in a 1.5 sigma shifted – six sigma scenario, we would expect a total of 3.4PPM defect rate. This is how we get 3.4PPM related to Six Sigma.
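That arithmetic is easy to verify with the standard normal CDF; here is a sketch in plain Python:

```python
from statistics import NormalDist

nd = NormalDist()

def shifted_ppm(sigma_level, shift=1.5):
    """Defect PPM when the spec limits sit at +/- sigma_level around the
    target and the mean has drifted `shift` sigmas toward one limit."""
    near = nd.cdf(-(sigma_level - shift))  # tail beyond the nearer spec limit
    far = nd.cdf(-(sigma_level + shift))   # tail beyond the farther spec limit
    return (near + far) * 1_000_000

print(round(shifted_ppm(6), 1))  # 3.4 PPM for the shifted six sigma scenario
```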
For a three sigma scenario, the spec limits are sitting at +/- 3 standard deviations (sigmas) from the target. If the distribution is shifted 1.5 standard deviations (remember the spec limits don’t shift), then one spec limit is now at 4.5 standard deviations from the mean and the other spec limit is at 1.5 standard deviations. We can expect 3.4PPM beyond the 4.5 sigma spec limit and we can expect 66,807.2PPM beyond the 1.5 sigma spec limit. Combined we expect 66,810.6PPM from the 1.5 sigma shifted – three sigma scenario. This is how 66,811PPM is associated with 3 Sigma – although many people, for some reason, ignore the 3.4PPM on the one side and use simply 66,807PPM.

November 27, 2001 at 2:13 pm #70201
You did not specify the test that the p-value is associated with. It sort of sounds like you are talking about a test of normality.
There are several goodness of fit tests for normality – probably the most common are the Shapiro-Wilk Test (the Ryan-Joiner Test in MINITAB), the Anderson-Darling A^2 Test, the Kolmogorov-Smirnov test, and the chi-square test.
The formulas are more complex than can be presented in this format, but they are all covered in Ralph B. D’Agostino & Michael A. Stephens’ book “Goodness-of-Fit Techniques”.
The Shapiro-Wilk test uses the correlation observed in the normal probability plot. The Anderson-Darling and Kolmogorov-Smirnov tests both compare the empirical cdf to that of the best-fit normal curve. The Anderson-Darling test pays more attention to the tails than does the K-S test.
D’Agostino & Stephens’ recommendation is that the Shapiro-Wilk test and Anderson-Darling A^2 test are among the best to use. They indicate that the Shapiro-Wilk type tests are probably overall most powerful. MINITAB defaults to the Anderson-Darling test, but I don’t really know why.
D’Agostino & Stephens go on to say:
“For testing for normality, the Kolmogorov-Smirnov test is only a historical curiosity. It should never be used. It has poor power in comparison to the above procedures.”
and
“For testing for normality, when a complete sample is available the chi-square test should not be used. It does not have good power when compared to the above tests.”
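As a rough illustration of the Shapiro-Wilk/Ryan-Joiner idea – correlating the ordered data with normal scores – here is a sketch in plain Python. The Blom-style plotting-position formula is one common convention, not necessarily what any particular package uses, and this is the idea only, not a calibrated test with critical values:

```python
import random
from statistics import NormalDist, fmean

def normal_plot_correlation(data):
    """Correlation between the sorted data and the normal quantiles expected
    at each rank. Values near 1 are consistent with normality."""
    n = len(data)
    # Blom-style plotting positions for the expected normal order statistics
    scores = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
              for i in range(1, n + 1)]
    xs = sorted(data)
    mx, my = fmean(xs), fmean(scores)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, scores))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in scores)) ** 0.5
    return num / den

random.seed(2)
normal_sample = [random.gauss(10, 2) for _ in range(100)]
skewed_sample = [random.expovariate(1.0) for _ in range(100)]

print(round(normal_plot_correlation(normal_sample), 3))  # close to 1
print(round(normal_plot_correlation(skewed_sample), 3))  # noticeably lower
```

A real Shapiro-Wilk or Ryan-Joiner test turns this correlation into a p-value by comparing it against tabulated critical values for the sample size.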
November 26, 2001 at 9:46 pm #70197
The ASQ book “How to Choose the Proper Sample Size” by Gary G. Brush gives the following formula (kinda tough in this format):
n = [Z{(1+A)/2} / ARCSIN{(delta/100) / SQRT(PB*(100-PB)/10000)}]^2
where Z{p} is the z-value for the standard normal distribution that gives a lower tail area of p.
A is the confidence level expressed as a proportion (between 0 & 1)
delta is the desired uncertainty in the estimate of the proportion (think of it as the +/- confidence half-interval for p).
PB is the estimated “toward .5” bound for the proportion being estimated.

November 20, 2001 at 7:26 pm #70075
Using:
Sigma =NORMINV(1-DPMO/1000000,1.5,1)
The Excel NORMINV function’s estimate of Sigma is accurate to two decimal places between 1.4 and 5.7 Sigma. It is accurate to one decimal place between 5.7 and 6.1 Sigma.
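The same conversion can be checked outside Excel. As a sketch using Python’s standard library, NormalDist.inv_cdf plays the role of NORMINV:

```python
from statistics import NormalDist

def sigma_from_dpmo(dpmo):
    """Equivalent of =NORMINV(1-DPMO/1000000, 1.5, 1): the inverse normal
    CDF with mean 1.5 and sd 1, i.e. the 1.5-shifted Sigma level."""
    return NormalDist(mu=1.5, sigma=1).inv_cdf(1 - dpmo / 1_000_000)

print(round(sigma_from_dpmo(3.4), 2))      # 6.0
print(round(sigma_from_dpmo(66807.2), 2))  # 3.0
```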
Using a much longer formula:
Sigma =SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))-(2.515517+0.802853*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))+0.010328*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^2)/(1+1.432788*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))+0.189269*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^2+0.001308*SQRT(LN(1/(1-EXP(-DPMO/1000000))^2))^3)+1.5
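With its minus signs restored, the long formula is recognizable as the classic Abramowitz & Stegun rational approximation to the inverse normal tail, plus the 1.5 shift. Here is a Python transcription as a sketch; the leading constant 2.515517 is assumed from the published approximation, since the other coefficients match it digit for digit:

```python
import math

def sigma_long(dpmo):
    """Rational-approximation version of the DPMO-to-Sigma conversion."""
    p = 1 - math.exp(-dpmo / 1_000_000)   # tail probability
    t = math.sqrt(math.log(1 / p ** 2))   # the repeated SQRT(LN(...)) term
    z = t - ((2.515517 + 0.802853 * t + 0.010328 * t ** 2) /
             (1 + 1.432788 * t + 0.189269 * t ** 2 + 0.001308 * t ** 3))
    return z + 1.5

print(round(sigma_long(3.4), 2))  # approximately 6.0
```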
This “long” formula is accurate to two decimal places between 3.2 and 9 Sigma. It is accurate to one decimal place between 2.5 and 3.2 Sigma. It should not be used below 2.5 Sigma or above 9 Sigma.

November 20, 2001 at 7:12 pm #70073
Both.
Sometimes we get specs from the customer. Sometimes we don’t, so we determine them ourselves. Sometimes we create specs more stringent than the customer’s and use those instead of the customer’s.

November 20, 2001 at 6:51 pm #70072
The sample size tool in MINITAB is pretty darn easy to use and highly recommended. If you don’t have adequate statistical software . . .
https://www.isixsigma.com/library/content/c000709.asp – discussion
http://www.health.ucalgary.ca/~rollin/stats/ssize/n1.html – calculator
(Warning: I have not confirmed the calculator’s calculations – swim at your own risk)

November 17, 2001 at 11:42 pm #70018
This is a prime example of the silly irritating little questions that this person tends to post.
All he has to do is go to http://www.google.com and enter the post’s title “The 68-95-99.7 Rule” into the search field. He’d get at least a handful of nice descriptions.
Please stop being so darn lazy and at least do a search of the internet before posting your questions here.

November 16, 2001 at 8:16 pm #70012
Even two people with identical degrees from identical universities can have quite different capabilities, strengths, weaknesses, and areas of expertise. The same is going to be true for two Black Belts.
What is the driving force for this common standard?
Is the desire for “certification” based upon a potential employer’s need to easily weed through a plethora of applications? Get real. Even if every single applicant attended the exact same training classes, there will be that natural variation plus the varied backgrounds which will make some applicants more capable than others.
I’ve been in the Six Sigma Black Belt business as long as it has existed. I appreciate the need for standardization within a company, but I just don’t see why standardization is necessary between companies.
November 14, 2001 at 5:34 pm #69966
Bang!!
You hit the nail right on target!
Many of the poster’s questions have appeared to be generic homework-like questions rather than genuine requests for help.
Also, if the poster is simply trying to instigate discussion, I have other things to do than discuss minor details of common techniques.
I am more than happy to help those who seem to have done some research or appear to be truly stuck, but when the question is “What is Six Sigma?”, they clearly would rather take my time than “waste” theirs searching for the answer.
For me, the real fun is answering a tricky technical question posted by someone who really seems to have tried or seems to be stuck. The generic questions already answered by the iSixSigma website just don’t tickle my answering response. I think it’s kind of like the chicks who strain to get a worm from their mother. The dull lifeless chick never gets fed.

November 14, 2001 at 5:24 pm #69965
The standard deviation is a measure of variation, usually applied to normal distributions, although it is also defined to be the square root of the second moment of any distribution.
The standard deviation is simply the square root of the variance. They are one-to-one. For some circumstances, such as linear combinations of normal variates, the variance is easier to work with (the variance of a sum is the sum of the variances). For other circumstances, such as a discussion about the “width” of the distribution, the standard deviation is easier to work with, since it is in the same units as the random variable itself.

November 14, 2001 at 2:13 pm #69962
Also check out this Quality Digest Article:
http://www.qualitydigest.com/oct97/html/excel.html

November 14, 2001 at 2:09 pm #69961
My own favorite reference for DOE continues to be Douglas Montgomery’s “Design and Analysis of Experiments”. A very well written text – and well received by the statistical community.
Here are a few web sites that might help:
NIST/Sematech Engineering Statistics Handbook: http://www.itl.nist.gov/div898/handbook/index.html
Statistical DOE Project: http://www.macomb.k12.mi.us/math/web1.htm
A Taguchi vs Classical DOE site: http://www.kavanaugh.com/doe_taguchi.html
John Grout’s Poka-Yoke Pages: http://www.campbell.berry.edu/faculty/jgrout/

November 12, 2001 at 1:54 pm #69891
In response to Ravi’s comments (shown in bold):
1) GRR number shall always be looked in reference to the tolerance. Ultimately what matters is the ability of the gage to resolve a tolerance.
I have to cordially disagree. There are many occasions when the gage is used to take measurements for process improvement activities, such as DOE’s. In these cases, a gage’s ability to detect improvements MUCH smaller than the tolerance can result in a substantial quality improvement. In these cases it is more important to have the measurement variation small with respect to the process variation (which I’m assuming to be equivalent to the experimental variation).
2) The acceptability numbers are should be also in reference to tolerance. A common industry standard is 5% for a good gage
Again, I have to disagree. The acceptance criteria for gage R&R are clearly stated in the AIAG MSA Reference Manual. As a matter of fact, suppliers to the “Big 3” and other QS-9000 subscribers are contractually obligated to follow these acceptance criteria unless they have written customer approval for alternative acceptance criteria.
Under 10 percent error – acceptable measurement system.
10 percent to 30 percent error – may be acceptable based upon importance of application, cost of measurement device, cost of repair, etc.
Over 30 percent – considered not acceptable – every effort should be made to improve the measurement system.
3) The acceptable GRR number is a very much BUSINESS decision. It is all about RISK management. If you can handle lot of risk, you can have a sloppy GRR. In general, I may say that a critical part may require better GRR than a non critical part.
I agree.
4) GRR for destructive testing cases is in theory not possible (that is what an expert with a Ph.D. told me). However, there are always engineering approaches and the results should be interpreted with other information such as part-to-part variation etc.
Again, I agree, at least in theory. Close approximations to the correct GR&R results may be possible if nearly homogeneous samples can be obtained from a single batch or similar circumstance. These homogeneous samples are then treated as pseudo-repeated measurements. Another method is to find nondestructive ways of replicating the taking of measurements, such as hanging weights on a strain gage, commonly used for pull breakage tests. Even this may not accurately simulate the actual testing conditions though.

November 7, 2001 at 1:45 pm #69786
I should have mentioned that the AIAG MSA reference manual clearly prefers the metrics based upon the production variation over those based upon the tolerance.
I quote from the chapter titled “MEASUREMENT SYSTEM DISCRIMINATION”:
“Thus a recommendation for adequate discrimination would be for the apparent resolution to be at most one-tenth of the total process six sigma standard deviation instead of the traditional rule which is the apparent resolution be at most one-tenth of the tolerance spread.”

November 7, 2001 at 1:40 pm #69784
Oh no! Now ASQ is going to try to start certifying Six Sigma consulting companies!!!
November 7, 2001 at 1:39 pm #69783
The same criteria apply when using process variation in the denominator. The AIAG MSA reference manual says:
“Acceptability Criteria – The criteria as to whether a measurement system is satisfactory are dependent upon the percentage of part tolerance or the manufacturing production process variability that is consumed by measurement system variation. For measurement systems whose purpose is to analyze a process, a general rule of thumb for measurement system acceptability is as follows:
Under 10 percent error – acceptable measurement system.
10 percent to 30 percent error – may be acceptable based upon importance of application, cost of measurement device, cost of repair, etc.
Over 30 percent – considered not acceptable – every effort should be made to improve the measurement system.
The final acceptance criteria of a measurement system should not come down to a single set of indices. The performance of the measurement systems should also be reviewed using graphical analyses.”

November 6, 2001 at 5:21 pm #69773

November 6, 2001 at 5:17 pm #69772
It seems you could do a combination of both.
You can learn a lot about problem solving, FMEA, process capability, and control charting just from the internet and via books. The MSA, FMEA, & SPC reference manuals from AIAG (http://www.aiag.org/publications/quality/dcxfordgm.html)
provide a wealth of information for only $38!!
Find out what statistical software JP Morgan uses (most Six Sigma companies use MINITAB), and consider purchasing a copy of that software and then buy ActivStats for that software (http://www.activstats.com) ($300 from Minitab Inc).
The software will cost more than the combined training!!
0November 6, 2001 at 4:53 pm #69768I myself don’t see those costs as too outrageous, depending on what exactly the consultant does. The GB should easily recoup those costs fairly rapidly.
I don’t see the GB, BB, & MBB as a “caste” system. In my business these labels are used to provide an easy/quick identification of a person’s level of skill and experience. BB’s are expected to develop GB’s & new BB’s. MBB’s have a leadership, training, mentoring, and consulting role.
In other businesses (GE) the titles also distinguish the person’s job. What I mean is that GB’s stay in their current job (the inside man that provides local expertise), BB’s take on a new leadership job (the outside man brought in to lead), and MBB’s take on the training, mentoring, and consulting role.0November 4, 2001 at 2:34 pm #69721Six Sigma methods are applicable to all industries and service organizations.
This web site provides a good start. You can do searches using http://www.google.com to find details on specific methods. For statistics I tend to prefer this link:
http://davidmlane.com/hyperstat/
For FMEA try
http://www.fmeainfocentre.com/
For SPC, MSA, and FMEA I strongly recommend you purchase the AIAG reference manuals, all available at:
http://www.aiag.org/publications/quality/dcxfordgm.html
For problem solving & root cause analysis, here is a pretty good link:
http://www.mapnp.org/library/prsn_prd/prob_slv.htm
0November 4, 2001 at 2:14 pm #69718Jim,
Have you seen Pyzdek’s book? If so, how do you think it compares to Breyfogle’s and Harry’s books?
Those three seem to be the most often referenced Six Sigma books.
Most reviews suggest Breyfogle’s Implementing Six Sigma is the most usable guide to Six Sigma, from a tools standpoint. Would you agree?0November 4, 2001 at 2:00 pm #69717OK, get ready for a long equation that I use in Excel. It is accurate to two decimal places between 3.2 and 9 Sigma, and to one decimal place between 2.5 and 3.2 Sigma. It should not be used below 2.5 Sigma or above 9 Sigma (were you that lucky).
Sigma equals . . .
=SQRT(LN(1/(1-EXP(-C1/1000000))^2))-(2.515517+0.802853*SQRT(LN(1/(1-EXP(-C1/1000000))^2))+0.010328*SQRT(LN(1/(1-EXP(-C1/1000000))^2))^2)/(1+1.432788*SQRT(LN(1/(1-EXP(-C1/1000000))^2))+0.189269*SQRT(LN(1/(1-EXP(-C1/1000000))^2))^2+0.001308*SQRT(LN(1/(1-EXP(-C1/1000000))^2))^3)+1.5
Where C1 is the cell containing your DPMO.
You can also use the short form (uses Excel’s function):
Sigma equals . . .
=NORMINV(1-C1/1000000,1.5,1)
This formula is accurate to two decimal places between 1.4 and 5.7 Sigma, and to one decimal place between 5.7 and 6.1 Sigma. Outside of those ranges it should not be used.
I prefer the first long formula when my process is around 5.5 Sigma or above, since it is much more accurate than the short formula.
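For anyone working outside Excel, the same conversion is easy to script. This is a sketch in Python (the function name is mine, not from the post); it uses the standard library's exact inverse normal rather than a rational approximation, so it is accurate over both ranges discussed above:

```python
from statistics import NormalDist

def dpmo_to_sigma(dpmo: float) -> float:
    """Convert DPMO to a short-term Sigma level (1.5-shift, one-tail
    convention, matching the NORMINV short form above)."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# 308,538 DPMO corresponds to roughly 2 Sigma under this convention
print(round(dpmo_to_sigma(308_538), 2))
```

Note this matches the one-tail NORMINV formula, not a two-tail table.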
Hope that helps.0November 2, 2001 at 2:39 pm #69689The beta risk is most certainly considered and minimized IF the testing is set up correctly. The beta risk is minimized by using proper sample sizes.
Before any data are collected you need to consider the minimum sample sizes required for the analysis that will be used. The maximum acceptable beta and alpha are both involved in the calculation of the minimum sample size.
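As a rough sketch of that calculation (my function, using the normal approximation only – exact t-based answers from software will be slightly larger for small n):

```python
from math import ceil
from statistics import NormalDist

def min_n_two_sample(sigma, delta, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided two-sample t-test.

    Normal approximation: n ~= 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2
    sigma = process standard deviation, delta = difference to detect,
    alpha = test alpha, power = 1 - beta.
    """
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # alpha risk (two-tailed)
    z_b = z.inv_cdf(power)           # power = 1 - beta risk
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)
```

For example, detecting a 1-sigma shift at alpha = 0.05 and 80% power gives 16 per group by this approximation (software with the t correction reports 17).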
In MINITAB, you would use the Stat > Power and Sample Size command. You select the Power/Sample Size tool that matches the test you will use for the analysis. Then you will typically enter the process standard deviation, the test resolution needed, the test alpha, and the test beta (and possibly whether this is a one- or two-tailed test). The software does the rest.0November 1, 2001 at 2:00 pm #69643GT is correct.
By manufacturing several (3) bonds without changing the equipment settings, there is an assumption made that these three bonds are nearly identical. Of course the variation measurement you obtain will actually be the combination of measurement variability PLUS part-to-part variability for very similar parts. There is really no way to get around this. For destructive tests this will always be confounded.
The idea is that you are hoping that short-term part-to-part variability is much less than long-term part-to-part variability. This may or may not be true.
Another consideration is to find another way to exercise the gage and obtain a measurement without actually breaking parts. Try repeatedly hanging a weight from the device to see its repeatability – or something similar.
If the p-value for a correlation coefficient test is less than 0.05, it indicates that the correlation coefficient IS significantly different from zero (either positive or negative) at the alpha = 0.05 level.
This means that there is some significant amount of linear relationship between your two variables of interest.
This test uses a test statistic t0 = [r*SQRT(n-2)]/SQRT(1-r^2). It has been proven that IF the true correlation coefficient is equal to zero, then t0 will follow the t-distribution with n-2 degrees of freedom.
Extreme values of this t0 statistic indicate that t0 does not actually follow a t-distribution with n-2 df, and thus that the true correlation coefficient is NOT equal to zero.
Extreme values of t0 are characterized by small p-values. Thus small p-values indicate that the true correlation coefficient is NOT equal to zero.
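Computing r and t0 yourself is straightforward. A sketch in Python (names are mine; compare |t0| to a t table with n-2 df, or let software give you the p-value):

```python
from math import sqrt

def corr_t_statistic(x, y):
    """Pearson r and the test statistic t0 = r*sqrt(n-2)/sqrt(1-r^2).

    Assumes x and y are non-constant and |r| < 1. Under H0 (true
    correlation = 0), t0 follows a t-distribution with n-2 df.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / sqrt(sxx * syy)
    t0 = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return r, t0
```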
Ken K.0October 30, 2001 at 8:46 pm #69620You’re Hired!!
0October 30, 2001 at 4:12 pm #69615Please post a link, or at least reference a text or journal article that describes what you call “variable search”. Your posts haven’t given much of a clue as to what, specifically, you are talking about.
I have been using designed experiments for over 15 years and have only heard the term used in the sense of the use of fractional factorial screening designs. Are you talking about supersaturated designs?
Ken K.0October 24, 2001 at 1:02 pm #69468Nope,
Motorola still owns the Six Sigma trademark, but unlike Mikel Harry, has been kind enough to allow others to use its trademark without seeking legal action.
Though there seems no tendency to do so, Motorola still has every legal right to demand the “Six Sigma” community stop using its trademark.
On the other hand, Mike Harry tried to trademark a whole bunch of other terms, such as “black belt”, “green belt”, “master black belt”, “breakthrough”, and “champion”. See http://www.qualitydigest.com/may00/html/news.html
Micrografix lists this statement in its website:
Black Belt, Master Black Belt, Green Belt, and Champion are Service Marks of Sigma Holdings, Inc.
A Six Sigma Academy site says Champion, Master Black Belt, Black Belt, and Green Belt are registered service marks of the Six Sigma Academy
I have heard rumors that Motorola lawyers have threatened legal action if Mikel Harry tries to enforce these marks. Clearly these terms were used by the Motorola Six Sigma Black Belt organization long before Mikel left Motorola, but I don’t know what the current status is.
As far as the methods are concerned, those are public domain (except for the Shainin stuff).
Ken K.0October 22, 2001 at 4:00 am #69431Care does need to be taken when fitting statistical models to data obtained using Monte Carlo methods.
The ability to estimate model coefficients relies on the full rank of the X matrix (the data going into the fit). If the design is not balanced the coefficients may not be estimable – although a nearly balanced design will likely provide nearly accurate main effect coefficients and likely less accurate interaction effects.
Since most Monte Carlo results are not controlled, they are likely to be moderately unbalanced.
I recommend people consider using simulated data taken from DOE arrays instead. This ensures the proper design balance and allows the investigator to properly investigate interaction effects and understand the relative impact of each of the terms on the total variation. Of course there will be no error term, since the simulation typically has no error, but the sums of squares for the different sources of variation will be correct.0October 22, 2001 at 4:00 am #69432October 22, 2001 at 4:00 am #69433The nicest way I’ve seen is to relate the standard deviations to the lengths of the sides of a right triangle.
The length of the hypotenuse is equal to the square root of the sum of the squared side lengths.
Clearly the length of the hypotenuse can’t be simply the sum of the side lengths – that just wouldn’t make sense from a visual perspective.
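Here is the analogy in numbers (a sketch; the 3-4-5 values are arbitrary, chosen only to make the arithmetic obvious):

```python
from math import sqrt

actual_sd = 4.0        # the actual value's variability (horizontal side)
measurement_sd = 3.0   # measurement-error imprecision (vertical side)

# Standard deviations combine through their squares (variances),
# like the sides of a right triangle: the total is the hypotenuse.
total_sd = sqrt(actual_sd ** 2 + measurement_sd ** 2)   # 5.0, not 7.0
```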
The other nice thing about this analogy is that it shows the relative effect of measurement error. If the horizontal side is the actual value’s variability (standard deviation) and the vertical side is the imprecision due to measurement error (the standard deviation due to GR&R), then the hypotenuse represents the total variation that you will actually observe.0October 22, 2001 at 4:00 am #69441I’m assuming your choices are to run one analysis on just the little rocks vs running two analyses: one on little rocks and one on big rocks.
A lot depends on the nature of the variation in your gage. I would venture to guess that the repeatability would be pretty much the same regardless of the size of the rocks. But the reproducibility could be different if the operators react to the different rock sizes differently.
Whether or not the GR&R is affected by rock size, certainly you’ll need to be able to compare the GR&R to the tolerances of both rock sizes, and preferably you’ll compare the GR&R to the total variation, including the part-to-part variation for both large and small rocks. The problem with getting part-to-part variation outside of a Gage R&R study, such as from historical data, is that it will include the measurement system variation.
My advice – it is usually easier to run both studies (one for large rocks and another for small rocks) than to mess with all the assumptions you’ll have to make otherwise. If running the Gage R&R is very expensive or consumes resources, then it may be worth the “mess”.
I recommend you have a statistician help you sort out that mess.
Ken K.0October 22, 2001 at 4:00 am #69442I’ll agree that some topics are fairly consistent across many companies, but not all. I would venture to guess that 50% or more of the skills are the same for all BB’s, based upon a solid foundation of statistics and quality practices, but the other half needs to be based upon the objectives of the business.
Take for instance the Motorola program. It is quite different from the GE program in that Black Belts are not full-time Black Belts in the GE sense. In Motorola their job is to utilize the skills in their own function and to encourage the use of the skills by others. In this case many of the project selection methods taught by GE are not necessarily applicable to Motorola Black Belts. Their projects develop more naturally based upon local objectives.
That’s just one example, but there are others.0October 21, 2001 at 4:00 am #69404I agree with Mike. A lot of effort is wasted on non valueadded details.
This issue is also very related to my distaste for the Black Belt standardization efforts. Each company has its own unique set of needs that demands its own type of organization and its own set of tools, methods, and solutions.
My advice is to invest in two things first: 1. Top management understanding and buy-in, and 2. Hiring/acquisition of competent expertise to define and implement the Black Belt initiative. For both of these, you can go a very long way by hiring a well-known and experienced Black Belt consultant – such as Mike’s or others. iSixSigma lists many of these consultants at http://www.i6sigma.com/co/six_sigma/
Don’t waste time worrying about the differences – worry about what YOUR business needs and how to have those needs met.0October 9, 2001 at 4:00 am #69150I agree 100% with Anon, and don’t agree with standardization or certification for “Black Belts”.
Each company has its own particular needs and doesn’t need some overbearing, supposedly not-for-profit organization (not referring to any particular one) telling it how to run its training or its business.
As someone in this forum already said, a decent interview with a person for 15 minutes should make it pretty clear if they are qualified for your particular needs.0October 5, 2001 at 4:00 am #69121Boy, you obviously have a beef with Motorola or Motorola University.
Here is a link for information on the current Motorola Black Belt Program:
https://www.isixsigma.com/forum/showmessage.asp?messageID=48840October 2, 2001 at 4:00 am #69045I can assure you that Motorola continues to have a strong Six Sigma environment AND a mature and active Six Sigma Black Belt organization/ community that has never wavered since its inception.
Keep in mind that Motorola has been involved in Six Sigma for almost 15 years!! To a great extent the Six Sigma activities have been absorbed into the daily routine of the businesses and expanded into a process that Motorola calls Performance Excellence. Think of this as Six Sigma on steroids, spreading the methods and expectations to all areas of the business and involving a wide array of metrics for business decision making. That is partially why web sites and other information don’t identify Six Sigma specifically.
Sigma levels for product are still tracked and reviewed routinely on an ongoing basis, and used as one of many bases for improvement strategies.
A Corporate Six Sigma Black Belt Steering Committee oversees and manages the Black Belt program in coordination with Corporate Quality organizations. This committee charters Sector Six Sigma Black Belt Steering Committees who are tasked with supporting and driving the program within the businesses. An annual Six Sigma Black Belt Symposium draws Black Belts from around the world and encourages active communication and growth.
The Six Sigma Black Belt program format and requirements remain essentially as they were from the very beginning. Unlike GE & others, Motorola Black Belts remain in their positions and use their Six Sigma methods to improve products and processes locally, and also to promote the use of these methods throughout the Corporation.
Motorola continues to recognize three levels: Green, Black, and Master Black. Candidates have a mentor and a management sponsor, who work together to ensure that appropriate skills and experience are received. Motorola’s objective is to have fewer (compared to most other companies), very well trained and experienced (compared to most other SS companies) Black Belts in key areas of the business to drive improvement.
Green & Black Belt recognition is based upon completion of a mildly complex matrix of skill requirements (including statistics, quality tools, problem solving), advanced technical skills electives, interpersonal skill electives, and skill demonstration requirements (this requirement is much more substantial than in most BB programs – it typically takes 1 to 2 years to complete). Green Belts require far fewer skills and demonstrations than Black Belts. Green Belt typically requires about 16 days of training; Black Belt typically requires about 30 days of training. Much more training than in most programs.
Master Black Belt recognition requires nomination from a sector Steering Committee, recommendation and recognition from Motorola upper management (Vice President), mentoring of six Black Belts, and five years of active application of Six Sigma methods since becoming a Black Belt. These Master Black Belts provide leadership, continuity, and a wealth of experience.0September 10, 2001 at 4:00 am #68602I work for a large company that has been doing Six Sigma for over ten years. Our product is comprised of circuit boards which are populated with components and inserted into cabinets. We have always calculated opportunities per product using a very simple formula:
# of Parts x # of Electrical Connections
The circuit board is considered one part.
When we coat the board with a moisture barrier, we count 1 opp per spray pass and another if we use a stencil mask.
If we glue something to the board before soldering, that gluing process is considered an extra opp.
Solder flux, primers, activators, etc.. are considered a part of another operation and are not added as opps.
Value-added operations, such as tuning and flash programming, are considered an opp. In-process testing is not.
For some other parts of our business we use this formula:
3N + P + 2
where N = number of operations which change the product and P = the number of components.
Clearly the coefficient for N could vary, depending on the particular product.0September 10, 2001 at 4:00 am #68603I don’t think the frequency is specified. It depends on your business and the reliability of your gages.
I would suggest you do linearity studies more frequently at first, maybe as follows:
When the gage is purchased or adjusted during calibration
One Day Later (after first purchase only – since there is no real measure of stability yet)
One Week Later (after first purchase only)
One Month Later
Every Month for 1 year
Every 3 Months for 2 years
Every 6 Months ongoing, or until the gage is adjusted, or if damage/change is suspected0August 30, 2001 at 4:00 am #68348In my opinion the four best texts on Design & Analysis of Experiments are:
1. Douglas Montgomery’s “Design & Analysis of Experiments”
2. Douglas Montgomery’s “Design & Analysis of Experiments”
3. Douglas Montgomery’s “Design & Analysis of Experiments”
4. Douglas Montgomery’s “Design & Analysis of Experiments”
George Box & J. Stuart Hunter are phenomenal statisticians, but their book “Statistics for Experimenters”, unchanged since 1978, has not aged well. Montgomery’s book is in its 5th edition and is very modern.
Cox’s book is old enough it is in Wiley’s “classic” collection.
Schmidt & Launsby’s book is OK, but it too is showing its age. I really don’t like how they present the ANOVAs (from RS/Discover output), but the book has a nice introduction to quality improvement.
BUT if you want to learn about quality improvement you might be better off buying Douglas Montgomery’s Introduction to Statistical Quality Control text, which also covers DOE with the style I expect from Montgomery. A VERY good book overall.
They have several very good CD-ROM-based training courses related to Six Sigma. Their courses cover: Six Sigma Introduction, SPC, FMEA, Gage Maintenance & Analysis, and Mistake Proofing. I’ve seen the courses and they are all first rate.
Also, Minitab is soon to release (Sept 14) a new CD-ROM-based training course on introductory statistics (up to ANOVA; it also includes a simple introduction to designed experiments) called ActivStats. This is by far the best e-training tool I’ve seen yet. You can find out more at http://www.minitab.com .
I have not yet seen a similar product covering DOE or Problem Solving.0August 30, 2001 at 4:00 am #68352So head off to your cabin and start that manifesto.
Every decade introduces new technologies not even dreamed of before. Fire, Domesticated Livestock, Wheel, Sailing Ship, Sextant, Mathematics, Electricity, Telegraph, Telephone, Locomotive, Automobile, Radio, Television, Computer, Cellular Phone, Pager, and the list goes on. Is the wheel evil? I’ll bet you use wheels. Is electricity evil? Obviously you use electricity. Technology is inanimate – it cannot be evil.
Technology only changes people if people let it change them. The vast vast majority of people are good people who put their families first and really care about their community and their place of employment. Nothing has changed. Keep the faith!0August 29, 2001 at 4:00 am #68323KSR,
Is your place of employment embracing Six Sigma?
I would guess not.0August 22, 2001 at 4:00 am #68155Sounds like an old dog that doesn’t want to learn a new trick.
I just don’t see how anyone can fault a continuous improvement concept that emphasizes financial results, minimizing defect rates, increased customer satisfaction, and using data-based decision-making.
As some have said, the methods are not new; it’s the focus on improving processes that is “new”.
Simple answer #2: Get a good statistical package (MINITAB, StatGraphics, JMP, DesignExpert) and use it to calculate the interaction.0August 19, 2001 at 4:00 am #68108I think I can guess what’s messing you up —
Just the other day I discovered (mostly because I never noticed the option) that by default MINITAB uses unbiasing constants for both the Cpk (as specified by the AIAG SPC Reference Manual – it uses the pooled standard deviation divided by c4) and the Ppk (which is NOT specified by the AIAG SPC Reference Manual).
These unbiasing constants are near 1 when the sample size is 10 or more. With your sample size of 4, surely these constants can have an impact.
This means that Pp and Ppk values calculated in MINITAB will be slightly different than those calculated by hand. You can tell MINITAB not to use these unbiasing constants (a checkbox in the Estimate options). I found that MINITAB would kick out a message saying that certain control charts would still use the unbiasing constant – but it doesn’t for Ppk. Most likely it still does for Cpk, but not for Ppk (I’ve never checked).
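For the curious, the unbiasing constant itself is easy to compute. A sketch (function name mine) using the standard c4 formula from the SPC literature:

```python
from math import gamma, sqrt

def c4(n: int) -> float:
    """Unbiasing constant c4 for the sample standard deviation:
    c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
```

With n = 4 the constant is about 0.9213, so dividing by it inflates the estimate by roughly 8.5%; by n = 10 it is already near 1, which is why the constant barely matters for larger subgroups.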
As you were warned, the confidence interval for Cpk’s & Ppk’s with sample size of 4 will be quite large indicating that the indices are only moderately usable.0August 17, 2001 at 4:00 am #68085Hey, this one is easy. The fact is that your data are NOT normally distributed. Being lifetimes the odds are that they follow a Weibull distribution.
You never really said what you’re trying to do with the data.
If you are lucky enough to have MINITAB statistical software (http://www.minitab.com) then you can do a Weibull-based capability analysis.
If you’re creating control charts, the means are quite likely nearly normally distributed, but you might need a larger subgroup size (10-ish). You could simulate this and see for yourself using MINITAB or Excel (using Crystal Ball would be handy for that).
0August 17, 2001 at 4:00 am #68087Anytime you are using means (as in an Xbar chart) you can feel comfortable that the means will be nearly normally distributed, especially if the subgroup sample size is larger. The Central Limit Theory says that sums of random variates, regardless of their distributions, will tend toward a normal distribution as the number of samples comprizing the sum get larger. That is, means tend toward normality regardless of the underlying population distributions as the sample size increases.
See http://davidmlane.com/hyperstat/A14043.html
Control charts for individuals DO assume normality, so you need to be very careful and make sure the incoming data are normal. Another way to construct the control chart limits for non-normal data would be to use nonparametric percentiles (if you had a very large sample to base them on).
0August 7, 2001 at 4:00 am #67919Yes, Yes, and No!
0August 6, 2001 at 4:00 am #67904Although most of the actual statistical methods used are actually less than 100 years old.
Here is a timeline from an ASU website:
http://www.public.asu.edu/~warrenve/s20_stat.html
1713 Bernoulli, Ars Conjectandi
1718 de Moivre, The Doctrine of Chances
1730 de Moivre, Miscellanea Analytica
1738 de Moivre, The Doctrine of Chances, 2nd edition
1740 Simpson, The Nature and Laws of Chance
1764 Bayes’ theorem is published
1805 Legendre publishes the method of least squares
1809 Gauss, Theoria Motus Corporum Coelestium
1810 Laplace proves the central limit theorem
1812 Laplace, Theorie analytique des probabilites
1814 Laplace, Philosophical Essay on Probabilities
1834 Quetelet founds the Royal Statistical Society
1835 Quetelet, Essay on Social Physics
1842 Quetelet, Treatise on Man
1869 Galton, Hereditary Genius
1877 Galton devises the methods of correlation and regression analysis
1881 Edgeworth, Mathematical Psychics
1889 Galton, Natural Inheritance
1893 Pearson begins his publications
1900 Pearson develops the chi-square test
1901 Galton and Pearson found the journal Biometrika
1908 Gosset develops the “student’s t-test”
1915 Fisher begins his publications
1921 Fisher develops the analysis of variance
As you can see, most of the tests we now use were developed in the 20th century. R.A. Fisher’s work in the 1920’s and later produced what is now considered modern statistics, including the concepts of random sampling and analysis of variance (based upon the works in statistical theory of many great men going way back – into the 1700’s and earlier).
Of course there are many other great statisticians, including John Tukey, Gertrude Cox, William Cochran, George Snedecor, George Box, just to name a few, who have made great contributions to applied statistics.0August 2, 2001 at 4:00 am #67826The best source for APQP information is to purchase the AIAG APQP & Control Plan guide. You can purchase it at
http://www.aiag.org/publications/quality/dcxfordgm.html
Only $12 for non-AIAG members.
The 7D, to my knowledge, is just like an 8D without the last D (congratulate your team). Try this web site – it provides a short description of the 8D:
http://www.dortec.com – follow the Quality link, and then the Problem Solving Process link.0August 2, 2001 at 4:00 am #67828I’m the person who started this whole thread.
My point was simple. The DPMO-to-Sigma tables that Mikel Harry originally published many years ago were wrong – plain & simple.
Referring to the tables listed in my original post, such as
http://sd.znet.com/~sdsampe/6sigma.htm#conv
If the tables are going to be published with “exact looking” PPM values (like 308,538 for 2 Sigma, which should really be 308,770) then they should be correct to the displayed number of significant digits.
For all those who feel 10’s and 100’s of PPM’s don’t really matter at the smaller Sigma levels: I have absolutely no problem with rounding the PPM values, if it is done correctly. In the link mentioned in my original post
http://www.longacre.cwc.net/sixsigma/sigma_table.htm
the rounded PPM values below 1.5 Sigma are incorrectly rounded. That darned other tail starts to make a difference even in the thousands of PPM.
My point is simple and somewhat academic in nature (as opposed to really making a difference in how, where, or to what extent improvements are made), and certainly I don’t mean to belittle the bizillions of improvement projects that have been done in the name of Six Sigma.
The concern I have is that incorrect tables have multiplied like rabbits. It is just so easy to create correct tables (and correctly rounded tables) that it bugs me to see incorrect tables published by authorities on the subject.0August 1, 2001 at 4:00 am #67800The complexity doesn’t really matter.
One of the main ideas of the Six Sigma metric is to use opportunities per unit to specify the complexity of the product/process. By converting defects per unit to defects per opportunity we normalize the complexity of products/processes so they can be fairly compared.
Don’t waste time worrying about the complexity of your product – go start learning about it, measuring it, and trying to improve it.
Ken0July 31, 2001 at 4:00 am #67781Purchase AIAG’s SPC Reference Manual:
http://www.aiag.org/publications/quality/dcxfordgm.html
Only $12 for non-members0July 31, 2001 at 4:00 am #67782I think you did the right thing. Just yesterday I was discussing a Gage R&R for a rubber part with the supplier – I think that is also suffering from WIV.
Your results indicate that the gage CAN measure the diameter of the screw. Your next task is to figure out how to make certain you are always measuring a consistent location on the screw – maybe mount the micrometer such that it is always measuring the same distance from the screw head.
0July 31, 2001 at 4:00 am #67787Resource Engineering, Inc. is about to release (8/15) a very nice introduction to Six Sigma training program. It is CDROM based, relatively short, and introduces most, if not all, of the issues related to a Six Sigma implementation. I can’t recall the length of the program, but you can call them to find out.
The web site is http://www.reseng.com/Six_Sigma/index.htm
I’m not affiliated with them in any way, but I have seen the product and like it. They also have very nice courses in FMEA, SPC, Mistake-Proofing, and Gage Maintenance.
That web site is http://www.minitab.com/products/student/activstats.htm
If you’re not using MINITAB you can look at other ActivStats products by going to http://www.datadesk.com/ActivStats/0July 30, 2001 at 4:00 am #67765Don’t forget Reliability Analysis – parametric and nonparametric analysis of censored data (usually lifetime data). Very important when analyzing field failure data, or censored laboratory data (common example – pull tests where the fixture or part fails before the adhesive).
0July 27, 2001 at 4:00 am #67741I can’t emphasize enough what a nice book the AIAG Measurement System Analysis Reference Manual is, especially considering it only costs $11 (US dollars) for nonAIAG members. You can purchase it by following this link:
http://www.aiag.org/publications/quality/dcxfordgm.html
The Gage R&R is a methodology in which you run what amounts to a small nested experimental design – You give an operator 10 parts and have them measure each part 3 times (in random order), and then repeat this process with two additional operators. When you’re done you’ll have 90 data points (10 parts x 3 repeated measurements x 3 operators).
Now think of this as a designed experiment, BUT instead of trying to find out if there is a difference between operators, parts, and repeated measurements, you are trying to measure the variation between operators, between parts, and between repeated measurements. The idea here is that the operators represent some random subset of the potential pool of operators.
Through various computations (please!!, buy good statistical software to do these – I like MINITAB, but also find Statistica, StatGraphics, and JMP to be good) you can obtain the actual variance estimates for variation between operators, parts, and repeated measurements – if you use the ANOVA method you can even assess the interaction between operators & parts.
The idea is this – you get the variances, total them up to estimate the total variation (you can sum variances), and then assess the various sources of variation by calculating their respective percentages (called the % Contribution) of the total variation. You expect the part variation to be the largest part. The combined variation related to operators (reproducibility) and repeated measurements (repeatability) is called the Gage R&R, or just R&R. This combined variation (in terms of % contribution) should typically not be more than 1% of the total variation, and definitely not more than 5-10%.
If there is too much variation associated with the combined Gage R&R, you can use the percentages to determine if the majority of the variation is due to problems with the operators or the gage repeatability, and then make the appropriate corrections.
Many descriptions of the Gage R&R method also work with the ratios of the standard deviations, but keep in mind those don’t add up to 100%. The typical cutoff for the standard-deviation-based %Gage R&R is 10% of total.
If the Gage R&R percentage is too high and you’ve done everything you can to improve the system, a quick and dirty method of improving precision is to just take multiple readings and average them (instead of just using one measurement).
Note to those used to the 10% cutoff – above I am talking about the ratios of the varances, not the ratio of the standard deviation. I prefer that metric since it sums to 100%. To move from the standard deviation world to the variance world you simply square the value, so I simply square the 10% cutoffs and get 1%.
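To make the arithmetic concrete, here is a minimal sketch of the % Contribution calculation – the variance components are made-up inputs standing in for what your statistical package would report:

```python
def grr_pct_contribution(var_part, var_operator, var_repeat, var_interaction=0.0):
    """% Contribution from variance components (hypothetical inputs).

    Variances are additive, so each source can be expressed as a
    percentage of the total; standard deviations cannot be summed this way.
    """
    var_total = var_part + var_operator + var_repeat + var_interaction
    # Gage R&R = reproducibility (operator + interaction) + repeatability
    var_rr = var_operator + var_interaction + var_repeat
    return {
        "%Part": 100 * var_part / var_total,
        "%R&R": 100 * var_rr / var_total,
    }

# e.g. part variance 96, operator variance 1, repeatability 3 (invented numbers)
result = grr_pct_contribution(96, 1, 3)
# here %Part works out to 96% and %R&R to 4% – above the 1% guideline
```

Note the square/sum/divide order: square to variances first, then take percentages.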
There is an excellent article on GR&R at this link:
http://www.minitab.com/company/virtualpressroom/articles/index.htm
0
July 24, 2001 at 4:00 am #67667
Because the whole idea of MSA is to understand the sources of variation that exist in your measurement system. If the accuracy of the measuring device varies significantly over the operating range (range of measurement), then don’t you think that is something you’d want to know?
0
July 18, 2001 at 4:00 am #67603
Take a look at a web site that discusses “How to Write Good Procedures”:
http://www.isogroup.iserv.net/procedur.htm
0
July 16, 2001 at 4:00 am #67584
Ultimately you’ll be calculating DPMO by dividing the total number of defects by the total number of opportunities (for a defect), and then multiplying by 1,000,000.
The toughest task is to assess your process(es) and determine the number of opportunities for defects per unit of work (OPU) (for us it is per product). You need to be honest and realistic. Consider the potential places in your process where a defect can occur, but don’t start adding up all the different kinds of defects.
For an electronic PC board we tend to use Opps = # of parts + # of electrical connections. For completion of forms, you might count the number of fields to be filled out (don’t worry about all the different ways a field could be filled in wrong). It might be the number of steps taken to complete the task.
Now, start tracking the number of defects per unit, or start summing up the total number of defects and track the total number of units. To calculate an average defects per unit (DPU) you can just divide the total number of defects by the total number of units.
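The calculation is just a couple of divisions; a sketch (the counts are invented):

```python
def dpmo(total_defects, total_units, opportunities_per_unit):
    # average defects per unit
    dpu = total_defects / total_units
    # defects per million opportunities
    return 1_000_000 * dpu / opportunities_per_unit

# e.g. 57 defects across 1,200 boards, 1,500 opportunities per board
rate = dpmo(57, 1200, 1500)   # about 31.7 DPMO
```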
DPMO = 1,000,000*DPU/OPU
= 1,000,000*Total Defects / (Total Units * OPU)
0
July 13, 2001 at 4:00 am #67569
This is starting to look like an ISO9000/QS9000 discussion forum.
I think you might be better served posting your message at the Cayman Cove forum (http://16949.com).
Go to Forums and then select the appropriate discussion group in which to post your question.
0
July 12, 2001 at 4:00 am #67547
It’s really pretty simple – you use either the CPL or CPU, whichever makes sense.
I’m happy to say that MINITAB does the “correct” thing when only one spec limit is provided. Make sure, though, that you don’t mark the Hard Limit option unless it truly is a physical impossibility to fall beyond that hard limit; otherwise the calculations will be incorrect.
0
July 12, 2001 at 4:00 am #67548
No. Design functions are only associated with QS9000 when they are connected to the manufacture of production materials or parts, or finishing services provided directly to the OEM.
In the introduction of QS9000 3rd Edition it states:
“QS9000 applies to all internal and external supplier sites of:
a) production materials,
b) production or service parts, or
c) heat treating, painting, plating or other finishing services directly to OEM customers subscribing to this document.”
“Remote locations where design function is performed shall undergo surveillance audits at least once within each consecutive 12-month period.”
Research that is not directly tied to the QS9000 subscriber product or service (for finishing services) is generally outside the applicability of QS9000.
0
July 12, 2001 at 4:00 am #67549
All five MSA studies are required for measurement systems referenced in the Control Plan (in manufacturing). In our design functions only GR&R has been emphasised by the auditor, although we still do some other studies.
In the PPAP it clearly states that MSA studies “for gage families” are not acceptable. PPAP also says that MSA studies are required for “all gages and equipment used to measure special product characteristics or special process parameters.” That means if one of these is left out of the Control Plan for some reason, you still have to do MSA studies on it.
0
July 12, 2001 at 4:00 am #67559
Yes, that is what our registrar is expecting.
It is also the right thing to do. How do you know how your calipers are really behaving unless you measure their behavior? I’m sure you calibrate them – bias. Does their bias change over time? – stability. Does their bias change over the operating range? – linearity. Neither of these questions is that bizarre, nor are they difficult.
Of course you also need to do the GR&R on these.
If you don’t agree, you need to ask the question of your own registrar. They are the final word for your own certificate.
0
July 11, 2001 at 4:00 am #67517
The basic idea here is that the parts themselves have variation that must be measured. This within-part variation would normally be incorrectly added to the repeatability, but if measured, it can then be removed from the R&R.
The formula you refer to comes from standard nested design analysis methods. I might suggest you look at Douglas Montgomery’s “Design and Analysis of Experiments”, Chapter 13.
An interesting note: I took the data from Figure 16 in the AIAG MSA manual and ran it using the ANOVA method (using MINITAB’s GLM tool with the variance component output and then generating the % Contribution and % Study Variation values using an Excel table). I got very different results from those listed in Figure 17. Here I’m giving you the values from Figure 17 and the % Study Variation counterparts when using MINITAB GLM’s variance components.
Source = Fig 17 Value / ANOVA Value
%EV = 4.2% / 13.8%
%AV = 15.3% / 13.6%
%R&R = 15.9% / 19.4%
%PV = 97% / 90.8%
%WIV = 18.7% / 37.0%
I know the math in Figure 16 includes lots of rounding, but I always hate to blame such large differences on “rounding errors”.
0
July 10, 2001 at 4:00 am #67500
You hit the nail on the head. The “sidedness” of the tolerance has nothing to do with whether the measurement system has sufficient resolution to identify changes in the process.
0
July 6, 2001 at 4:00 am #67463
One of my biggest concerns related to Six Sigma standardization is that I don’t think most people realize how different Six Sigma companies can be. Most, by far, of the current Six Sigma companies are Mikel Harry clones, or clones of clones. But the original Six Sigma was, and still is, in many ways quite different from what Mikel Harry brought to GE.
My guess is that this “panel” didn’t have any members from the original Six Sigma companies (Motorola, IBM, Kodak, Texas Instruments). I have even seen surprising departures in methodologies between those companies since they parted Six Sigma ways in the early-to-mid ’90s.
The other concern is that once we start to “standardize” we may impede the natural evolution that breeds improvement. Mutations are sometimes beneficial (not to suggest that Mikel Harry is a mutant — I’d better not go there).
Ken K.
0
July 5, 2001 at 4:00 am #67452
Standard for who? By who? To who?
(This is starting to sound like a Dr. Seuss book – “I do not like Six Sigma – I do not like it in a box – I do not like it with a fox …” — just kidding.) Even when ASQ creates a “standard” there still isn’t a standard.
Six Sigma is still whatever each implementing company needs/wants it to be. At this point I really don’t see any problem with that.
The only time the definitions become important is when people are out looking at new jobs and claiming to be a Black Belt or Six Sigma expert or whatever. In that case I tend to simply say “buyer beware”!
0
July 5, 2001 at 4:00 am #67453
I heartily second that recommendation.
Not only business schools, but most engineering, land-grant, and major universities offer very good training in statistics. In some you’ll find, and can take advantage of, very nice relationships between the Statistics department and other departments such as Engineering, Biology, Economics, Agriculture, Business, etc.
Check with the Statistics or Mathematics department at the school you’re planning on attending. It may even be as good as Iowa State University – my dear allma matter (sp? – my degree is in Statistics, not English).
0
July 4, 2001 at 4:00 am #67432
Just because it was designed by “statistical engineering” (whatever that is) doesn’t mean that the people who used these tools knew how to use them or did a good job of using them. Clearly they did something wrong.
It appears there were characteristics of the bridge that went unnoticed or at least untested.
Just because the outcome failed doesn’t imply that the basic methodologies themselves are to blame.
The space shuttle is another commonly cited example. The models were just fine, except that there was an incorrect assumption that the seals were statistically independent and, therefore, acting as redundant systems. In actuality they were not independent, and when one failed due to cold the others had a much higher propensity to fail – thus BOOOM!
My point – the methods & tools were valid and useful – the assumptions made were not.
Ken K.
0
June 28, 2001 at 4:00 am #67362
1. Go to http://www.google.com
2. Enter the words “simpson paradox” in the large field in the middle of the screen (without the quotes)
3. Press the Enter key or click on the Google Search button
4. Follow the links listed. They’ll contain lots of information on Simpson’s Paradox.
Ken K.
(mildly irritated that you didn’t even try to search for information on the internet first)
0
June 28, 2001 at 4:00 am #67365
Even if you only have one operator (usually), you’ll still want to understand the typical variability between operators – unless, of course, your current operator is going to be forced to do that same job FOREVER.
If you still want to use MINITAB with just ONE operator, you can use the General Linear Model tool to do the analysis.
Enter the column containing the measurements in the Response field, and enter the column identifying the different parts into BOTH the Model and Random factors fields. Make sure you click on the Results button and select the “Display expected mean squares…” option.
At the bottom of the output you’ll see “Variance Components” listed for the parts column and for “Error”. The latter item relates to repeatability. These values are variances.
The Total variance is calculated as
TotalVar=PartVar + ErrorVar
The Repeatability and R&R variance is equal to the ErrorVar, since there is no reproducibility, nor is there an interaction between operators and parts.
Take the square root of each of these to get the respective Total, Part, and Repeatability (R&R) standard deviations.
Multiply each of the standard deviations by 5.15 to get TV, PV, and EV, respectively.
Obtain the %Study Variations by dividing PV and EV by TV. Multiply by 100 to get a % value. The resulting 100*EV/TV is the %R&R value of interest to most. This value should be less than 10%.
If you divide each of the variances by the respective TotalVar you’ll get the % Contribution that MINITAB outputs. They are 100*PartVar/TotalVar and 100*ErrorVar/TotalVar.
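Pulling those steps together, here is a sketch of the hand calculations – the two variance components are placeholders for the values MINITAB’s GLM output would give you:

```python
from math import sqrt

def one_operator_grr(part_var, error_var):
    # Total variance is the sum of the components
    total_var = part_var + error_var
    # Convert variances to standard deviations, then to 5.15-sigma spreads
    tv = 5.15 * sqrt(total_var)   # Total Variation
    pv = 5.15 * sqrt(part_var)    # Part Variation
    ev = 5.15 * sqrt(error_var)   # Equipment Variation (repeatability = R&R here)
    return {
        "%StudyVar R&R": 100 * ev / tv,        # want this below 10%
        "%StudyVar Part": 100 * pv / tv,
        "%Contribution Part": 100 * part_var / total_var,
        "%Contribution R&R": 100 * error_var / total_var,
    }

# e.g. PartVar = 99, ErrorVar = 1 (invented values):
res = one_operator_grr(99, 1)
# %StudyVar R&R comes out at 10% and %Contribution R&R at 1% –
# the same cutoff expressed two ways, as noted earlier
```

Notice the 5.15 multiplier cancels in the %StudyVar ratios; it only matters if you report TV, PV, and EV themselves.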
0
June 27, 2001 at 4:00 am #67305
The only thing I’ve seen that makes sense to me is:
1. Try to assess the gage independent of the object being tested. For example, if you’re doing a test where one object is pulled from another, and the force of the pull is captured in pounds, you might be able to hang calibrated weights onto the gage and assess the measurement reading. It takes quite a bit of knowledge of the gage and the possible implications/errors that might arise from the activity.
2. Create “pseudoreplicates”. Find parts that are as close to identical as possible. Maybe use parts from the same batch or that were manufactured at the very same time. This obviously has its limitations.
You’ll use the pseudoreplicates to estimate repeatability.
Since even pseudoreplicates can only be measured one time, you can’t estimate operator variation (reproducibility) the usual way (each operator measures each part). You’ll need to run this as a nested (or hierarchical) design instead of the usual crossed design. MINITAB has a procedure for this now that should help.
If you’re not familiar with nested designs I might suggest you pick up a copy of Douglas Montgomery’s “Design and Analysis of Experiments” book. It is well worth the price. He’s an excellent author (I like his writing style).
If you don’t already have a good general statistics textbook I would suggest his “Applied Statistics and Probability for Engineers”. It covers a wide range of topics except, unfortunately, measurement system analysis. For that I recommend the Automotive Industry Action Group’s MSA Reference Manual. You can find it at http://www.aiag.org/publications/quality/msa2.html
If you don’t already have them I also strongly recommend their SPC and FMEA references.
Ken K.
0
June 27, 2001 at 4:00 am #67306
I agree with the other respondents.
If your management’s buy-in precludes the investment in statistical and other quality tools, then I really don’t think you have sufficient buy-in for a successful Six Sigma program.
My other thought is that if you or yours see MINITAB as being too sophisticated/complex for what you want to do, then you almost certainly need more training, and maybe experience, before embarking on a full-fledged Six Sigma program.
Maybe start by taking some statistics training AND start observing the processes in your functional area. Just the use of observation and simple graphing tools can take you a long way.
Don’t forget to get some problem solving training too.
0
June 26, 2001 at 4:00 am #67282
You might need to reconsider your definition of Sigma.
I take mine from the article “The Nature of Six Sigma Quality” by Mikel Harry, published by the Motorola University Press in the early 1990’s (the specific copyright date varies depending on the date of printing).
By definition, an X Sigma process is one in which, when centered halfway between the lower and upper specification limits (LSL & USL), the LSL is located at [mu – X*sigma] (where sigma is the process standard deviation), and the USL is located at [mu + X*sigma].
A 6 Sigma process is one where, IF the process were centered, the LSL is exactly at [mu – 6*sigma] and the USL is exactly at [mu + 6*sigma].
This is the definition given by Mikel Harry in his article. It’s shown in Figure 8.
When the much debated 1.5*sigma shift occurs (here assuming a shift to the right for the sake of computation), the LSL & USL are no longer as described before; now they are at [mu – (X + 1.5)*sigma] & [mu + (X – 1.5)*sigma], respectively. For a 6 Sigma scenario this is [mu – 7.5*sigma] & [mu + 4.5*sigma], respectively. This scenario is shown in Figure 9 of the above-mentioned article.
In BOTH Figures 8 and 9 Mikel Harry recognizes the propensity for errors in the left tail of the distribution. BOTH tails must be considered.
For a centered distribution the left tail provides a defect probability of 0.001 PPM – this gives rise to the slightly famous 0.002 PPM defect probability for a centered 6 Sigma scenario.
For a 1.5*sigma shifted 6 Sigma scenario this left tail defect probability decreases to nearly zero. Mikel actually shows it as “~0” in Figure 9 – the important thing is he does recognize it exists. As the Sigma level decreases the probabilities in the left tail increase:
Sigma Level — Left Tail Probability
6 — ~0
5 — ~0
4 — ~0
3 — 3.4 PPM
2 — 233 PPM
1 — 6210 PPM
The exclusion of the left tail is where the published tables are incorrect.
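Those left-tail values come straight from the standard normal CDF; a quick sketch to verify them:

```python
from math import erf, sqrt

def phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1 + erf(z / sqrt(2)))

def left_tail_ppm(sigma_level, shift=1.5):
    # the far (left) tail of a distribution shifted 1.5*sigma to the right:
    # probability of falling below [mu - (X + 1.5)*sigma]
    return 1e6 * phi(-(sigma_level + shift))

# left_tail_ppm(3) is about 3.4 PPM, left_tail_ppm(2) about 233 PPM,
# left_tail_ppm(1) about 6210 PPM – matching the table above
```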
Ken
0
June 26, 2001 at 4:00 am #67283
Quite simply, we calculate opportunities as:
(# of parts) + (# of electrical connections)
By connection we mean just the soldered connections or wire bonds – not including those internal to parts.
We make fairly complicated products and lots of them. That is exactly why quality is so very critical. One critical defect and someone is likely to be walking home.
Typically we’ll have 1000 to 2500 opportunities per unit, depending on the model.
0
June 26, 2001 at 4:00 am #67285
Like I said, 10 PPM for us relates to roughly a thousand defects per month.
The difference between 0.06000 and 0.06001 you mention is 10 PPM.
Would you like to be one of those walking home after your car stops functioning because we, as a company, decided we “have more important things to do” than worry about those thousand defects?
That’s the whole point. Every defect hurts. Every defect must be measured, understood, and eliminated. It takes a lot of effort to permanently remove a defect, and people don’t want those efforts lost in the rounding – regardless of whether they’re at 2 Sigma or 6 Sigma.
And then again, I always come back to the need to provide accuracy when publishing tables. Leave the rounding to those who are closer to the consequences.
0
June 22, 2001 at 4:00 am #67194
But even rounding as you do, there are errors.
All your values for Sigma 1.8 and less are incorrect. They should be:
0.0 1,000,000
0.1 974,000
0.2 948,000
0.3 921,000
0.4 893,000
0.5 864,000
0.6 834,000
0.7 802,000
0.8 769,000
0.9 734,000
1.0 698,000
1.1 660,000
1.2 621,000
1.3 582,000
1.4 542,000
1.5 501,000
1.6 461,000
1.7 421,000
1.8 383,000
1.9 345,000
2.0 309,000
I don’t buy all this talk of “it just doesn’t matter”. These are published tables based upon the Normal distribution, which in most cases are NOT rounded.
I’d accept rounding as you do if they are correctly calculated.
And I don’t accept the “nothing is really normally distributed” excuse. Again, these are published tables based upon a specified, well characterized distribution.
In my business the total number of opportunities per product adds up to hundreds of millions of opportunities per month. With the volumes we run, rounding by just 10 PPM reflects a difference of thousands of defects. We may miscount a defect here or there, but not by thousands.
In my experience people work so hard to achieve improvements that they want to claim every decimal of Sigma coming to them.
Face it, Mikel got the table wrong over 10 years ago and none of his GE or post-GE disciples have bothered to correct it since then.
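Anyone who wants to check the corrected values can regenerate them from both tails of the shifted normal distribution; a short sketch:

```python
from math import erf, sqrt

def phi(z):
    # standard normal cumulative distribution function
    return 0.5 * (1 + erf(z / sqrt(2)))

def sigma_to_dpmo(sigma_level, shift=1.5):
    # both tails of a normal distribution shifted 1.5*sigma off center
    near_tail = 1 - phi(sigma_level - shift)
    far_tail = phi(-(sigma_level + shift))
    return 1e6 * (near_tail + far_tail)

# sigma_to_dpmo(1.5) is about 501,350 (rounds to 501,000)
# sigma_to_dpmo(2.0) is about 308,770 (rounds to 309,000)
```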
0
June 20, 2001 at 4:00 am #67143
I couldn’t have said it better! Very well put.
Ken K.
0
June 18, 2001 at 4:00 am #67107
While I cannot provide specifics, let me assure you that my Motorola business’ products have indeed maintained a Sigma level considerably greater than 5 Sigma for many, many years. Certainly the manufacturing processes involved in creating those products are even higher.
0
June 18, 2001 at 4:00 am #67109
One thing to keep in mind is that the “opportunities” count is supposed to be a measure of your process’ complexity. The whole reason this stuff was created was to allow us to compare products and processes of varying complexities.
In my business we have a wide variety of products, but the vast majority involve populating printed circuit boards with electronic components. In that situation we consider the number of opportunities very simply as the number of parts plus the number of connections (solder or otherwise).
If the process involves completing a form, then the opportunities is the number of potential defects per unit – maybe the number of fields in the form, but don’t start counting all the different kinds of defects.
0
June 14, 2001 at 4:00 am #67050
If you need to do real time SPC and data collection, AND you are currently using MINITAB, then may I suggest you look into the Hertzler Systems software. Their website is http://www.hertzler.com .
Not only do they have powerful SPC screening and data filtering/summarization tools, but they have the unique advantage of being (I think) the only real time SPC software that has partnered with Minitab Inc. to provide a transparent method of moving data from the Hertzler software into MINITAB. No manual conversions. No exporting. No importing. All you do is click on one command in the Hertzler software and up comes MINITAB with the data already formatted correctly.
If all you need to do is create electronic control charts, then you might be able to do that with less capable software, but if you need to have a system that can collect real time data, organize it, create control charts, filter it, and provide easy access to the data for future improvement projects, then look at the Hertzler Systems software.
P.S. – I don’t work for them – I’ve just been doing a lot of research on SPC/data collection software lately.
Ken K.
0
June 14, 2001 at 4:00 am #67051
1. define
2. measure
3. analyze
4. improve
5. control
6. see dead people (scary movie!)
0
June 12, 2001 at 4:00 am #66999
I wouldn’t go so far as to say that “all the causes of variation are known”. That is a pretty extreme statement.
I would tend to say a stable process is one that is comprised of mostly common cause variation, as opposed to special cause variation. As you hinted, that common cause variation will be comprised of a whole bunch of sources of variation; some will be knowable and some won’t.
The whole idea of process improvement is to understand many of those sources of variation and try to remove/control them.
0
June 10, 2001 at 4:00 am #66947
The “proper” method really depends on which distribution (that of X or Xbar) you are trying to convert to the standard normal distribution.
The general formula for the t-statistic is:
t = (constant – mean)/standard deviation
where mean is the estimate of the population mean of the distribution and standard deviation is an estimate of the population standard deviation. If the true population standard deviation were known, this would be a z-statistic.
If you are working with the base distribution of X, then the respective statistical estimates are:
Mean(X) = Xbar
StDev(X) = s
If you are working with the distribution of Xbar (the mean of X), then the respective statistical estimates are:
Mean(Xbar) = Xbar
StDev(Xbar) = s/sqrt(n)
where n is the number of observations used to calculate Xbar.
If you are characterizing a process, you’ll probably be working with the distribution of X itself, thus using just s in the t-statistic denominator.
If you are working with means, such as in a test of means, or a control chart in which the means of subsamples of size 5 are calculated, then you’ll probably be working with the distribution of Xbar, thus using s/sqrt(n) in the t-statistic denominator.
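The two cases can be written out in a few lines (a sketch using invented data):

```python
from math import sqrt
from statistics import mean, stdev

def t_individual(x, constant):
    # working with the distribution of X itself: denominator is s
    return (constant - mean(x)) / stdev(x)

def t_mean(x, constant):
    # working with the distribution of Xbar: denominator is s/sqrt(n)
    return (constant - mean(x)) / (stdev(x) / sqrt(len(x)))

data = [1, 2, 3, 4, 5]        # made-up observations
t1 = t_individual(data, 4)    # uses s
t2 = t_mean(data, 4)          # uses s/sqrt(5), so larger in magnitude
```

Same premise either way: (constant minus mean) over the appropriate standard deviation.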
Don’t get too caught up on the formulas without remembering the basic premise behind them – which are usually pretty simple.
By the way, I always recommend people take a look at the video series Standard Deviants Statistics – an excellent introduction to the basics of statistics. It is a fun and relatively short video series on 3 tapes, available from amazon.com, among other places.
The tapes don’t take the place of classroom learning, but they can make the classroom training much more productive – and isn’t that what this stuff is all about?!
Ken K.
0 