Gathering and analyzing samples, rather than the entire population, is often the most practical and cost-effective way to make inferences about that population. A Six Sigma project manager should understand the various techniques for gathering samples.
When there is an attempt to measure the entire population, this is referred to as a census. A census can be costly, time-consuming, and, when testing destroys the part, impractical.
There are several methods of sampling, and it is important to choose the plan that yields the most information about the entire population. A few common methods are listed below and explained further down the page.
Often when sample sizes are large enough and the data is normally distributed, the Central Limit Theorem applies, opening up the use of several simple parametric hypothesis tests for statistical analysis in the ANALYZE phase.
The size of the sample must be consciously decided on by the Six Sigma Project Manager based on the allowable alpha- and beta-risks (statistical significance and power) and the magnitude of shift that must be observed for a change of practical significance.
As the sample size increases, the estimate of the true population parameter improves, and smaller differences can be detected more reliably.
This seems logical: the closer you get to analyzing the entire population, the more accurate your inferences about that population will be.
It is very important for the GB/BB to obtain the proper number of samples to achieve adequate test power, ensure the assumptions are met, and assess normality, while still minimizing resources and the destruction of parts.
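As a rough sketch of how the alpha-risk, beta-risk, and the shift of interest combine to set a sample size, the standard two-sided z-test formula can be computed with Python's standard library. This assumes sigma is known; the numbers below are hypothetical, not from the text.

```python
import math
from statistics import NormalDist

def sample_size_mean(alpha, beta, sigma, delta):
    """Minimum n for a two-sided z-test to detect a mean shift of delta
    with significance alpha and power 1 - beta (sigma assumed known)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # alpha-risk critical value
    z_b = NormalDist().inv_cdf(1 - beta)       # beta-risk critical value
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# Detect a 0.5-sigma shift at alpha = 0.05 with 80% power
print(sample_size_mean(0.05, 0.20, 1.0, 0.5))  # -> 32
```

Note how tightening the beta-risk (raising power) or seeking a smaller shift drives the required sample size up, which is exactly the resource trade-off described above.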
Random Sampling Methods
If you were trying to evaluate the average length of every rainbow trout in the freshwater lakes of Minnesota, measuring every fish would be neither practical nor affordable. Instead, a sampling plan would be devised to gather a portion of the trout and study them. From this, inferences about the population can be made with specified levels of confidence.
Nonrandom Sampling
These techniques are not preferred due to the additional risk of sampling error they introduce. This error cannot be calculated, and the results should not be used to make inferences about the population from which the sample was selected. Therefore, we will not discuss them further within this website.
The denominator in the standard deviation for a population is N; the denominator for a sample is n-1. The "n-1" is an unbiasing factor (Bessel's correction), and as the sample size n grows large, dividing by n-1 gives nearly the same result as dividing by N.
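A quick sketch of the two denominators, using Python's standard library and made-up measurements:

```python
import statistics

data = [4.1, 3.9, 4.3, 4.0, 4.2]  # hypothetical measurements

# Population standard deviation: sum of squared deviations divided by N
pop_sd = statistics.pstdev(data)

# Sample standard deviation: divided by n - 1 (Bessel's correction)
samp_sd = statistics.stdev(data)

print(pop_sd, samp_sd)  # the sample estimate is slightly larger
```

The sample version is always a bit larger for the same data, compensating for the fact that a sample tends to understate the spread of the full population.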
Understand the results of the statistical program or calculator being used.
The difference between long-term and short-term samples
Descriptive measures that describe a POPULATION are called PARAMETERS and are usually denoted with Greek letters. A population parameter is the true value of a population attribute.
Descriptive measures that describe a SAMPLE are called STATISTICS. A sample statistic is an estimate (based on the sampled data) of a population parameter. Therefore, the sampling method is critical to inferring the most accurate information about a population.
The quality of a sample statistic is strongly affected by how the data is gathered and how well it represents (accuracy and precision) the population.
Simple Random Sampling
Select this plan if every item in the population has an equal chance of being selected and there are no known subgroups within the population. The picture below assumes all samples (x's) are equivalent and that selecting any of them from the entire population will represent and behave similarly to the rest of the population.
Statistical analysis is only valid when a random sampling approach is used. One example of this method is picking names out of a hat. If all the names in the hat are on the exact same medium (none are heavier, bigger, etc.) and each name is entered the same number of times, then each name has the same chance of being selected. This is analogous to the lottery approach.
Stratified Sampling
This plan divides the population into subgroups of interest and samples either sequentially or randomly within each subgroup. This is important to ensure there is representation from all stratifications in the population.
A subgroup may be data taken within a certain temperature range, on a specific shift, under a certain pressure, from different machine groups, at slower versus higher speeds, or under other differing conditions.
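As a minimal sketch, assuming three hypothetical shifts as the stratification variable, stratified sampling can be expressed as a random draw within each subgroup:

```python
import random

random.seed(7)  # reproducible draw for illustration

# Hypothetical measurements keyed by shift (the stratification variable)
strata = {
    "shift_1": [10.2, 10.4, 10.1, 10.3, 10.5, 10.2],
    "shift_2": [10.8, 10.9, 11.0, 10.7, 10.8, 10.9],
    "shift_3": [10.0, 9.9, 10.1, 10.0, 9.8, 10.1],
}

# Draw the same number of random samples from every stratum so that
# each shift is represented in the final sample
stratified_sample = {s: random.sample(vals, 3) for s, vals in strata.items()}
print(stratified_sample)
```

A simple random draw from the pooled data could, by chance, miss one shift entirely; sampling within each stratum guarantees representation from all of them.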
Sequential Sampling
Acquiring data at specified intervals, such as every hour, every 5th form, or on a particular shift. Ensure the interval does not introduce a pattern that biases the data toward a specific person, machine, or part each time a data point is collected.
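The "every 5th form" case reduces to list slicing; the stream of 50 numbered units below is hypothetical:

```python
# Hypothetical stream of 50 consecutive units; inspect every 5th one
units = list(range(1, 51))
interval = 5

# Slice from the (interval - 1) index, stepping by the interval
systematic_sample = units[interval - 1::interval]  # units 5, 10, ..., 50
print(systematic_sample)
```

If the process itself cycles at a related interval (for example, five parallel fixtures feeding the line), this plan would repeatedly hit the same fixture, which is exactly the bias warned about above.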
An appropriate and disciplined plan needs to be clearly understood by those collecting the data. Since the collection process can be expensive and time-consuming, bias may be introduced by people making educated guesses, predicting data points, or collecting only the data that is convenient and simple.
There are also guidelines for the quantity of samples needed for various types of data. The more data you can obtain, the more likely it is to represent the performance of the entire population (the long-term performance of the process).
When describing and presenting the data, inform the audience and record the method used to collect the data on the Data Collection Plan.