Hypothesis testing and selecting the correct test can be challenging, especially in the learning stages.
In hypothesis testing, samples are taken to represent a subset of the population, since the entire population can rarely be studied. From these samples, hypothesis testing is used to infer conclusions about the population. There is always a chance, or risk (known as alpha-risk and beta-risk), that the selected sample is not representative of the population and leads to an incorrect conclusion.
Assumptions are made that allow the estimation of the probability (known as the p-value) of reaching a wrong conclusion. Statistical software has simplified the work to the point where it is easy to overlook what these tests actually do.
CAUTION: A statistical difference doesn't always imply a practical difference; numbers don't always reflect reality.
Parametric tests are used when the underlying distribution can be assumed to follow a known form (typically normal).
Nonparametric tests are used when no such assumption about the distribution can be made.
In general, the power of a standard parametric test is greater than the power of its nonparametric alternative. As the sample size becomes very large, the power of the nonparametric test approaches that of its parametric alternative.
Some nonparametric tests do assume that the underlying distributions are symmetric, but not necessarily normal. In general, though, these test statistics are "distribution-free": no assumptions are made about the population parameters.
When the choice exists between parametric and nonparametric tests, and the distribution is fairly symmetric, the standard parametric tests are the better choice over the nonparametric alternatives.
A few example nonparametric tests are Spearman rank correlation coefficient, Mann-Whitney U, Sign, Wilcoxon rank sum, Levene's, and Kruskal-Wallis H.
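To make the parametric/nonparametric contrast concrete, here is a minimal sketch comparing a two-sample t-test with its nonparametric alternative, the Mann-Whitney U test, assuming scipy is available. The sample values are made up for illustration only.

```python
# Sketch: a parametric test (two-sample t-test) next to its nonparametric
# alternative (Mann-Whitney U) on the same illustrative data.
from scipy import stats

group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.4, 5.1]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.6, 6.0, 5.8]

# Parametric: two-sample t-test (assumes roughly normal data)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric alternative: Mann-Whitney U (no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p-value: {t_p:.4f}")
print(f"Mann-Whitney p-value: {u_p:.4f}")
```

With clearly separated groups like these, both tests detect the difference; with skewed or outlier-heavy data, the nonparametric test is the safer choice.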
For 1 sample: Chi-square
For 2 samples: F-Test, or ANOVA for more than 2 variances. The F-test assumes the data are normal.
Levene's test is an option to compare variances of nonparametric data.
For >2 samples: use Bartlett's Test for parametric data and Levene's Test for nonparametric data.
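The variance tests above can be sketched in a few lines, assuming scipy is available. The measurement data below are hypothetical; Bartlett's test assumes normality, while Levene's test is the robust alternative when that assumption is doubtful.

```python
# Sketch: comparing the variances of more than 2 samples.
from scipy import stats

machine1 = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
machine2 = [10.5, 9.2, 11.0, 9.0, 10.8, 9.5]   # noticeably more spread
machine3 = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0]

# Bartlett's test: parametric, sensitive to departures from normality
b_stat, b_p = stats.bartlett(machine1, machine2, machine3)

# Levene's test: robust alternative when normality is doubtful
l_stat, l_p = stats.levene(machine1, machine2, machine3)
```

A small p-value from either test suggests the variances are not all equal; here machine2's extra spread drives the result.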
For 1 sample: One Proportion Test
For 2 samples: Two Proportion Test
For >2 samples: Chi square
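A chi-square test for proportions across more than 2 samples can be sketched as follows, assuming scipy is available. The conforming/nonconforming counts are hypothetical.

```python
# Sketch: comparing proportions of nonconforming units across three lines
# with a chi-square test of independence.
from scipy.stats import chi2_contingency

# Rows: conforming, nonconforming; columns: line A, line B, line C
table = [[388, 390, 350],
         [12, 10, 50]]

chi2, p_value, dof, expected = chi2_contingency(table)
```

With a 2x3 table there are (2-1)x(3-1) = 2 degrees of freedom; a small p-value suggests the defect proportions are not the same across lines.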
A test for Proportions is used when each observation can only be conforming (not defective) or nonconforming (defective). Each response is one of those two options.
A test for Means is used when each observation can take on many possible values within a range (potentially an infinite number, at finer decimal resolution); that is, the data are continuous.
If p-value < α, reject H0 and accept HA
If p-value > α, fail to reject the null hypothesis, H0
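The decision rule above can be sketched with a one-sample t-test, assuming scipy is available. The sample data and the hypothesized mean are made up for illustration.

```python
# Minimal sketch of the p-value decision rule.
from scipy import stats

alpha = 0.05
sample = [5.2, 5.4, 5.1, 5.3, 5.5, 5.2, 5.4, 5.3]
hypothesized_mean = 5.0   # H0: mu = 5.0, HA: mu != 5.0

t_stat, p_value = stats.ttest_1samp(sample, popmean=hypothesized_mean)

if p_value < alpha:
    decision = "Reject H0 in favor of HA"
else:
    decision = "Fail to reject H0"
```

Here the sample mean sits well above 5.0 relative to its variation, so the p-value falls below α and H0 is rejected.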
Try to re-run the test (if practical) to further confirm the results. The next step is to take the statistical results and translate them into a practical conclusion.
It is also possible to determine the critical value of the test and use the calculated test statistic to determine the result. Either way, the p-value approach and the critical value approach should provide the same conclusion.
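A quick sketch of that equivalence, assuming scipy is available and using illustrative data: the critical value comes from the t distribution, and the decision it produces matches the p-value decision.

```python
# Sketch: p-value approach vs critical-value approach for a two-tailed
# one-sample t-test; both must yield the same decision.
from scipy import stats

alpha = 0.05
sample = [5.2, 5.4, 5.1, 5.3, 5.5, 5.2, 5.4, 5.3]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

# Two-tailed critical value from the t distribution with n-1 df
df = len(sample) - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)

reject_by_p = p_value < alpha
reject_by_critical = abs(t_stat) > t_crit
```

Comparing `abs(t_stat)` to `t_crit` and comparing `p_value` to `alpha` are two views of the same boundary on the distribution.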
The statistical power is 1 - β. Usually β is between 10% and 20%; therefore, the power is typically 80% to 90%.
This is the likelihood of finding an effect when there is actually an effect. This is the chance of rejecting the null hypothesis when the null hypothesis is actually false.
A minimum detectable difference, δ, can also be specified. This is the smallest difference of practical interest that the test is designed to detect among the groups or parameters being compared.
The minimum detectable difference relative to the standard deviation is the sensitivity of the test: it is the size of the difference expressed in standard deviations.
This is similar to the Coefficient of Variation, where the standard deviation is expressed relative to the mean. The numerator alone doesn't provide much information; it is only when it (or δ) is expressed in terms of standard deviations that two or more values can be compared with more meaning.
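The relationship between α, power, and sensitivity (δ/σ) can be sketched with the common normal-approximation sample-size formula for a two-sample comparison, n ≈ 2((z_{1-α/2} + z_{power}) / (δ/σ))². The inputs below are illustrative, and scipy is assumed for the normal quantiles.

```python
# Sketch: approximate sample size per group needed to detect a minimum
# difference delta with given alpha-risk and power.
from scipy.stats import norm

alpha = 0.05      # alpha-risk
power = 0.80      # 1 - beta (so beta = 0.20)
delta = 0.5       # minimum detectable difference
sigma = 1.0       # estimated standard deviation

sensitivity = delta / sigma   # difference expressed in standard deviations
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / sensitivity) ** 2
```

For δ/σ = 0.5 this gives roughly 63 per group, close to the textbook value of about 64 from the exact t-based calculation; smaller sensitivities demand much larger samples.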
To simplify the testing process, break it down into four small steps.
Create a table similar to the one below and begin by completing the top two quadrants. The bottom-left quadrant contains the statistical results from the test; converting those numbers into meaning gives the practical result, which belongs in the bottom-right quadrant.
Null Hypothesis characteristics:
This is the hypothesis, or claim, being tested. The null hypothesis is either "rejected" or "failed to be rejected". Rejecting the null hypothesis means accepting the alternative hypothesis.
The null hypothesis is presumed valid until it is proven wrong. The burden of proof rests with the alternative hypothesis; it is met by collecting data and using statistics with a specified amount of certainty. More samples of data usually equate to more evidence and reduce the risk of an improper decision.
The null hypothesis is never accepted; it can only be "failed to reject" due to lack of evidence, just as a defendant is not proven guilty due to lack of evidence. The defendant is not necessarily innocent but is determined, based on the evidence, to be "not guilty".
There is simply not enough evidence, so the decision is that no change exists: the defendant entered the trial not guilty and leaves the trial not guilty.
Alternative Hypothesis characteristics:
The shape of a distribution is normal
There is a relationship between sales of a toy and placing it on the ends of aisles
Supplier ABC’s Part # 34565 weight is not the same as Supplier XYZ’s
People that eat carrots have better eyesight
Running more tests allows you to home in on the differences and draw more conclusions that can lead to more effective improvements. There are ways to improve the precision of the results, such as being more specific with the testing.
For example, testing for specific numerical differences, or looking for differences (or the lack of them) within a gender, a region, an industry, an age group, a religion, an affiliation, or a combination of these.
Detecting a difference between one large group and another is helpful, but what about more detail?
Therefore, if possible, test the data by gender, by age group, by hair color, by religion, by political party affiliation, by region, etc. You will begin to identify more meaningful information and generate new discussion.
If the problem or the test is asking for a directional inequality, such as greater than or less than, then it is a one-tailed test.
If there is no specified direction, such as not equal, then it is a two-tailed test where both the right tail and left tail are under evaluation.
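The one-tailed versus two-tailed distinction maps directly onto the test setup, as this sketch shows, assuming scipy 1.6 or later (for the `alternative` keyword) and using illustrative data.

```python
# Sketch: the same sample tested two-tailed ("not equal") and one-tailed
# ("less than") against a hypothesized mean of 5.0.
from scipy import stats

sample = [4.6, 4.8, 4.7, 4.5, 4.9, 4.6, 4.7, 4.8]

# Two-tailed: HA is "mean != 5.0"; alpha is split between both tails
_, p_two = stats.ttest_1samp(sample, popmean=5.0, alternative="two-sided")

# One-tailed: HA is "mean < 5.0"; all of alpha sits in the left tail
_, p_one = stats.ttest_1samp(sample, popmean=5.0, alternative="less")
```

Because the observed mean falls on the hypothesized side of the one-tailed alternative, the one-tailed p-value here is half the two-tailed p-value.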
A "pooled" t-test is used to compare the means of two independent samples and assumes their standard deviations are equal. An "unpooled" t-test assumes the standard deviations are not equal. The "pooled" t-test pools the sample standard deviations (not the sample means).
A "pooled" Proportions test assumes both Proportions are equal and "unpooled" assumes both are not the same.
"Pooled" and "unpooled" tests use different formulas, so the results will differ.
If unsure which to assume, run the test both ways and understand if the difference in results makes a material impact on your project. If the worst case result still provides an acceptable outcome, then this assumption may not be worth haggling over.
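Running the test both ways is cheap in software, as this sketch shows, assuming scipy is available; the pooled test is the classic two-sample t-test, and the unpooled test is Welch's t-test. The data are illustrative.

```python
# Sketch: pooled vs unpooled (Welch's) two-sample t-tests on the same data.
from scipy import stats

line_a = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
line_b = [10.6, 10.9, 10.4, 11.0, 10.7, 10.8]

# Pooled: assumes equal standard deviations
t_pooled, p_pooled = stats.ttest_ind(line_a, line_b, equal_var=True)

# Unpooled (Welch's): does not assume equal standard deviations
t_welch, p_welch = stats.ttest_ind(line_a, line_b, equal_var=False)
```

If both p-values lead to the same decision, as they do here, the equal-variance assumption is not worth haggling over for this comparison.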
The test statistic is a value (z, t, F, etc.) from the sample, or observed, data from the actual experiment (not the area under a curve).
It is a measure of how far x (observed) is from the hypothesized value of μ, and it relates directly to the p-value: the p-value represents the area under the curve beyond (below and/or above) the test statistic value.
In the picture above, the test statistic is less than the critical value, and the area under the curve beyond it is less than the area beyond the alpha-risk (critical) value.
If the alpha-risk was chosen to be 0.05, that means 5% of the total area of the curve lies beyond the critical value in this one-tailed picture above (not to scale).
In this case the p-value will be < 0.05 (less than 5% of the total area under the curve); thus, reject the null hypothesis and infer that the new mean is less than the original mean.
Several hypothesis test flowcharts are available to subscribers at no additional charge, along with other free downloads and tools to help a Green Belt or Black Belt. A couple of generic visual aids are shown below.
This Green Belt training program contains a complete module with lessons and detail about commonly used hypothesis tests. This is often a new area of study for those learning about the Six Sigma methodology and represents a significant challenge on certification exams and in real-life application.