**Hypothesis testing** and selecting the correct test can be challenging, especially in the learning stages.

In hypothesis testing, a sample represents a subset of the population and is used to infer conclusions about the population. There is always a risk (known as alpha-risk and beta-risk) that the selected sample is not representative of the population, leading to an incorrect conclusion.

Assumptions are made that allow estimation of the probability (known as the p-value) of reaching a wrong conclusion.

Statistical software has simplified the work to the point where it is tempting to overlook how these tests actually function.

CAUTION: A statistical difference doesn't always imply a practical difference; numbers don't always reflect reality.

**Parametric Tests are used when:**

- The data are normally distributed
- The distribution is non-normal but can be transformed to normal
- The sample size is large enough to satisfy the Central Limit Theorem
- The data are interval or ratio data

**Nonparametric tests are used when:**

- The above criteria are not met or the distribution is unknown
- The data are nominal or ordinal
- Nonparametric tests can also analyze interval or ratio data

In general, the power of a standard parametric test is greater than the power of its nonparametric alternative. As the sample size becomes very large, the power of the nonparametric test approaches that of its parametric alternative.

Nonparametric tests also assume that the underlying distributions are symmetric but not necessarily normal. These test statistics are "distribution-free": no assumptions are made about the population parameters.

When the choice exists between a parametric and a nonparametric test and the distribution is fairly symmetric, the standard parametric test is the better choice.

A few example nonparametric tests are the Spearman rank correlation coefficient, Mann-Whitney U, Sign, Wilcoxon rank sum, Levene's, and Kruskal-Wallis H tests.
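As a sketch of this choice in practice, the following hypothetical Python example (all data values invented for illustration) runs a parametric 2-sample t-test and its nonparametric Mann-Whitney U alternative on the same data with SciPy:

```python
# Hypothetical comparison of a parametric test and its nonparametric
# alternative on the same two samples (values invented for illustration).
from scipy import stats

group_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7]
group_b = [5.6, 5.9, 5.5, 6.0, 5.7, 5.8, 5.4, 6.1]

# Parametric: 2-sample t-test (assumes roughly normal data)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric alternative: Mann-Whitney U (no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p-value:       {t_p:.4f}")
print(f"Mann-Whitney p-value: {u_p:.4f}")
```

With well-separated, roughly symmetric samples like these, both tests point to the same conclusion; the parametric test simply tends to reach it with a slightly smaller sample.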

**Tests for Variances**

**For 1 sample:** Chi-square

**For 2 samples:** F-Test. The F-test assumes the data are normal; Levene's test is an option for comparing the variances of nonparametric data.

**For >2 samples:** Bartlett's Test for parametric data and Levene's Test for nonparametric data
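As a hedged sketch of the >2-sample case, the hypothetical example below (group values invented) compares three group variances with both Bartlett's test and Levene's test in SciPy:

```python
# Hypothetical comparison of the variances of three groups using
# Bartlett's test (assumes normality) and Levene's test (more robust).
from scipy import stats

g1 = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3]   # small spread
g2 = [10.0, 11.1, 9.0, 10.6, 9.4, 10.9, 9.2, 10.8]    # large spread
g3 = [10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.8, 10.1]   # small spread

b_stat, b_p = stats.bartlett(g1, g2, g3)   # parametric choice
l_stat, l_p = stats.levene(g1, g2, g3)     # nonparametric-friendly choice

print(f"Bartlett p-value: {b_p:.4f}")
print(f"Levene p-value:   {l_p:.4f}")
```

A small p-value from either test suggests that at least one group's variance differs from the others.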

**Tests for Proportions**

**For 1 sample:** One Proportion Test

**For 2 samples:** Two Proportion Test

**For >2 samples:** Chi-square
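For the 2-sample case, a pooled two-proportion z-test can be computed directly from the standard formula. The counts below are invented for illustration:

```python
# Hypothetical pooled two-proportion z-test computed from the standard
# formula (defective counts and sample sizes are invented).
from math import sqrt
from scipy.stats import norm

x1, n1 = 45, 500   # defectives and sample size, process A
x2, n2 = 70, 500   # defectives and sample size, process B

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0: p1 = p2
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))             # two-tailed p-value

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```

Under the pooled assumption, the two samples are combined into one overall proportion to estimate the standard error of the difference.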

**A test for Proportions** is used when each observation can take only one of two values: conforming (not defective) or nonconforming (defective).

**A test for Means** is used when each observation can take many values, potentially infinitely many (at extreme decimals) within a range of values.

- Define the Problem
- State the Objectives
- Establish the Hypotheses (left-tailed, right-tailed, or two-tailed test)
- State the Null Hypothesis (H₀)
- State the Alternative Hypothesis (Hₐ)
- Select the appropriate statistical test
- State the alpha-risk (α) level
- State the beta-risk (β) level
- Establish the Effect Size
- Create Sampling Plan, determine sample size
- Gather samples
- Collect and record data
- Calculate the test statistic
- Determine the p-value

**If p-value < α, reject H₀ and accept Hₐ**

**If p-value ≥ α, fail to reject the Null, H₀**
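A minimal sketch of this decision rule, using a hypothetical 1-sample t-test against a target mean with SciPy (alpha, target, and sample values all invented):

```python
# Hypothetical walk-through of the p-value decision rule using a
# 1-sample t-test against a target mean of 12.0 (data invented).
from scipy import stats

alpha = 0.05
target = 12.0
sample = [12.3, 12.1, 12.4, 12.2, 12.5, 12.3, 12.6, 12.2, 12.4, 12.1]

t_stat, p_value = stats.ttest_1samp(sample, popmean=target)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0, accept HA")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```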

Try to re-run the test (if practical) to further confirm the results. The next step is to take the statistical results and translate them into a practical solution.

It is also possible to determine the critical value of the test and compare the calculated test statistic against it. Either way, the p-value approach and the critical value approach should provide the same result.

The **statistical power** is 1 - β. Usually β is between 10% and 20%; therefore, the power is typically 80-90%.

This is the likelihood of finding an effect when there is actually an effect. This is the chance of rejecting the null hypothesis when the null hypothesis is actually false.
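As a hedged sketch, the power of a two-sided one-sample z-test can be computed directly from the normal distribution. The alpha, shift, sigma, and sample size below are invented inputs:

```python
# Hypothetical power calculation for a two-sided one-sample z-test
# (alpha, delta, sigma, and n are invented illustration values).
from math import sqrt
from scipy.stats import norm

alpha = 0.05     # alpha-risk
delta = 0.5      # true shift to detect, in the units of the data
sigma = 1.0      # known standard deviation
n = 32           # sample size

z_crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
shift = delta / (sigma / sqrt(n))         # shift in standard-error units
power = norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

print(f"power = {power:.3f}, beta = {1 - power:.3f}")
```

With these inputs the power works out to roughly 0.8, which is why n ≈ 32 is a common rule-of-thumb sample size for detecting a half-sigma shift.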

A minimum detectable difference, δ, can also be specified. This detectable difference is used to examine a desired difference among:

- A target (or given) value and a sample mean - using a 1-sample t-test
- Two sample means - using a paired t or 2-sample t-test
- More than two sample means - using ANOVA
- A target (or given) value and a sample proportion
- Two proportions

The minimum detectable difference desired relative to the standard deviation is the **sensitivity** of the test. It is the size of the difference expressed in standard deviations.

This is similar to the Coefficient of Variation, in which a magnitude is expressed relative to another quantity. The numerator by itself doesn't provide much information; it is only when it (or δ) is expressed in terms of standard deviations that two or more values can be compared meaningfully.

To simplify the testing process, break down the process into **4** small steps.

Create a table similar to the one below and begin by completing the top two quadrants. The bottom-left quadrant contains the statistical results from the test; converting those numbers into meaning gives the practical result, which belongs in the bottom-right quadrant.

**Null Hypothesis** characteristics:

- Denoted as "H₀"
- Assumed to be true until proven otherwise
- Represents "no difference" or "no change"
- Factor is not statistically significant
- Population follows (or can be assumed to follow) a normal distribution
- Variation is from random, inherent sources

This is the hypothesis or claim being tested. The null hypothesis is either "rejected" or "failed to reject". Rejecting the null hypothesis means accepting the alternative hypothesis.

The null hypothesis is considered valid until it is proven wrong. The burden of proof rests with the alternative hypothesis. This is done by collecting data and using statistics with a specified amount of certainty. More data usually equates to more evidence and reduces the risk of an improper decision.

The null hypothesis is never "accepted"; it can only be "failed to reject" due to lack of evidence, just as a defendant is not proven guilty due to lack of evidence. The defendant is not necessarily innocent but is determined (based on the evidence) "not guilty".

When there is simply not enough evidence, the decision is that no change exists: the defendant started the trial not guilty and leaves the trial not guilty.

**Alternative Hypothesis** characteristics:

- Denoted as "Hₐ"
- Has the burden of proof
- Represents "a difference" or "a change"
- Factor is statistically significant
- Population does not follow a normal distribution
- Variation is from non-random sources

**The shape of a distribution is normal**

- H₀: Data is normal
- Hₐ: Data is not normal
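This pair of hypotheses can be tested with, for example, the Shapiro-Wilk test in SciPy. The sketch below uses simulated data (a seeded random normal sample) purely for illustration:

```python
# Hypothetical check of H0: data is normal, using the Shapiro-Wilk test
# on simulated data (the sample itself is generated, not real).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.normal(loc=10.0, scale=2.0, size=50)   # simulated normal data

w_stat, p_value = stats.shapiro(data)

if p_value < 0.05:
    print("Reject H0: data is not normal")
else:
    print("Fail to reject H0: no evidence against normality")
```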

**There is a relationship between sales of a toy and placing it on the ends of aisles**

- H₀: Slope = 0
- Hₐ: Slope ≠ 0
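A regression p-value tests exactly this slope hypothesis. The hypothetical sketch below (weekly display counts and sales figures invented) uses `scipy.stats.linregress`, whose reported p-value is for H₀: slope = 0:

```python
# Hypothetical test of H0: slope = 0 for the end-of-aisle example
# using scipy.stats.linregress (all numbers invented).
from scipy import stats

displays = [0, 1, 2, 3, 4, 5, 6, 7]          # end-of-aisle displays per week
sales    = [20, 24, 27, 31, 33, 38, 40, 45]  # toy units sold per week

result = stats.linregress(displays, sales)

print(f"slope = {result.slope:.2f}, p-value = {result.pvalue:.5f}")
```

A small p-value rejects H₀ and supports a real relationship between displays and sales.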

**Supplier ABC’s Part # 34565 weight is not the same as Supplier XYZ’s**

- H₀: Mean ABC = Mean XYZ
- Hₐ: Mean ABC ≠ Mean XYZ

**People that eat carrots have better eyesight**

- H₀: Eating carrots and eyesight are independent
- Hₐ: Eating carrots and eyesight are dependent
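An independence hypothesis like this is typically tested with a chi-square test on a contingency table. The counts below are invented for illustration:

```python
# Hypothetical chi-square test of independence for the carrots/eyesight
# example, using an invented 2x2 contingency table of counts.
from scipy.stats import chi2_contingency

#                good eyesight   poor eyesight
table = [[70, 30],    # eats carrots
         [45, 55]]    # does not eat carrots

chi2, p_value, dof, expected = chi2_contingency(table)

print(f"chi2 = {chi2:.3f}, dof = {dof}, p-value = {p_value:.4f}")
```

A small p-value rejects independence, i.e. supports Hₐ that the two factors are dependent.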

Running more tests allows you to home in on the differences and extract more information, which can lead to more effective improvements. There are ways to improve the accuracy of results, such as being more specific with the testing.

For example, testing for specific numerical differences, or looking for differences (or the lack of them) within a gender, a region, an industry, an age group, a religion, an affiliation, or a combination of them.

Detecting a change between one large group of people and another is helpful, but what about more detail?

Therefore, if possible, test the data by gender, by age group, by hair color, by religion, by political party affiliation, by region, etc. You will begin to identify more meaningful information and generate new discussion.

If the problem or the test is asking for a directional inequality, such as *greater than* or *less than*, then it is a one-tailed test.

- If *greater than*, it is a one-tailed test in the right tail.
- If *less than*, it is a one-tailed test in the left tail.

If there is no specified direction, such as *not equal*, then it is a two-tailed test where both the right tail and left tail are under evaluation.
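In SciPy this tail choice maps to the `alternative` argument (available since SciPy 1.6). The hypothetical sketch below (sample and target invented) runs the same 1-sample t-test three ways:

```python
# Hypothetical illustration of tail choice via the `alternative`
# argument of scipy.stats.ttest_1samp (sample and target invented).
from scipy import stats

sample = [10.4, 10.6, 10.3, 10.7, 10.5, 10.8, 10.4, 10.6]
target = 10.0

_, p_two   = stats.ttest_1samp(sample, target, alternative="two-sided")
_, p_right = stats.ttest_1samp(sample, target, alternative="greater")
_, p_left  = stats.ttest_1samp(sample, target, alternative="less")

print(f"two-sided: {p_two:.5f}  right: {p_right:.5f}  left: {p_left:.5f}")
```

Since the sample mean sits above the target, the right-tailed p-value is half the two-sided one and the left-tailed p-value is near 1.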

A "pooled" t-test is used to compare the means of two independent samples when their standard deviations are assumed equal. An "unpooled" t-test assumes the standard deviations are not equal. The "pooled" t-test pools the sample standard deviations (not the sample means).

A "pooled" Proportions test assumes both proportions are equal; an "unpooled" test assumes they are not.

"Pooled" and "unpooled" use different formulas, so the results will differ.

If unsure which to assume, run the test both ways and understand if the difference in results makes a material impact on your project. If the worst case result still provides an acceptable outcome, then this assumption may not be worth haggling over.
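Running it both ways is straightforward with SciPy's `equal_var` flag, which switches between the pooled test and the unpooled (Welch) test. The data below is invented for illustration:

```python
# Hypothetical run of a 2-sample t-test both ways: pooled (equal_var=True)
# vs. unpooled/Welch (equal_var=False). Data values are invented.
from scipy import stats

line_a = [8.1, 8.3, 8.2, 8.4, 8.2, 8.3, 8.1, 8.2]   # small spread
line_b = [8.9, 7.9, 9.4, 7.6, 9.1, 8.0, 9.6, 7.7]   # larger spread

_, p_pooled   = stats.ttest_ind(line_a, line_b, equal_var=True)   # pooled
_, p_unpooled = stats.ttest_ind(line_a, line_b, equal_var=False)  # Welch

print(f"pooled p = {p_pooled:.4f}, unpooled p = {p_unpooled:.4f}")
```

If both p-values lead to the same decision, the pooled/unpooled assumption is not worth haggling over for that data set.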

Several hypothesis test flowcharts are available to subscribers at no additional charge along with other free downloads and tools to help a Green Belt or Black Belt. A couple generic visual aids are shown below.

This Green Belt training program contains a complete module with lessons and detail about commonly used hypothesis tests. This is often a new area of study for those learning about the Six Sigma methodology and represents a significant challenge on certification exams and in real-life application.
