The **p-value** (probability value) is the probability of obtaining a result at least as extreme as the one observed when only random, inherent sources of variation are at work (that is, when the null hypothesis is true). It also represents the chance of being wrong when deciding to reject the null hypothesis; in other words, the probability of making a Type I error.

It is one of the quantitative values used to make a decision regarding the validity of the null hypothesis based on the testing results.

The lower the p-value, the lower the risk of being wrong when the decision is made to reject the null and infer the alternative hypothesis (and vice versa).
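
For example, here is a minimal sketch in Python (assuming SciPy is available) that computes the two-tailed p-value for a one-sample Z test; all of the sample values are hypothetical:

```python
from scipy import stats
import math

sample_mean = 102.0   # hypothetical sample mean
null_mean = 100.0     # hypothesized population mean under Ho
pop_sd = 5.0          # known population standard deviation
n = 40                # sample size

# Calculated Z test statistic
z = (sample_mean - null_mean) / (pop_sd / math.sqrt(n))

# Two-tailed p-value: probability of a result at least this extreme
# when the null hypothesis is true
p_value = 2 * stats.norm.sf(abs(z))

print(f"Z = {z:.3f}, p-value = {p_value:.4f}")
```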

Most often, three test statistics are used in hypothesis testing: Z, t, and F.

The calculated test statistic (depending on which comparison test is being used) is compared to the critical statistic of that same distribution (such as Z, t, or F) to decide whether to "reject" or "fail to reject" the null hypothesis.

The formulas for each test statistic are shown in the hypothesis testing section, where there is more information on using the p-value in statistical tests.

**IF** the calculated test statistic is **greater than** the critical test statistic (found from a table), **THEN** the decision is to __reject the null hypothesis__.

Using this method requires calculating the degrees of freedom, which requires only knowing the sample size(s).
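
For illustration, here is a minimal sketch of the critical-value method in Python (assuming SciPy is available; the data values are hypothetical):

```python
from scipy import stats

before = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
after  = [11.2, 11.5, 11.0, 11.4, 11.3, 11.6]

alpha = 0.05

# Calculated t statistic for a two-sample (pooled) t-test
t_calc, _ = stats.ttest_ind(before, after)

# Degrees of freedom for the pooled test: n1 + n2 - 2
df = len(before) + len(after) - 2

# Critical t statistic for a two-tailed test (the same value found in a t table)
t_crit = stats.t.ppf(1 - alpha / 2, df)

if abs(t_calc) > t_crit:
    print(f"|t| = {abs(t_calc):.3f} > {t_crit:.3f}: reject the null hypothesis")
else:
    print(f"|t| = {abs(t_calc):.3f} <= {t_crit:.3f}: fail to reject the null hypothesis")
```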

Another method of making a decision in a hypothesis test is to compare the calculated p-value to the alpha risk.

**If the p-value is > the alpha risk, fail to reject the Ho (null hypothesis).**
**If the p-value is < the alpha risk, reject the Ho (null hypothesis).**

__Recall:__ **Level of Confidence = 1 - alpha risk**

The p-value is also used to determine if a data distribution meets the normality assumptions. Generally, with an alpha risk of 0.05 this would mean the Confidence Level = 0.95 or 95%.

If the p-value is greater than 0.05, then the data are assumed to meet the normality assumption: there is a single point of central tendency (mean ~ median ~ mode) and a bell-shaped distribution in the histogram. In the ANALYZE phase, a Six Sigma project manager would apply parametric tests.

The data can still be assumed normal even if a Box Plot shows outliers. Outliers do not automatically indicate a non-normal distribution.
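
The sketch below applies the p-value versus alpha risk decision rule to a normality check using the Shapiro-Wilk test (assuming SciPy is available; the data are hypothetical). The test's null hypothesis is that the data come from a normal distribution:

```python
from scipy import stats

data = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0, 10.1, 9.9]

alpha = 0.05  # alpha risk; Confidence Level = 1 - alpha = 95%

# Shapiro-Wilk test; Ho: the data come from a normal distribution
stat, p_value = stats.shapiro(data)

if p_value > alpha:
    print(f"p = {p_value:.3f} > {alpha}: fail to reject Ho; assume normality")
else:
    print(f"p = {p_value:.3f} < {alpha}: reject Ho; data may not be normal")
```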

Most statistical software programs have an option to select 'Equal Variances' when running a t-test or ANOVA. Whether equal or unequal variances are assumed affects the overall estimate of the variance related to error, which in turn affects the Z, t, and F statistic values.

When the assumption of equal variances is violated, the true p-value is higher than the calculated p-value. In other words, the probability of making a Type I error is actually larger than calculated.

If you are assuming equal variances, then that assumption should be tested (just as the residuals in a Regression or ANOVA are tested for normality to meet their respective assumption).
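
As a sketch of testing that assumption (assuming SciPy is available; the data are hypothetical), Levene's test can be run first, with its result deciding between the pooled and Welch forms of the t-test. If Levene's test rejects equal variances, the Welch form avoids the understated p-value described above:

```python
from scipy import stats

group_a = [5.1, 5.4, 5.0, 5.3, 5.2, 5.5]
group_b = [4.8, 6.1, 4.5, 6.3, 5.0, 5.9]

alpha = 0.05

# Levene's test; Ho: the groups have equal variances
_, p_levene = stats.levene(group_a, group_b)

# If Ho is rejected, switch to Welch's t-test (equal_var=False)
equal_var = p_levene > alpha
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=equal_var)

print(f"Levene p = {p_levene:.3f}; equal variances assumed: {equal_var}")
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
```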

