
In statistics, normality tests are used to determine if a data set is well modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
More precisely, the tests are a form of model selection, and can be interpreted several ways, depending on one's interpretation of probability:

- In descriptive statistics terms, one measures a goodness of fit of a normal model to the data: if the fit is poor, then the data are not well modeled in that respect by a normal distribution, without making a judgment on any underlying variable.
- In frequentist statistical hypothesis testing, data are tested against the null hypothesis that they are normally distributed.
- In Bayesian statistics, one does not "test normality" per se, but rather computes the likelihood that the data come from a normal distribution with given parameters μ, σ (for all μ, σ), and compares that with the likelihood that the data come from other distributions under consideration, most simply using a Bayes factor (giving the relative likelihood of seeing the data given different models), or more finely taking a prior distribution on possible models and parameters and computing a posterior distribution given the computed likelihoods.
Contents
1 Graphical methods
2 Back-of-the-envelope test
3 Frequentist tests
4 Bayesian tests
5 Applications
6 Notes
7 References
8 External links
9 Videos
Graphical methods
An informal approach to testing normality is to compare a histogram of the sample data to a normal
probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and
resemble the normal distribution. This might be difficult to see if the sample is small. In this case one
might proceed by regressing the data against the quantiles of a normal distribution with the same mean
and variance as the sample. Lack of fit to the regression line suggests a departure from normality.
A graphical tool for assessing normality is the normal probability plot, a quantile-quantile plot (QQ plot)
of the standardized data against the standard normal distribution. Here the correlation between the
sample data and normal quantiles (a measure of the goodness of fit) measures how well the data are
modeled by a normal distribution. For normal data the points plotted in the QQ plot should fall
approximately on a straight line, indicating high positive correlation. These plots are easy to interpret
and also have the benefit that outliers are easily identified.
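For concreteness, here is a minimal sketch in Python of computing the QQ-plot correlation (scipy.stats.probplot does the quantile pairing; the data below are synthetic):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=5.0, scale=2.0, size=100)  # synthetic example data

    # probplot pairs the ordered sample with theoretical normal quantiles
    # and fits a line; r is the correlation discussed above
    (theo_q, ordered), (slope, intercept, r) = stats.probplot(sample, dist="norm")

    print(f"QQ-plot correlation: {r:.4f}")  # close to 1 suggests normality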
Back-of-the-envelope test
A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (number of sample standard deviations that a sample is above or below the sample mean), and compares it to the 68–95–99.7 rule: if one has a 3σ event (properly, a 3s event) and significantly fewer than 300 samples, or a 4s event and significantly fewer than 15,000 samples, then a normal distribution significantly understates the maximum magnitude of deviations in the sample data. (Under normality, about 99.7% of observations fall within three standard deviations, so a 3σ event is expected only about once per 370 samples.) This test is useful in cases where one faces kurtosis risk, where large deviations matter, and has the benefits that it is very easy to compute and to communicate: non-statisticians can easily grasp that "6σ events don't happen in normal distributions".
Frequentist tests
Tests of univariate normality include D'Agostino's K-squared test, the Jarque–Bera test, the Anderson–Darling test, the Cramér–von Mises criterion, the Lilliefors test for normality (itself an adaptation of the Kolmogorov–Smirnov test), the Shapiro–Wilk test, Pearson's chi-squared test, and the Shapiro–Francia test. A 2011 paper from The Journal of Statistical Modeling and Analytics[1] concludes that Shapiro–Wilk has the best power for a given significance, followed closely by Anderson–Darling, when comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests.
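Several of these tests are available in SciPy; a minimal sketch (function names as in scipy.stats; the Lilliefors variant is found in statsmodels rather than SciPy):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)  # example data

    print(stats.shapiro(x))                # Shapiro–Wilk: statistic and p-value
    print(stats.normaltest(x))             # D'Agostino's K-squared
    print(stats.jarque_bera(x))            # Jarque–Bera
    print(stats.anderson(x, dist="norm"))  # Anderson–Darling: statistic vs. critical values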
Some published works recommend the Jarque–Bera test,[2][3] but it is not without weakness: it has low power for distributions with short tails, especially for bimodal distributions.[4] Other authors have declined to include its data in their studies because of its poor overall performance.[5]

Historically, the third and fourth standardized moments (skewness and kurtosis) were some of the earliest tests for normality. Mardia's multivariate skewness and kurtosis tests generalize the moment tests to the multivariate case.[6] Other early test statistics include the ratio of the mean absolute deviation to the standard deviation and of the range to the standard deviation.[7]
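The moments themselves, and dedicated tests on them, are a few lines in SciPy (a sketch; note that SciPy reports excess kurtosis, so a normal sample scores near 0):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)

    print(stats.skew(x))          # third standardized moment
    print(stats.kurtosis(x))      # fourth standardized moment (excess)

    print(stats.skewtest(x))      # is the skewness consistent with normality?
    print(stats.kurtosistest(x))  # is the kurtosis consistent with normality?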

More recent tests of normality include the energy test[8] (Székely and Rizzo) and the tests based on the empirical characteristic function (ecf) (e.g., Epps and Pulley,[9] Henze–Zirkler,[10] the BHEP test[11]). The energy and the ecf tests are powerful tests that apply for testing univariate or multivariate normality and are statistically consistent against general alternatives.
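To make the energy test concrete, the univariate statistic can be sketched directly; the closed-form expectations below are standard for the standard normal distribution, but this sketch omits the parametric bootstrap normally used to obtain a p-value:

    import numpy as np
    from scipy.stats import norm

    def energy_statistic(x):
        """Székely–Rizzo energy statistic for univariate normality (sketch).
        The sample is standardized by its own mean and standard deviation
        and compared against the standard normal; larger values indicate
        a worse fit."""
        x = np.asarray(x, dtype=float)
        n = x.size
        y = (x - x.mean()) / x.std(ddof=1)

        # E|y_i - Z| for Z ~ N(0, 1), in closed form
        e_yz = 2 * norm.pdf(y) + y * (2 * norm.cdf(y) - 1)
        # E|Z - Z'| for two independent standard normals
        e_zz = 2 / np.sqrt(np.pi)
        # mean pairwise absolute difference within the sample
        e_yy = np.abs(y[:, None] - y[None, :]).mean()

        return n * (2 * e_yz.mean() - e_zz - e_yy)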
Bayesian tests
Kullback–Leibler divergences between the whole posterior distributions of the slope and variance do not indicate non-normality. However, the ratio of expectations of these posteriors and the expectation of the ratios give similar results to the Shapiro–Wilk statistic except for very small samples, when non-informative priors are used.[12]

Spiegelhalter suggests using a Bayes factor to compare normality with a different class of distributional alternatives.[13] This approach has been extended by Farrell and Rogers-Stewart.[14]
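For intuition, here is a minimal sketch of comparing a normal model against one alternative by likelihood; note that plugging in fitted parameters gives a likelihood-ratio approximation, whereas a true Bayes factor integrates over a prior on the parameters:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.standard_t(df=3, size=300)  # heavy-tailed example data

    # log-likelihood under a normal model with fitted (mu, sigma)
    mu, sigma = stats.norm.fit(data)
    ll_normal = stats.norm.logpdf(data, mu, sigma).sum()

    # log-likelihood under a competing model, here a Laplace distribution
    loc, scale = stats.laplace.fit(data)
    ll_laplace = stats.laplace.logpdf(data, loc, scale).sum()

    # positive values favor the normal model, negative the alternative
    print(ll_normal - ll_laplace)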

Applications
One application of normality tests is to the residuals from a linear regression model. If they are not
normally distributed, the residuals should not be used in Z tests or in any other tests derived from the
normal distribution, such as t tests, F tests and chi-squared tests. If the residuals are not normally
distributed, then the dependent variable or at least one explanatory variable may have the wrong
functional form, or important variables may be missing, etc. Correcting one or more of these systematic
errors may produce residuals that are normally distributed.
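A minimal sketch of this workflow (a straight-line fit with NumPy, then a Shapiro–Wilk test on the residuals; the 0.05 cutoff is a conventional choice, not part of the method):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, size=100)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)  # synthetic data

    # least-squares line and its residuals
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (intercept + slope * x)

    w, p = stats.shapiro(residuals)
    if p < 0.05:
        print("residuals look non-normal; t and F tests on them may mislead")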
