
Nonparametric statistics (also called "distribution-free statistics") are statistics that can describe some attribute of a population, or test hypotheses about that attribute, its relationship with some other attribute, or differences on that attribute across populations, across time, or across related constructs, while requiring no assumptions about the form of the population data distribution(s) and no interval-level measurement.

1. The first meaning of non-parametric covers techniques that do not rely on
data belonging to any particular distribution. These include, among others:
• Distribution-free methods, which do not rely on assumptions that the
data are drawn from a given probability distribution. As such, they are the
opposite of parametric statistics. They include non-parametric statistical
models, inference, and statistical tests.
• Non-parametric statistics (in the sense of a statistic over data, which is
defined to be a function on a sample that has no dependency on
a parameter), whose interpretation does not depend on the population
fitting any parametrized distributions. Statistics based on the ranks of
observations are one example of such statistics and these play a central
role in many non-parametric approaches.
2. The second meaning of non-parametric covers techniques that do not
assume that the structure of a model is fixed. Typically, the model grows in
size to accommodate the complexity of the data. In these techniques,
individual variables are typically assumed to belong to parametric
distributions, and assumptions about the types of connections among
variables are also made. These techniques include, among others:
• Non-parametric regression, which refers to modeling where the
structure of the relationship between variables is treated non-
parametrically, but where there may nevertheless be parametric
assumptions about the distribution of model residuals (see the sketch
after this list).
• Non-parametric hierarchical Bayesian models, such as models based
on the Dirichlet process, which allow the number of latent variables to
grow as necessary to fit the data, but where individual variables still follow
parametric distributions and even the process controlling the rate of growth
of latent variables follows a parametric distribution.
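
As a concrete illustration of non-parametric regression, here is a minimal sketch of the Nadaraya-Watson kernel estimator (the estimator choice, bandwidth value, and synthetic data are my own illustrative assumptions, not from the text above); note that the Gaussian noise on the residuals is itself a parametric assumption, matching the point made in the list:

    import numpy as np

    def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.4):
        # Estimate m(x) as a locally weighted average of the observed y values;
        # no parametric form is assumed for the regression function m itself.
        diffs = (x_query[:, None] - x_train[None, :]) / bandwidth
        weights = np.exp(-0.5 * diffs**2)          # Gaussian kernel weights
        return (weights * y_train).sum(axis=1) / weights.sum(axis=1)

    # Synthetic data: nonlinear signal plus Gaussian noise (the Gaussian
    # residuals are a parametric assumption about the errors)
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2 * np.pi, 200)
    y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

    grid = np.linspace(0, 2 * np.pi, 50)
    y_hat = nadaraya_watson(x, y, grid)            # smoothed estimate on a grid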
Examples:

1. The Binomial Test procedure compares an observed proportion of cases
to the proportion expected under a binomial distribution with a specified probability parameter.
The observed proportion is defined either by the number of cases having the first value of a
dichotomous variable or by the number of cases at or below a given cut point on a scale variable.
By default, the probability parameter for both groups is 0.5, although this may be changed. To
change the probability, you enter a test proportion for the first group; the probability for the
second group is equal to 1 minus the probability for the first group. Additionally, descriptive
statistics and/or quartiles for the test variable may be displayed.
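
A minimal sketch of this test in Python, using SciPy's binomtest (the counts and test proportion below are invented for illustration):

    from scipy.stats import binomtest

    # 60 cases with the first value of a dichotomous variable, out of 100
    result = binomtest(k=60, n=100, p=0.5)   # default test proportion 0.5
    print(result.pvalue)                     # two-sided p-value

    # Changing the test proportion for the first group to 0.7;
    # the second group's probability is implicitly 1 - 0.7 = 0.3
    result = binomtest(k=60, n=100, p=0.7)
    print(result.pvalue)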
2. The Chi-Square Test procedure tabulates a variable into categories and tests the hypothesis that
the observed frequencies do not differ from their expected values. The Chi-Square Test allows you
to:
• Include all categories of the test variable, or limit the test to a specific range.
• Use standard or customized expected values.
• Obtain descriptive statistics and/or quartiles on the test variable.
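
A minimal sketch of the chi-square goodness-of-fit test with SciPy (the category counts are invented):

    import numpy as np
    from scipy.stats import chisquare

    observed = np.array([18, 22, 30, 30])    # observed counts in four categories

    # Standard expected values: equal frequencies across categories
    stat, p = chisquare(observed)
    print(stat, p)

    # Customized expected values (must sum to the same total as the observed counts)
    stat, p = chisquare(observed, f_exp=np.array([25, 25, 25, 25]))
    print(stat, p)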

Runs Test
A statistical procedure that examines whether a sequence of data occurs randomly given a
specific distribution. The runs test analyzes the occurrence of similar events that are
separated by events that are different.
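
A sketch of the Wald-Wolfowitz runs test using the large-sample normal approximation (this hand-rolled version is for illustration only and assumes a 0/1 sequence):

    import numpy as np
    from scipy.stats import norm

    def runs_test(binary):
        # Runs test for randomness of a 0/1 sequence (normal approximation).
        x = np.asarray(binary)
        n1, n2 = np.sum(x == 0), np.sum(x == 1)
        n = n1 + n2
        runs = 1 + np.sum(x[1:] != x[:-1])   # a run ends where consecutive values differ
        mean = 2 * n1 * n2 / n + 1
        var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
        z = (runs - mean) / np.sqrt(var)
        return runs, z, 2 * norm.sf(abs(z))  # two-sided p-value

    # A strictly alternating sequence has far more runs than expected by chance
    print(runs_test([0, 1] * 15))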

The Kolmogorov–Smirnov test (K–S test) is a nonparametric test for the equality of continuous, one-
dimensional probability distributions that can be used to compare a sample with a reference probability
distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–
Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the
cumulative distribution function of the reference distribution, or between the empirical distribution
functions of two samples. The null distribution of this statistic is calculated under the null hypothesis
that the samples are drawn from the same distribution (in the two-sample case) or that the sample is
drawn from the reference distribution (in the one-sample case). In each case, the distributions
considered under the null hypothesis are continuous distributions but are otherwise unrestricted.

The two-sample KS test is one of the most useful and general nonparametric methods for comparing
two samples, as it is sensitive to differences in both location and shape of the empirical cumulative
distribution functions of the two samples.
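
Both forms of the K–S test are available in SciPy; a brief sketch with simulated data:

    import numpy as np
    from scipy.stats import kstest, ks_2samp, norm

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=0.0, scale=1.0, size=300)

    # One-sample K-S: compare the sample's empirical CDF with a reference CDF
    stat, p = kstest(sample, norm.cdf)
    print(stat, p)

    # Two-sample K-S: compare the empirical CDFs of two samples;
    # sensitive to differences in both location and shape
    other = rng.normal(loc=0.5, scale=1.5, size=300)
    stat, p = ks_2samp(sample, other)
    print(stat, p)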

In statistics, the Siegel–Tukey test, named after Sidney Siegel and John Tukey, is a non-parametric test
which may be applied to data measured at least on an ordinal scale. It tests for differences in scale
between two groups.

The test is used to determine whether one of two groups of data tends to have more widely dispersed
values than the other. In other words, the test determines whether the values of one of the two groups
tend to lie farther from the center (of the ordinal scale), sometimes to the right and sometimes to the left.
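
SciPy has no built-in Siegel–Tukey test, so the sketch below implements the usual recipe by hand: assign alternating ranks from the two extremes of the pooled, sorted data, then apply a Mann–Whitney U (rank-sum) test to those ranks. Ties are ignored for simplicity, and the data are invented:

    import numpy as np
    from scipy.stats import mannwhitneyu

    def siegel_tukey_ranks(n):
        # Rank pattern for n sorted values: the extremes get the smallest
        # ranks (1 from the low end, then pairs alternating high/low),
        # so central values end up with the largest ranks.
        ranks = np.empty(n)
        lo, hi, r = 0, n - 1, 1
        take_low, count = True, 1
        while lo <= hi:
            for _ in range(count):
                if lo > hi:
                    break
                if take_low:
                    ranks[lo] = r
                    lo += 1
                else:
                    ranks[hi] = r
                    hi -= 1
                r += 1
            take_low = not take_low
            count = 2
        return ranks

    def siegel_tukey(x, y):
        data = np.concatenate([x, y])
        labels = np.concatenate([np.zeros(len(x)), np.ones(len(y))])
        order = np.argsort(data, kind="stable")
        ranks = np.empty(len(data))
        ranks[order] = siegel_tukey_ranks(len(data))  # ranks in original order
        # Rank-sum (Mann-Whitney U) test applied to the Siegel-Tukey ranks
        return mannwhitneyu(ranks[labels == 0], ranks[labels == 1],
                            alternative="two-sided")

    rng = np.random.default_rng(2)
    a = rng.normal(0, 1, 30)    # narrower spread
    b = rng.normal(0, 3, 30)    # wider spread
    print(siegel_tukey(a, b))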
Parametric statistical procedures rely on assumptions about the shape of the distribution
(i.e., assume a normal distribution) in the underlying population and about the form or
parameters (i.e., means and standard deviations) of the assumed distribution.

Nonparametric statistical procedures rely on no or few assumptions about the shape or
parameters of the population distribution from which the sample was drawn.

Parametric statistics is a branch of statistics that assumes that data have come
from a type of probability distribution and makes inferences about the parameters of
the distribution. Most well-known elementary statistical methods are parametric.

Paired t test

This function gives a paired Student t test, confidence intervals for the difference between a pair of
means and, optionally, limits of agreement for a pair of samples (Armitage and Berry, 1994; Altman,
1991).

The paired t test provides a hypothesis test of the difference between population means for a pair of
random samples whose differences are approximately normally distributed. Please note that a pair of
samples, neither of which is from a normal distribution, often yields differences that are normally
distributed.
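
A minimal sketch with SciPy (the before/after measurements are invented):

    import numpy as np
    from scipy.stats import ttest_rel, t

    before = np.array([120, 132, 118, 140, 125, 133, 128, 121])
    after = np.array([115, 128, 120, 134, 122, 130, 124, 119])

    stat, p = ttest_rel(before, after)        # paired t test
    print(stat, p)

    # 95% confidence interval for the mean difference
    diff = before - after
    se = diff.std(ddof=1) / np.sqrt(diff.size)
    print(t.interval(0.95, df=diff.size - 1, loc=diff.mean(), scale=se))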

Single sample t test

This function gives a single sample Student t test with a confidence interval for the mean difference.

The single sample t method tests the null hypothesis that the population mean is equal to a specified
value. If this value is zero (or not entered), then the confidence interval for the sample mean is given.
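
For example, with SciPy (the data and test value are invented; the confidence_interval method requires SciPy 1.10 or later):

    import numpy as np
    from scipy.stats import ttest_1samp

    x = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.4, 4.7])

    res = ttest_1samp(x, popmean=5.0)          # H0: population mean is 5.0
    print(res.statistic, res.pvalue)
    print(res.confidence_interval(0.95))       # CI for the population mean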

The unpaired t method tests the null hypothesis that the population means related to
two independent, random samples from approximately normal distributions are equal.
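
A brief sketch with SciPy (invented samples):

    import numpy as np
    from scipy.stats import ttest_ind

    g1 = np.array([14.2, 15.1, 13.8, 14.9, 15.4, 14.1])
    g2 = np.array([13.2, 13.9, 12.8, 13.5, 14.0, 13.1])

    # Classic unpaired t test assumes equal variances;
    # pass equal_var=False for Welch's variant
    stat, p = ttest_ind(g1, g2)
    print(stat, p)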

A Z-test is any statistical test for which the distribution of the test statistic under
the null hypothesis can be approximated by a normal distribution. Due to the central
limit theorem, many test statistics are approximately normally distributed for large
samples. Therefore, many statistical tests can be performed as approximate Z-tests
if the sample size is not too small.
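
A sketch of a one-sample Z-test computed directly from the normal approximation (the data and the assumed known sigma are invented):

    import numpy as np
    from scipy.stats import norm

    def one_sample_ztest(x, mu0, sigma):
        # Z statistic for H0: population mean equals mu0, with known sigma
        x = np.asarray(x)
        z = (x.mean() - mu0) / (sigma / np.sqrt(x.size))
        return z, 2 * norm.sf(abs(z))          # two-sided p-value

    rng = np.random.default_rng(3)
    sample = rng.normal(loc=100.5, scale=15.0, size=200)
    print(one_sample_ztest(sample, mu0=100.0, sigma=15.0))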
