
Student's t-test

A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It can be used to determine if two sets of data are significantly different from each other.

A t-test is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is unknown and is replaced by an estimate based on the data, the test statistics (under certain conditions) follow a Student's t-distribution.

1 History

[Image: William Sealy Gosset, who developed the "t-statistic" and published it under the pseudonym of "Student".]

The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist working for the Guinness brewery in Dublin, Ireland. "Student" was his pen name.[1][2][3][4] Gosset had been hired owing to Claude Guinness's policy of recruiting the best graduates from Oxford and Cambridge to apply biochemistry and statistics to Guinness's industrial processes.[2] Gosset devised the t-test as an economical way to monitor the quality of stout. The t-test work was submitted to and accepted in the journal Biometrika and published in 1908.[5] Company policy at Guinness forbade its chemists from publishing their findings, so Gosset published his statistical work under the pseudonym "Student" (see Student's t-distribution for a detailed history of this pseudonym, which is not to be confused with the literal term student). Guinness had a policy of allowing technical staff leave for study (so-called "study leave"), which Gosset used during the first two terms of the 1906–1907 academic year in Professor Karl Pearson's Biometric Laboratory at University College London.[6] Gosset's identity was then known to fellow statisticians and to editor-in-chief Karl Pearson.[7]

It is not clear how much of the work Gosset performed while he was at Guinness and how much was done when he was on study leave at University College London.

2 Uses
Among the most frequently used t-tests are:

• A one-sample location test of whether the mean of a population has a value specified in a null hypothesis.

• A two-sample location test of the null hypothesis such that the means of two populations are equal. All such tests are usually called Student's t-tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t-test. These tests are often referred to as "unpaired" or "independent samples" t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping.[8]

• A test of the null hypothesis that the difference between two responses measured on the same statistical unit has a mean value of zero. For example, suppose we measure the size of a cancer patient's tumor before and after a treatment. If the treatment is effective, we expect the tumor size for many of the patients to be smaller following the treatment. This is often referred to as the "paired" or "repeated measures" t-test:[8][9] see paired difference test.


• A test of whether the slope of a regression line differs significantly from 0.
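Most statistics packages expose each of the tests listed above directly. The following is a minimal sketch in Python with scipy.stats (the arrays a, b, before and after are placeholder data invented for this illustration, not values from the article):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=30)                # sample from population 1
b = rng.normal(loc=5.5, scale=1.2, size=35)                # sample from population 2
before = rng.normal(loc=120.0, scale=10.0, size=20)        # paired measurements,
after = before - rng.normal(loc=3.0, scale=5.0, size=20)   # e.g. pre- and post-treatment

# One-sample location test: is the mean of `a` equal to 5.0?
t1, p1 = stats.ttest_1samp(a, popmean=5.0)

# Two-sample location test assuming equal variances (Student's t-test)
t2, p2 = stats.ttest_ind(a, b, equal_var=True)

# Two-sample location test without the equal-variance assumption (Welch's t-test)
t3, p3 = stats.ttest_ind(a, b, equal_var=False)

# Paired / repeated-measures t-test
t4, p4 = stats.ttest_rel(before, after)

print(p1, p2, p3, p4)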

3 Assumptions
Most t-test statistics have the form t = Z/s, where Z and s are functions of the data. Typically, Z is designed to be sensitive to the alternative hypothesis (i.e., its magnitude tends to be larger when the alternative hypothesis is true), whereas s is a scaling parameter that allows the distribution of t to be determined.

As an example, in the one-sample t-test

t = \frac{Z}{s} = \frac{\bar{X} - \mu}{\hat{\sigma}/\sqrt{n}}

where X̄ is the sample mean from a sample X1, X2, …, Xn of size n, s is the ratio of sample standard deviation over population standard deviation, σ is the population standard deviation of the data, and μ is the population mean.

The assumptions underlying a t-test are that

• X follows a normal distribution with mean μ and variance σ²;
• ps² follows a χ² distribution with p degrees of freedom under the null hypothesis, where p is a positive constant;
• Z and s are independent.

In a specific type of t-test, these conditions are consequences of the population being studied, and of the way in which the data are sampled. For example, in the t-test comparing the means of two independent samples, the following assumptions should be met:

• Each of the two populations being compared should follow a normal distribution. This can be tested using a normality test, such as the Shapiro–Wilk or Kolmogorov–Smirnov test, or it can be assessed graphically using a normal quantile plot.

• If using Student's original definition of the t-test, the two populations being compared should have the same variance (testable using an F-test, Levene's test, Bartlett's test, or the Brown–Forsythe test; or assessable graphically using a Q–Q plot). If the sample sizes in the two groups being compared are equal, Student's original t-test is highly robust to the presence of unequal variances.[10] Welch's t-test is insensitive to equality of the variances regardless of whether the sample sizes are similar.
• The data used to carry out the test should be sampled independently from the two populations being compared. This is in general not testable from the data, but if the data are known to be dependently sampled (i.e., if they were sampled in clusters), then the classical t-tests discussed here may give misleading results.

Most two-sample t-tests are robust to all but large deviations from the assumptions.[11]

4 Unpaired and paired two-sample t-tests

[Figure: Type I error of unpaired and paired two-sample t-tests as a function of the correlation. The simulated random numbers originate from a bivariate normal distribution with a variance of 1. The significance level is 5% and the number of cases is 60.]

[Figure: Power of unpaired and paired two-sample t-tests as a function of the correlation. The simulated random numbers originate from a bivariate normal distribution with a variance of 1 and a deviation of the expected value of 0.4. The significance level is 5% and the number of cases is 60.]

Two-sample t-tests for a difference in mean involve independent (unpaired) samples or paired samples. Paired t-tests are a form of blocking, and have greater power than unpaired tests when the paired units are similar with respect to "noise factors" that are independent of membership in the two groups being compared.[12] In a different context, paired t-tests can be used to reduce the effects of confounding factors in an observational study.
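The behaviour shown in the two figures can be reproduced approximately by simulation. Below is a minimal sketch (added for illustration; the parameters mirror the figure captions: unit variance, a mean difference of 0.4, 60 cases, a 5% significance level, plus one arbitrarily chosen correlation of 0.7) that estimates the power of the unpaired and paired tests on correlated data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, shift, alpha, rho, n_sim = 60, 0.4, 0.05, 0.7, 2000
cov = [[1.0, rho], [rho, 1.0]]                   # bivariate normal: variance 1, correlation rho

reject_unpaired = reject_paired = 0
for _ in range(n_sim):
    x = rng.multivariate_normal([0.0, shift], cov, size=n)
    a, b = x[:, 0], x[:, 1]
    if stats.ttest_ind(a, b).pvalue < alpha:     # ignores the pairing
        reject_unpaired += 1
    if stats.ttest_rel(a, b).pvalue < alpha:     # uses the pairing
        reject_paired += 1

print("unpaired power ~", reject_unpaired / n_sim)
print("paired power   ~", reject_paired / n_sim)

With positively correlated pairs, the paired test rejects the (false) null hypothesis far more often, which is the gain in power described above.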

4.1 Independent (unpaired) samples

The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, one from each of the two populations being compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test. The randomization is not essential here: if we contacted 100 people by phone and obtained each person's age and gender, and then used a two-sample t-test to see whether the mean ages differ by gender, this would also be an independent samples t-test, even though the data are observational.

4.2 Paired samples

Main article: Paired difference test

Paired samples t-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice (a "repeated measures" t-test).

A typical example of the repeated measures t-test would be where subjects are tested prior to a treatment, say for high blood pressure, and the same subjects are tested again after treatment with a blood-pressure-lowering medication. By comparing the same patient's numbers before and after treatment, we are effectively using each patient as their own control. That way the correct rejection of the null hypothesis (here: of no difference made by the treatment) can become much more likely, with statistical power increasing simply because the random between-patient variation has now been eliminated. Note however that an increase of statistical power comes at a price: more tests are required, each subject having to be tested twice. Because half of the sample now depends on the other half, the paired version of Student's t-test has only n/2 − 1 degrees of freedom (with n being the total number of observations). Pairs become individual test units, and the sample has to be doubled to achieve the same number of degrees of freedom.

A paired samples t-test based on a "matched-pairs sample" results from an unpaired sample that is subsequently used to form a paired sample, by using additional variables that were measured along with the variable of interest.[13] The matching is carried out by identifying pairs of values consisting of one observation from each of the two samples, where the pair is similar in terms of other measured variables. This approach is sometimes used in observational studies to reduce or eliminate the effects of confounding factors.

Paired samples t-tests are often referred to as "dependent samples t-tests".

5 Calculations

Explicit expressions that can be used to carry out various t-tests are given below. In each case, the formula for a test statistic that either exactly follows or closely approximates a t-distribution under the null hypothesis is given. Also, the appropriate degrees of freedom are given in each case. Each of these statistics can be used to carry out either a one-tailed or two-tailed test.

Once the t value and degrees of freedom are determined, a p-value can be found using a table of values from Student's t-distribution. If the calculated p-value is below the threshold chosen for statistical significance (usually the 0.10, the 0.05, or the 0.01 level), then the null hypothesis is rejected in favor of the alternative hypothesis.

5.1 One-sample t-test

In testing the null hypothesis that the population mean is equal to a specified value μ0, one uses the statistic

t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}

where x̄ is the sample mean, s is the sample standard deviation of the sample and n is the sample size. The degrees of freedom used in this test are n − 1. Although the parent population does not need to be normally distributed, the distribution of the population of sample means x̄ is assumed to be normal. By the central limit theorem, if the sampling of the parent population is independent and the first and second moments of the parent population exist, then the sample means will be approximately normal.[14] (The degree of approximation will depend on how close the parent population is to a normal distribution and on the sample size, n.)
5.2 Slope of a regression line

Suppose one is fitting the model

Y = \alpha + \beta x + \varepsilon

where x is known, α and β are unknown, ε is a normally distributed random variable with mean 0 and unknown variance σ², and Y is the outcome of interest. We want to test the null hypothesis that the slope β is equal to some specified value β0 (often taken to be 0, in which case the null hypothesis is that x and y are independent).

Let

\hat{\alpha}, \hat{\beta} = \text{least-squares estimators},
SE_{\hat{\alpha}}, SE_{\hat{\beta}} = \text{the standard errors of the least-squares estimators}.

Then

t_{\text{score}} = \frac{\hat{\beta} - \beta_0}{SE_{\hat{\beta}}} \sim \mathcal{T}_{n-2}

has a t-distribution with n − 2 degrees of freedom if the null hypothesis is true. The standard error of the slope coefficient

SE_{\hat{\beta}} = \frac{\sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}}

can be written in terms of the residuals. Let

\hat{\varepsilon}_i = y_i - \hat{y}_i = y_i - (\hat{\alpha} + \hat{\beta} x_i) = \text{residuals (estimated errors)},
\text{SSR} = \sum_{i=1}^{n} \hat{\varepsilon}_i^{\,2} = \text{sum of squares of residuals}.

Then t_score is given by

t_{\text{score}} = \frac{(\hat{\beta} - \beta_0)\sqrt{n-2}}{\sqrt{\text{SSR}/\sum_{i=1}^{n}(x_i - \bar{x})^2}}.
5.3 Independent two-sample t-test

5.3.1 Equal sample sizes, equal variance

Given two groups (1, 2), this test is only applicable when:

• the two sample sizes (that is, the number n of participants in each group) are equal;
• it can be assumed that the two distributions have the same variance.

Violations of these assumptions are discussed below. The t statistic to test whether the means are different can be calculated as follows:

t = \frac{\bar{X}_1 - \bar{X}_2}{s_p \sqrt{2/n}}

where

s_p = \sqrt{\frac{s_{X_1}^2 + s_{X_2}^2}{2}}.

Here s_p is the pooled standard deviation for n = n1 = n2, and s_{X1}² and s_{X2}² are the unbiased estimators of the variances of the two samples. The denominator of t is the standard error of the difference between two means.

For significance testing, the degrees of freedom for this test is 2n − 2, where n is the number of participants in each group.

5.3.2 Equal or unequal sample sizes, equal variance

This test is used only when it can be assumed that the two distributions have the same variance. (When this assumption is violated, see below.) Note that the previous formulae are a special case valid when both samples have equal sizes: n = n1 = n2. The t statistic to test whether the means are different can be calculated as follows:

t = \frac{\bar{X}_1 - \bar{X}_2}{s_p \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}

where

s_p = \sqrt{\frac{(n_1 - 1)s_{X_1}^2 + (n_2 - 1)s_{X_2}^2}{n_1 + n_2 - 2}}

is an estimator of the pooled standard deviation of the two samples: it is defined in this way so that its square is an unbiased estimator of the common variance whether or not the population means are the same. In these formulae, n_i − 1 is the number of degrees of freedom for each group, and the total sample size minus two (that is, n1 + n2 − 2) is the total number of degrees of freedom, which is used in significance testing.
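A minimal sketch of the pooled (equal-variance) form, written out from the formulas above and checked against the library routine (the two data sets reuse A1 and A2 from the worked examples below):

import numpy as np
from scipy import stats

def pooled_t_test(x1, x2):
    # Student's two-sample t-test with pooled variance (equal or unequal n).
    n1, n2 = len(x1), len(x2)
    v1, v2 = np.var(x1, ddof=1), np.var(x2, ddof=1)                  # unbiased variance estimators
    sp = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))    # pooled standard deviation
    t = (np.mean(x1) - np.mean(x2)) / (sp * np.sqrt(1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

x1 = [30.02, 29.99, 30.11, 29.97, 30.01, 29.99]   # A1 from the worked examples below
x2 = [29.89, 29.93, 29.72, 29.98, 30.02, 29.98]   # A2 from the worked examples below
print(pooled_t_test(x1, x2))
print(stats.ttest_ind(x1, x2, equal_var=True))    # should agree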
5.3.3 Equal or unequal sample sizes, unequal variances

Main article: Welch's t-test

This test, also known as Welch's t-test, is used only when the two population variances are not assumed to be equal (the two sample sizes may or may not be equal) and hence must be estimated separately. The t statistic to test whether the population means are different is calculated as

t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{\Delta}}}

where

s_{\bar{\Delta}} = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}.

Here s_i² is the unbiased estimator of the variance of each of the two samples, with n_i = number of participants in group i (i = 1 or 2). Note that in this case s_{\bar{\Delta}}² is not a pooled variance. For use in significance testing, the distribution of the test statistic is approximated as an ordinary Student's t distribution with the degrees of freedom calculated using

\text{d.f.} = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{(s_1^2/n_1)^2/(n_1-1) + (s_2^2/n_2)^2/(n_2-1)}.

This is known as the Welch–Satterthwaite equation. The true distribution of the test statistic actually depends (slightly) on the two unknown population variances (see Behrens–Fisher problem).
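Welch's statistic and the Welch–Satterthwaite degrees of freedom can likewise be computed directly; a minimal sketch, again cross-checked against scipy (the data reuse A1 and A2 from the worked examples):

import numpy as np
from scipy import stats

def welch_t_test(x1, x2):
    # Welch's t-test with Welch–Satterthwaite degrees of freedom.
    n1, n2 = len(x1), len(x2)
    v1, v2 = np.var(x1, ddof=1), np.var(x2, ddof=1)
    se2 = v1 / n1 + v2 / n2                      # squared standard error of the mean difference
    t = (np.mean(x1) - np.mean(x2)) / np.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

x1 = [30.02, 29.99, 30.11, 29.97, 30.01, 29.99]
x2 = [29.89, 29.93, 29.72, 29.98, 30.02, 29.98]
print(welch_t_test(x1, x2))
print(stats.ttest_ind(x1, x2, equal_var=False))  # should agree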

5.4 Dependent t-test for paired samples

This test is used when the samples are dependent; that is, when there is only one sample that has been tested twice (repeated measures) or when there are two samples that have been matched or paired. This is an example of a paired difference test.

t = \frac{\bar{X}_D - \mu_0}{s_D/\sqrt{n}}

For this equation, the differences between all pairs must be calculated. The pairs are either one person's pre-test and post-test scores or between pairs of persons matched into meaningful groups (for instance drawn from the same family or age group: see table). The average (X̄_D) and standard deviation (s_D) of those differences are used in the equation. The constant μ0 is non-zero if you want to test whether the average of the difference is significantly different from μ0. The degrees of freedom used is n − 1, where n represents the number of pairs.
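A minimal sketch of this calculation (the before/after values are invented for illustration); with μ0 = 0 it matches scipy.stats.ttest_rel:

import numpy as np
from scipy import stats

before = np.array([120.0, 132.0, 118.0, 141.0, 125.0, 130.0, 138.0, 127.0])  # invented pre-treatment values
after  = np.array([115.0, 130.0, 119.0, 135.0, 121.0, 129.0, 131.0, 122.0])  # invented post-treatment values
mu0 = 0.0                                    # hypothesized mean difference

d = before - after                           # difference for each pair
n = d.size
t = (d.mean() - mu0) / (d.std(ddof=1) / np.sqrt(n))
p = 2 * stats.t.sf(abs(t), df=n - 1)         # n − 1 degrees of freedom, n = number of pairs

print(t, p)
print(stats.ttest_rel(before, after))        # should agree when mu0 = 0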

6 Worked examples

Let A1 denote a set obtained by drawing a random sample of six measurements:

A1 = {30.02, 29.99, 30.11, 29.97, 30.01, 29.99}

and let A2 denote a second set obtained similarly:

A2 = {29.89, 29.93, 29.72, 29.98, 30.02, 29.98}

These could be, for example, the weights of screws that were chosen out of a bucket.

We will carry out tests of the null hypothesis that the means of the populations from which the two samples were taken are equal.

The difference between the two sample means, each denoted by X̄_i, which appears in the numerator for all the two-sample testing approaches discussed above, is

\bar{X}_1 - \bar{X}_2 = 0.095.

The sample standard deviations for the two samples are approximately 0.05 and 0.11, respectively. For such small samples, a test of equality between the two population variances would not be very powerful. Since the sample sizes are equal, the two forms of the two-sample t-test will perform similarly in this example.

6.1 Unequal variances

If the approach for unequal variances (discussed above) is followed, the results are

\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \approx 0.04849

and the degrees of freedom

\text{d.f.} \approx 6.982.

The test statistic is approximately 1.959, which gives a two-tailed test p-value of 0.0956.

6.2 Equal variances

If the approach for equal variances (discussed above) is followed, the results are

s_p \approx 0.084

and the degrees of freedom

\text{d.f.} = 10.

The test statistic is approximately equal to 1.959, which gives a two-tailed p-value of 0.07857.
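These figures are easy to verify; the following sketch reruns both forms of the two-sample test on A1 and A2 and should reproduce, up to rounding, the values quoted above:

import numpy as np
from scipy import stats

A1 = np.array([30.02, 29.99, 30.11, 29.97, 30.01, 29.99])
A2 = np.array([29.89, 29.93, 29.72, 29.98, 30.02, 29.98])

print(A1.mean() - A2.mean())                       # difference of the means, about 0.095
print(A1.std(ddof=1), A2.std(ddof=1))              # sample standard deviations, about 0.05 and 0.11

print(stats.ttest_ind(A1, A2, equal_var=False))    # Welch form: t ≈ 1.959, p ≈ 0.096
print(stats.ttest_ind(A1, A2, equal_var=True))     # pooled form: t ≈ 1.959, p ≈ 0.079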

7 Alternatives to the t-test for location problems

The t-test provides an exact test for the equality of the means of two normal populations with unknown, but equal, variances. (Welch's t-test is a nearly exact test for the case where the data are normal but the variances may differ.) For moderately large samples and a one-tailed test, the t-test is relatively robust to moderate violations of the normality assumption.[15]

For exactness, the t-test and Z-test require normality of the sample means, and the t-test additionally requires that the sample variance follows a scaled χ² distribution, and that the sample mean and sample variance be statistically independent. Normality of the individual data values is not required if these conditions are met. By the central limit theorem, sample means of moderately large samples are often well approximated by a normal distribution even if the data are not normally distributed. For non-normal data, the distribution of the sample variance may deviate substantially from a χ² distribution. However, if the sample size is large, Slutsky's theorem implies that the distribution of the sample variance has little effect on the distribution of the test statistic. If the data are substantially non-normal and the sample size is small, the t-test can give misleading results. See Location test for Gaussian scale mixture distributions for some theory related to one particular family of non-normal distributions.

When the normality assumption does not hold, a nonparametric alternative to the t-test can often have better statistical power. For example, for two independent samples when the data distributions are asymmetric (that is, the distributions are skewed) or the distributions have large tails, then the Wilcoxon rank-sum test (also known as the Mann–Whitney U test) can have three to four times higher power than the t-test.[15][16][17] The nonparametric counterpart to the paired samples t-test is the Wilcoxon signed-rank test for paired samples. For a discussion on choosing between the t-test and nonparametric alternatives, see Sawilowsky (2005).[18]
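Both nonparametric alternatives are available in standard libraries; a brief sketch (the skewed samples are placeholders generated for this illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.exponential(scale=1.0, size=40)          # skewed, non-normal independent samples
b = rng.exponential(scale=1.5, size=40)
before = rng.exponential(scale=1.0, size=30)     # skewed paired measurements
after = before * rng.uniform(0.6, 1.1, size=30)

# Wilcoxon rank-sum / Mann–Whitney U test for two independent samples
print(stats.mannwhitneyu(a, b, alternative="two-sided"))

# Wilcoxon signed-rank test, the nonparametric counterpart of the paired t-test
print(stats.wilcoxon(before, after))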
One-way analysis of variance (ANOVA) generalizes the two-sample t-test when the data belong to more than two groups.

8 Multivariate testing

Main article: Hotelling's T-squared distribution

A generalization of Student's t statistic, called Hotelling's t-squared statistic, allows for the testing of hypotheses on multiple (often correlated) measures within the same sample. For instance, a researcher might submit a number of subjects to a personality test consisting of multiple personality scales (e.g. the Minnesota Multiphasic Personality Inventory). Because measures of this type are usually positively correlated, it is not advisable to conduct separate univariate t-tests to test hypotheses, as these would neglect the covariance among measures and inflate the chance of falsely rejecting at least one hypothesis (Type I error). In this case a single multivariate test is preferable for hypothesis testing. One approach is Fisher's method for combining multiple tests, with alpha reduced for positive correlation among the tests. Another is the use of Hotelling's T² statistic, which follows a T² distribution. However, in practice the T² distribution is rarely used, since tabulated values for T² are hard to find. Usually, T² is converted instead to an F statistic.

For a one-sample multivariate test, the hypothesis is that the mean vector (μ) is equal to a given vector (μ0). The test statistic is Hotelling's t²:

t^2 = n(\bar{x} - \mu_0)' S^{-1} (\bar{x} - \mu_0)

where n is the sample size, x̄ is the vector of column means and S is an m × m sample covariance matrix.

For a two-sample multivariate test, the hypothesis is that the mean vectors (μ1, μ2) of two samples are equal. The test statistic is Hotelling's two-sample t²:

t^2 = \frac{n_1 n_2}{n_1 + n_2} (\bar{x}_1 - \bar{x}_2)' S_{\text{pooled}}^{-1} (\bar{x}_1 - \bar{x}_2).
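The one-sample statistic can be computed directly from its definition; the sketch below uses invented three-variable data and the standard conversion T²·(n − m)/(m(n − 1)) ~ F(m, n − m) to obtain a p-value:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, m = 25, 3                                   # n observations of m correlated measures
X = rng.multivariate_normal([0.2, 0.1, 0.0],
                            [[1.0, 0.5, 0.3],
                             [0.5, 1.0, 0.4],
                             [0.3, 0.4, 1.0]], size=n)
mu0 = np.zeros(m)                              # hypothesized mean vector

xbar = X.mean(axis=0)                          # vector of column means
S = np.cov(X, rowvar=False)                    # m x m sample covariance matrix
diff = xbar - mu0
t2 = n * diff @ np.linalg.solve(S, diff)       # t² = n (x̄ − μ0)' S⁻¹ (x̄ − μ0)

F = t2 * (n - m) / (m * (n - 1))               # convert T² to an F statistic
p = stats.f.sf(F, m, n - m)
print(t2, F, p)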
sample. For instance, a researcher might submit a num-
ber of subjects to a personality test consisting of multiple Analysis of variance (ANOVA)
personality scales (e.g. the Minnesota Multiphasic Per-
sonality Inventory). Because measures of this type are
usually positively correlated, it is not advisable to con- 11 Notes
duct separate univariate t-tests to test hypotheses, as these
would neglect the covariance among measures and in- [1] Richard Mankiewicz (2004). The Story of Mathemat-
ate the chance of falsely rejecting at least one hypoth- ics (Paperback ed.). Princeton, NJ: Princeton University
esis (Type I error). In this case a single multivariate test Press. p. 158. ISBN 9780691120461.

[2] O'Connor, John J.; Robertson, Edmund F., "William Sealy Gosset", MacTutor History of Mathematics archive, University of St Andrews.

[3] Fisher Box, Joan (1987). "Guinness, Gosset, Fisher, and Small Samples". Statistical Science. 2 (1): 45–52. doi:10.1214/ss/1177013437. JSTOR 2245613.

[4] http://www.aliquote.org/cours/2012_biomed/biblio/Student1908.pdf

[5] "The Probable Error of a Mean" (PDF). Biometrika. 6 (1): 1–25. 1908. doi:10.1093/biomet/6.1.1. Retrieved 24 July 2016.

[6] Raju, T. N. (2005). "William Sealy Gosset and William A. Silverman: Two 'students' of science". Pediatrics. 116 (3): 732–5. doi:10.1542/peds.2005-1134. PMID 16140715.

[7] Dodge, Yadolah (2008). The Concise Encyclopedia of Statistics. Springer Science & Business Media. pp. 234–5. ISBN 978-0-387-31742-7.

[8] Fadem, Barbara (2008). High-Yield Behavioral Science (High-Yield Series). Hagerstown, MD: Lippincott Williams & Wilkins. ISBN 0-7817-8258-9.

[9] Zimmerman, Donald W. (1997). "A Note on Interpretation of the Paired-Samples t Test". Journal of Educational and Behavioral Statistics. 22 (3): 349–360. doi:10.3102/10769986022003349. JSTOR 1165289.

[10] Markowski, Carol A.; Markowski, Edward P. (1990). "Conditions for the Effectiveness of a Preliminary Test of Variance". The American Statistician. 44 (4): 322–326. doi:10.2307/2684360. JSTOR 2684360.

[11] Martin Bland (1995). An Introduction to Medical Statistics. Oxford University Press. p. 168. ISBN 978-0-19-262428-4.

[12] John A. Rice (2006). Mathematical Statistics and Data Analysis, Third Edition. Duxbury Advanced.

[13] David, H. A.; Gunnink, Jason L. (1997). "The Paired t Test Under Artificial Pairing". The American Statistician. 51 (1): 9–12. doi:10.2307/2684684. JSTOR 2684684.

[14] George Box, William Hunter, and J. Stuart Hunter, Statistics for Experimenters, ISBN 978-0471093152, pp. 66–67.

[15] Sawilowsky, Shlomo S.; Blair, R. Clifford (1992). "A More Realistic Look at the Robustness and Type II Error Properties of the t Test to Departures From Population Normality". Psychological Bulletin. 111 (2): 352–360. doi:10.1037/0033-2909.111.2.352.

[16] Blair, R. Clifford; Higgins, James J. (1980). "A Comparison of the Power of Wilcoxon's Rank-Sum Statistic to That of Student's t Statistic Under Various Nonnormal Distributions". Journal of Educational Statistics. 5 (4): 309–335. doi:10.2307/1164905. JSTOR 1164905.

[17] Fay, Michael P.; Proschan, Michael A. (2010). "Wilcoxon–Mann–Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules". Statistics Surveys. 4: 1–39. doi:10.1214/09-SS051. PMC 2857732. PMID 20414472.

[18] Sawilowsky, Shlomo S. (2005). "Misconceptions Leading to Choosing the t Test Over the Wilcoxon Mann–Whitney Test for Shift in Location Parameter". Journal of Modern Applied Statistical Methods. 4 (2): 598–600. Retrieved 2014-06-18.

12 References

• O'Mahony, Michael (1986). Sensory Evaluation of Food: Statistical Methods and Procedures. CRC Press. p. 487. ISBN 0-82477337-3.
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1992). Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press. p. 616. ISBN 0-521-43108-5. Archived from the original on 2015-11-28.

13 Further reading

• Boneau, C. Alan (1960). "The effects of violations of assumptions underlying the t test". Psychological Bulletin. 57 (1): 49–64. doi:10.1037/h0041412.
• Edgell, Stephen E.; Noon, Sheila M. (1984). "Effect of violation of normality on the t test of the correlation coefficient". Psychological Bulletin. 95 (3): 576–583. doi:10.1037/0033-2909.95.3.576.

14 External links

• Hazewinkel, Michiel, ed. (2001), "Student test", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• A conceptual article on the Student's t-test
• Econometrics lecture (topic: hypothesis testing) on YouTube by Mark Thoma