
Type I and type II errors

This article is about erroneous outcomes of statistical tests. For related, but non-synonymous terms in binary classification and testing generally, see false positives and false negatives.

In statistical hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (a false positive), while a type II error is the failure to reject a false null hypothesis (a false negative). More simply stated, a type I error is detecting an effect that is not present, while a type II error is failing to detect an effect that is present.

The terms type I error and type II error are often used interchangeably with the general notion of false positives and false negatives in binary classification, such as medical testing, but narrowly speaking refer specifically to statistical hypothesis testing in the Neyman–Pearson framework, as discussed in this article.

In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis (or instead choosing the alternative hypothesis, if one exists), while a negative result corresponds to failing to reject the null hypothesis (or choosing the null hypothesis, if phrased as a binary decision); roughly, positive = alternative and negative = null, or in some cases positive = null and negative = alternative, depending on the situation and requirements, though the exact interpretation differs. In these terms, a type I error is a false positive (incorrectly choosing the alternative hypothesis instead of the null hypothesis), and a type II error is a false negative (incorrectly choosing the null hypothesis instead of the alternative hypothesis).

When comparing two means, concluding the means were different when in reality they were not different would be a type I error; concluding the means were not different when in reality they were different would be a type II error. Various extensions have been suggested as "type III errors", though none have wide use.

All statistical hypothesis tests have a probability of making type I and type II errors. For example, all blood tests for a disease will falsely detect the disease in some proportion of people who don't have it, and will fail to detect the disease in some proportion of people who do have it. A test's probability of making a type I error is denoted by α. A test's probability of making a type II error is denoted by β. These error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error. For a given test, the only way to reduce both error rates is to increase the sample size, and this may not be feasible.

These terms are also used in a more general way by social scientists and others to refer to flaws in reasoning.[3] This article is specifically devoted to the statistical meanings of those terms and the technical issues of the statistical errors that those terms describe.

1 Definition

In statistics, a null hypothesis is a statement that one seeks to nullify with evidence to the contrary. Most commonly it is a statement that the phenomenon being studied produces no effect or makes no difference. An example of a null hypothesis is the statement "This diet has no effect on people's weight." Usually an experimenter frames a null hypothesis with the intent of rejecting it: that is, intending to run an experiment which produces data that shows that the phenomenon under study does make a difference.[1] In some cases there is a specific alternative hypothesis that is opposed to the null hypothesis; in other cases the alternative hypothesis is not explicitly stated, or is simply "the null hypothesis is false". In either event this is a binary judgment, but the interpretation differs and is a matter of significant dispute in statistics.

A type I error (or error of the first kind) is the incorrect rejection of a true null hypothesis. Usually a type I error leads one to conclude that a supposed effect or relationship exists when in fact it doesn't. Examples of type I errors include a test that shows a patient to have a disease when in fact the patient does not have the disease, a fire alarm going off indicating a fire when in fact there is no fire, or an experiment indicating that a medical treatment should cure a disease when in fact it does not.

A type II error (or error of the second kind) is the failure to reject a false null hypothesis. Examples of type II errors would be a blood test failing to detect the disease it was designed to detect, in a patient who really has the disease; a fire breaking out and the fire alarm does not ring; or a clinical trial of a medical treatment failing to show that the treatment works when really it does.[2]

2 Statistical test theory

In statistical test theory the notion of statistical error is an integral part of hypothesis testing. The test requires an unambiguous statement of a null hypothesis, which usually corresponds to a default state of nature, for example "this person is healthy", "this accused is not guilty" or "this product is not broken". An alternative hypothesis is the negation of the null hypothesis, for example, "this person is not healthy", "this accused is guilty" or "this product is broken". The result of the test may be negative, relative to the null hypothesis (not healthy, guilty, broken) or positive (healthy, not guilty, not broken). If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. Due to the statistical nature of a test, the result is never, except in very rare cases, free of error. Two types of error are distinguished: type I error and type II error.

The goal of the test is to determine if the null hypothesis can be rejected. A statistical test can either reject or fail to reject a null hypothesis, but never prove it true.

2.1 Type I error

A type I error, also known as an error of the first kind, occurs when the null hypothesis (H0) is true, but is rejected. It is asserting something that is absent, a false hit. A type I error may be compared with a so-called false positive (a result that indicates that a given condition is present when it actually is not present) in tests where a single condition is tested for.

The type I error rate or significance level is the probability of rejecting the null hypothesis given that it is true.[4][5] It is denoted by the Greek letter α (alpha) and is also called the alpha level. By convention, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the null hypothesis.[4]

Type I errors are philosophically a focus of skepticism and Occam's razor. A type I error occurs when we believe a falsehood.[6] In terms of folk tales, an investigator may be "crying wolf" without a wolf in sight (raising a false alarm) (H0: no wolf).
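The operational meaning of the significance level can be checked empirically: if the null hypothesis is true and the test is run at α = 0.05, roughly 5% of repeated experiments will reject it anyway. A minimal simulation sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

# Simulate many experiments in which the null hypothesis is TRUE:
# samples come from a normal distribution with mean exactly 0,
# and we test H0: mean = 0 at significance level alpha = 0.05.
rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
sample_size = 30

false_positives = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=0.0, scale=1.0, size=sample_size)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:          # rejecting a true H0 is a type I error
        false_positives += 1

# The empirical type I error rate should come out close to alpha (~0.05).
print(f"empirical type I error rate: {false_positives / n_experiments:.3f}")
```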

2.2 Type II error

A type II error, also known as an error of the second kind, occurs when the null hypothesis is false, but erroneously fails to be rejected. It is failing to assert what is present, a miss. A type II error may be compared with a so-called false negative (where an actual 'hit' was disregarded by the test and seen as a 'miss') in a test checking for a single condition with a definitive result of true or false. A type II error is committed when we fail to believe a truth.[6] In terms of folk tales, an investigator may fail to see the wolf ("failing to raise an alarm"). Again, H0: no wolf.

The rate of the type II error is denoted by the Greek letter β (beta) and related to the power of a test (which equals 1 − β).

What we actually call type I or type II error depends directly on the null hypothesis. Negation of the null hypothesis causes type I and type II errors to switch roles.
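For a one-sided z-test with known standard deviation, β (and hence the power 1 − β) has a closed form in terms of α, the true mean, and the sample size. A minimal sketch, assuming SciPy and an illustrative effect size of 0.5 chosen here purely for demonstration:

```python
from scipy.stats import norm

# One-sided z-test of H0: mu = 0 against H1: mu > 0,
# with known standard deviation sigma and sample size n.
alpha, mu_true, sigma, n = 0.05, 0.5, 1.0, 30

# Reject H0 when the sample mean exceeds this critical value.
critical = norm.ppf(1 - alpha) * sigma / n ** 0.5

# beta = probability of NOT rejecting H0 when the true mean is mu_true.
beta = norm.cdf((critical - mu_true) / (sigma / n ** 0.5))

print(f"type II error rate beta = {beta:.3f}, power = {1 - beta:.3f}")
# Increasing n shrinks beta at the same alpha, which is why enlarging
# the sample is the only way to reduce both error rates at once.
```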

2.3 Table of error types

Tabularised relations between truth/falseness of the null hypothesis and outcomes of the test:[1]

H0 is true and is rejected: type I error (false positive).
H0 is true and is not rejected: correct inference (true negative).
H0 is false and is rejected: correct inference (true positive).
H0 is false and is not rejected: type II error (false negative).

3 Examples

3.1 Example 1

Hypothesis: "Adding water to toothpaste protects against cavities."

Null hypothesis (H0): "Adding water to toothpaste has no effect on cavities."

This null hypothesis is tested against experimental data with a view to nullifying it with evidence to the contrary.

A type I error occurs when detecting an effect (adding water to toothpaste protects against cavities) that is not present. The null hypothesis is true (i.e., it is true that adding water to toothpaste has no effect on cavities), but this null hypothesis is rejected based on bad experimental data.

3.2 Example 2

Hypothesis: "Adding fluoride to toothpaste protects against cavities."

Null hypothesis (H0): "Adding fluoride to toothpaste has no effect on cavities."

This null hypothesis is tested against experimental data with a view to nullifying it with evidence to the contrary.

A type II error occurs when failing to detect an effect (adding fluoride to toothpaste protects against cavities) that is present. The null hypothesis is false (i.e., adding fluoride is actually effective against cavities), but the experimental data is such that the null hypothesis cannot be rejected.

3.3 Example 3

Hypothesis: "The evidence produced before the court proves that this man is guilty."

Null hypothesis (H0): "This man is innocent."

A type I error occurs when convicting an innocent person (a miscarriage of justice). A type II error occurs when letting a guilty person go free (an error of impunity).

A positive correct outcome occurs when convicting a guilty person. A negative correct outcome occurs when letting an innocent person go free.

3.4 Example 4

Hypothesis: "Treatment A has a better effect than treatment B."

Null hypothesis (H0): "Treatment A has no better effect than treatment B."

Errors of this kind may arise from failures in the experimental procedure (such as administering the wrong treatment), from outdated treatments, or from faulty measuring instruments. A type I error leads experimenters to conclude that treatment A has a better effect than treatment B when it does not; a type II error is the reverse: concluding that treatment A is no better than treatment B when it actually is.

3.5 Theory

From the Bayesian point of view, a type I error is one that looks at information that should not substantially change one's prior estimate of probability, but does. A type II error is one that looks at information which should change one's estimate, but does not. (Though the null hypothesis is not quite the same thing as one's prior estimate, it is, rather, one's pro forma prior estimate.)

Hypothesis testing is the art of testing whether a variation between two sample distributions can be explained by chance or not. In many practical applications type I errors are more delicate than type II errors. In these cases, care is usually focused on minimizing the occurrence of this statistical error. Suppose the probability for a type I error is 1%; then there is a 1% chance that the observed variation is not true. This is called the level of significance, denoted with the Greek letter α (alpha). While 1% might be an acceptable level of significance for one application, a different application can require a very different level. For example, the standard goal of six sigma is to achieve precision to 4.5 standard deviations above or below the mean. This means that only 3.4 parts per million are allowed to be deficient in a normally distributed process.
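The 3.4 parts-per-million figure is simply the one-sided normal tail probability beyond 4.5 standard deviations, which can be checked with a short computation (a sketch assuming SciPy):

```python
from scipy.stats import norm

# One-sided tail probability beyond 4.5 standard deviations
# of a normal distribution, expressed in parts per million.
tail = norm.sf(4.5)                 # survival function, 1 - cdf
print(f"{tail * 1e6:.1f} defective parts per million")  # ~3.4 ppm
```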

4 Etymology

In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population"[7]p. 1: and, as Florence Nightingale David remarked, "it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself".[8]

They identified "two sources of error", namely:

(a) the error of rejecting a hypothesis that should have been accepted, and
(b) the error of accepting a hypothesis that should have been rejected.[7]p. 31

In 1930, they elaborated on these two sources of error, remarking that:

...in testing hypotheses two considerations must be kept in view, (1) we must be able to reduce the chance of rejecting a true hypothesis to as low a value as desired; (2) the test must be so devised that it will reject the hypothesis tested when it is likely to be false.[9]

In 1933, they observed that these "problems are rarely presented in such a form that we can discriminate with certainty between the true and false hypothesis" (p. 187). They also noted that, in deciding whether to accept or reject a particular hypothesis amongst a "set of alternative hypotheses" (p. 201), H1, H2, ..., it was easy to make an error:

...[and] these errors will be of two kinds:
(I) we reject H0 [i.e., the hypothesis to be tested] when it is true,
(II) we accept H0 when some alternative hypothesis HA or H1 is true.[10]p. 187

(There are various notations for the alternative.)

In all of the papers co-written by Neyman and Pearson the expression H0 always signifies "the hypothesis to be tested".

In the same paper[10]p. 190 they call these two sources of error, errors of type I and errors of type II respectively.

5 Related terms

5.1 Null hypothesis

Main article: Null hypothesis

It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" concerning the observed phenomena of the world (or its inhabitants) can be supported. The results of such testing determine whether a particular set of results agrees reasonably (or does not agree) with the speculated hypothesis.

On the basis that it is always assumed, by statistical convention, that the speculated hypothesis is wrong, and the so-called "null hypothesis" that the observed phenomena simply occur by chance (and that, as a consequence, the speculated agent has no effect), the test will determine whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p. 19)): because it is this hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that data support the "alternative hypothesis" (which is the original speculated one).

The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression H0 has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis", a statement that the results in question have arisen through chance. This is not necessarily the case; the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution,' of which the test of significance is the solution."[11] As a consequence of this, in experimental science the null hypothesis is generally a statement that a particular treatment has no effect; in observational science, it is that there is no difference between the value of a particular measured variable and that of an experimental prediction.
5.2 Statistical significance

The extent to which the test in question shows that the "speculated hypothesis" has (or has not) been nullified is called its significance level; and the higher the significance level, the less likely it is that the phenomena in question could have been produced by chance alone. British statistician Sir Ronald Aylmer Fisher (1890–1962) stressed that the "null hypothesis":

...is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis. (Fisher, 1935, p. 19)
6 Application domains

Statistical tests always involve a trade-off between:

1. the acceptable level of false positives (in which a non-match is declared to be a match) and
2. the acceptable level of false negatives (in which an actual match is not detected).

A threshold value can be varied to make the test more restrictive or more sensitive, with the more restrictive tests increasing the risk of rejecting true positives, and the more sensitive tests increasing the risk of accepting false positives.
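A minimal sketch of this threshold trade-off, using two invented, overlapping score distributions (the numbers are hypothetical; it assumes NumPy):

```python
import numpy as np

# Hypothetical detector scores: non-matches score lower than matches
# on average, but the two distributions overlap.
rng = np.random.default_rng(1)
non_matches = rng.normal(0.0, 1.0, 100_000)   # condition absent
matches = rng.normal(2.0, 1.0, 100_000)       # condition present

for threshold in (0.5, 1.0, 1.5):
    false_positive_rate = np.mean(non_matches > threshold)
    false_negative_rate = np.mean(matches <= threshold)
    print(f"threshold {threshold}: "
          f"FP rate {false_positive_rate:.3f}, "
          f"FN rate {false_negative_rate:.3f}")
# Raising the threshold (more restrictive) trades false positives
# for false negatives; lowering it (more sensitive) does the opposite.
```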

6.1 Inventory control

An automated inventory control system that rejects high-quality goods of a consignment commits a type I error, while a system that accepts low-quality goods commits a type II error.

6.2 Computers

The notions of false positives and false negatives have a wide currency in the realm of computers and computer applications, as follows.

6.2.1 Computer security

Main articles: computer security and computer insecurity

Security vulnerabilities are an important consideration in the task of keeping computer data safe, while maintaining access to that data for appropriate users. Moulton (1983) stresses the importance of:

avoiding the type I errors (or false negatives) that classify authorized users as imposters, and
avoiding the type II errors (or false positives) that classify imposters as authorized users.

6.2.2 Spam filtering

A false positive occurs when spam filtering or spam blocking techniques wrongly classify a legitimate email message as spam and, as a result, interfere with its delivery. While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task.

A false negative occurs when a spam email is not detected as spam, but is classified as non-spam. A low number of false negatives is an indicator of the efficiency of spam filtering.

6.2.3 Malware

The term false positive is also used when antivirus software wrongly classifies an innocuous file as a virus. The incorrect detection may be due to heuristics or to an incorrect virus signature in a database. Similar problems can occur with antitrojan or antispyware software.

6.2.4 Optical character recognition

Detection algorithms of all kinds often create false positives. Optical character recognition (OCR) software may detect an "a" where there are only some dots that appear to be an "a" to the algorithm being used.

6.3 Security screening

Main articles: explosive detection and metal detector

False positives are routinely found every day in airport security screening, which are ultimately visual inspection systems. The installed security alarms are intended to prevent weapons being brought onto aircraft; yet they are often set to such high sensitivity that they alarm many times a day for minor items, such as keys, belt buckles, loose change, mobile phones, and tacks in shoes.

The ratio of false positives (identifying an innocent traveller as a terrorist) to true positives (detecting a would-be terrorist) is, therefore, very high; and because almost every alarm is a false positive, the positive predictive value of these screening tests is very low.

The relative cost of false results determines the likelihood that test creators allow these events to occur. As the cost of a false negative in this scenario is extremely high (not detecting a bomb being brought onto a plane could result in hundreds of deaths) whilst the cost of a false positive is relatively low (a reasonably simple further inspection), the most appropriate test is one with a low statistical specificity but high statistical sensitivity (one that allows a high rate of false positives in return for minimal false negatives).

6.4 Biometrics

Biometric matching, such as for fingerprint recognition, facial recognition or iris recognition, is susceptible to type I and type II errors. The null hypothesis is that the input does identify someone in the searched list of people, so:

the probability of type I errors is called the "false reject rate" (FRR) or false non-match rate (FNMR), while
the probability of type II errors is called the "false accept rate" (FAR) or false match rate (FMR).[12]

If the system is designed to rarely match suspects then the probability of type II errors can be called the "false alarm rate". On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level.

6.5 Medical screening

In the practice of medicine, there is a significant difference between the applications of screening and testing.

Screening involves relatively cheap tests that are given to large populations, none of whom manifest any clinical indication of disease (e.g., Pap smears).

Testing involves far more expensive, often invasive, procedures that are given only to those who manifest some clinical indication of disease, and are most often applied to confirm a suspected diagnosis.

For example, most states in the USA require newborns to be screened for phenylketonuria and hypothyroidism, among other congenital disorders. Although they display a high rate of false positives, the screening tests are considered valuable because they greatly increase the likelihood of detecting these disorders at a far earlier stage.[Note 1]

The simple blood tests used to screen possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise tests to determine whether a person is actually infected with either of these viruses.

Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in the world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. The lowest rate in the world is in the Netherlands, 1%. The lowest rates are generally in Northern Europe, where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test).

The ideal population screening test would be cheap, easy to administer, and produce zero false negatives, if possible. Such tests usually produce more false positives, which can subsequently be sorted out by more sophisticated (and expensive) testing.

6.6 Medical testing

False negatives and false positives are significant issues in medical testing. False negatives may provide a falsely reassuring message to patients and physicians that disease is absent, when it is actually present. This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to only detect limitations of coronary artery blood flow due to advanced stenosis.

False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false.

False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false. The probability that an observed positive result is a false positive may be calculated using Bayes' theorem.
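Both situations reduce to a direct application of Bayes' theorem. A minimal sketch using the rates quoted above (the perfect specificity and sensitivity values are simplifying assumptions added purely for illustration):

```python
def false_share_of_negatives(prevalence, fnr, specificity):
    """Fraction of negative results that are false negatives (Bayes)."""
    false_neg = prevalence * fnr
    true_neg = (1 - prevalence) * specificity
    return false_neg / (false_neg + true_neg)

def false_share_of_positives(prevalence, fpr, sensitivity):
    """Fraction of positive results that are false positives (Bayes)."""
    false_pos = (1 - prevalence) * fpr
    true_pos = prevalence * sensitivity
    return false_pos / (false_pos + true_pos)

# Common condition: 70% prevalence, 10% false negative rate
# (perfect specificity assumed for illustration): ~19% of all
# negative results are false.
print(false_share_of_negatives(0.70, 0.10, specificity=1.0))

# Rare condition: 1-in-a-million prevalence, 1-in-10,000 false
# positive rate (perfect sensitivity assumed): ~99% of all
# positive results are false.
print(false_share_of_positives(1e-6, 1e-4, sensitivity=1.0))
```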

6.7 Paranormal investigation

The notion of a false positive is common in cases of paranormal or ghost phenomena seen in images and such, when there is another plausible explanation. When observing a photograph, recording, or some other evidence that appears to have a paranormal origin, a false positive in this usage is a disproven piece of media evidence (image, movie, audio recording, etc.) that actually has a natural explanation.[Note 2]

7 See also

Binary classification
Detection theory
Egon Pearson
False positive paradox
Family-wise error rate
Information retrieval performance measures
Neyman–Pearson lemma
Null hypothesis
Probability of a hypothesis for Bayesian inference
Precision and recall
Prosecutor's fallacy
Prozone phenomenon
Receiver operating characteristic
Sensitivity and specificity
Statisticians and engineers cross-reference of statistical terms
Testing hypotheses suggested by the data
Type III error

8 Notes

[Note 1] In relation to this newborn screening, recent studies have shown that there are more than 12 times more false positives than correct screens (Gambrill, 2006).

[Note 2] Several sites provide examples of false positives, including The Atlantic Paranormal Society (TAPS) and Moorestown Ghost Research.

9 References

[1] Sheskin, David (2004). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press. p. 54. ISBN 1584884401.

[2] Peck, Roxy and Jay L. Devore (2011). Statistics: The Exploration and Analysis of Data. Cengage Learning. pp. 464–465. ISBN 0840058012.

[3] "Cisco Secure IPS: Excluding False Positive Alarms", http://www.cisco.com/en/US/products/hw/vpndevc/ps4077/products_tech_note09186a008009404e.shtml

[4] Lindenmayer, David; Burgman, Mark A. (2005). "Monitoring, assessment and indicators". Practical Conservation Biology (PAP/CDR ed.). Collingwood, Victoria, Australia: CSIRO Publishing. pp. 401–424. ISBN 0-643-09089-4.

[5] Schlotzhauer, Sandra (2007). Elementary Statistics Using JMP (SAS Press) (1st ed.). Cary, NC: SAS Institute. pp. 166–423. ISBN 1-599-94375-1.

[6] Shermer, Michael (2002). The Skeptic Encyclopedia of Pseudoscience (2 volume set). ABC-CLIO. p. 455. ISBN 1-57607-653-9. Retrieved 10 January 2011.

[7] Neyman, J.; Pearson, E.S. (1967) [1928]. "On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I". Joint Statistical Papers. Cambridge University Press. pp. 1–66.

[8] David, F.N. (1949). Probability Theory for Statistical Methods. Cambridge University Press. p. 28.

[9] Pearson, E.S.; Neyman, J. (1967) [1930]. "On the Problem of Two Samples". Joint Statistical Papers. Cambridge University Press. p. 100.

[10] Neyman, J.; Pearson, E.S. (1967) [1933]. "The testing of statistical hypotheses in relation to probabilities a priori". Joint Statistical Papers. Cambridge University Press. pp. 186–202.

[11] Fisher, R.A. (1966). The Design of Experiments (8th ed.). Edinburgh: Hafner.

[12] Williams, G.O. (1996). "Iris Recognition Technology" (PDF). debut.cis.nctu.edu.tw. p. 56. Retrieved 2010-05-23. "crossover error rate (that point where the probabilities of False Reject (Type I error) and False Accept (Type II error) are approximately equal) is .00076%"

Betz, M.A. & Gabriel, K.R., "Type IV Errors and Analysis of Simple Effects", Journal of Educational Statistics, Vol. 3, No. 2, (Summer 1978), pp. 121–144.

David, F.N., "A Power Function for Tests of Randomness in a Sequence of Alternatives", Biometrika, Vol. 34, Nos. 3/4, (December 1947), pp. 335–339.

Fisher, R.A., The Design of Experiments, Oliver & Boyd (Edinburgh), 1935.

Gambrill, W., "False Positives on Newborns' Disease Tests Worry Parents", Health Day, (5 June 2006). 34471.html

Kaiser, H.F., "Directional Statistical Decisions", Psychological Review, Vol. 67, No. 3, (May 1960), pp. 160–167.

Kimball, A.W., "Errors of the Third Kind in Statistical Consulting", Journal of the American Statistical Association, Vol. 52, No. 278, (June 1957), pp. 133–142.

Lubin, A., "The Interpretation of Significant Interaction", Educational and Psychological Measurement, Vol. 21, No. 4, (Winter 1961), pp. 807–817.

Marascuilo, L.A. & Levin, J.R., "Appropriate Post Hoc Comparisons for Interaction and Nested Hypotheses in Analysis of Variance Designs: The Elimination of Type-IV Errors", American Educational Research Journal, Vol. 7, No. 3, (May 1970), pp. 397–421.

Mitroff, I.I. & Featheringham, T.R., "On Systemic Problem Solving and the Error of the Third Kind", Behavioral Science, Vol. 19, No. 6, (November 1974), pp. 383–393.

Mosteller, F., "A k-Sample Slippage Test for an Extreme Population", The Annals of Mathematical Statistics, Vol. 19, No. 1, (March 1948), pp. 58–65.

Moulton, R.T., "Network Security", Datamation, Vol. 29, No. 7, (July 1983), pp. 121–127.

Raiffa, H., Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Addison-Wesley (Reading), 1968.

10 External links

"Bias and Confounding", presentation by Nigel Paneth, Graduate School of Public Health, University of Pittsburgh
