
Source: http://www.experiment-resources.com/type-I-error.html

EXPERIMENTAL ERRORS: TYPE I ERROR - TYPE II ERROR


by Martyn Shuttleworth (2008)

Whilst many will not have heard of Type I or Type II errors, most people will be familiar with the terms false positive and false negative, mainly as medical terms. A patient might take an HIV test that promises a 99.9% accuracy rate. This means that 1 in every 1000 tests could give a false positive, informing a patient that they have the virus when they do not. Conversely, the test could also give a false negative, giving an HIV-positive patient the all-clear. This is why most medical tests require duplicate samples, to stack the odds up favorably: a one in one thousand chance becomes a one in one million chance if two independent samples are tested. In any scientific process there is no such ideal as total proof or total rejection, and researchers must, by necessity, work with probabilities. Whatever level of proof is reached, there is still the possibility that the results are wrong. This could take the form of a false rejection, or false acceptance, of the null hypothesis.
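The duplicate-sampling arithmetic above can be sketched in a few lines; the 99.9% figure is the article's illustrative number, not a property of any real test, and the calculation assumes the two tests fail independently:

```python
# Probability that a single test gives a false positive (1 in 1000).
p_false_positive = 1 / 1000

# With two independent samples, a patient is only misdiagnosed when
# BOTH tests give a false positive.
p_both_false = p_false_positive ** 2

print(p_both_false)  # about 1e-06, i.e. 1 in 1,000,000
```

The same reasoning is why independence matters: if both samples are processed by the same faulty batch of reagents, the errors are correlated and the combined chance is far worse than one in a million.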

HOW DOES THIS TRANSLATE TO SCIENCE?


TYPE I ERROR
A Type I error, often referred to as a false positive, occurs when the null hypothesis is incorrectly rejected in favor of the alternative. In the case above, the null hypothesis refers to the natural state of things, stating that the patient is not HIV positive. The alternative hypothesis states that the patient does carry the virus. A Type I error would indicate that the patient has the virus when they do not: a false rejection of the null.

TYPE II ERROR
A Type II error is the opposite of a Type I error and is the false acceptance of the null hypothesis. A Type II error, also known as a false negative, would imply that the patient is free of HIV when they are not, a dangerous diagnosis. In most fields of science, Type II errors are not seen as being as problematic as Type I errors: with a Type II error, a chance to reject the null hypothesis was lost, and no conclusion is inferred from a non-rejected null. A Type I error is more serious, because the null hypothesis has been wrongly rejected. Medicine, however, is one exception; telling a patient that they are free of disease, when they are not, is potentially dangerous.

REPLICATION
This is the reason why scientific experiments must be replicable, and why other scientists must be able to follow the exact methodology. Even if a stringent level of proof is reached, where P < 0.01 (the probability of a chance result is less than 1%), on average one result in every 100 such experiments will still be false. To a certain extent, duplicate or triplicate samples reduce the chance of error, but they may still miss it if the error-causing variable is present in all samples. If, however, other researchers using the same equipment replicate the experiment and find the same results, the chance of 5 or 10 experiments all giving false results is vanishingly small. This is how science regulates, and minimizes, the potential for Type I and Type II errors. Of course, in non-replicable experiments and in medical diagnosis, replication is not always possible, so the possibility of Type I and Type II errors is always a factor. One area that is guilty of ignoring Type I and Type II errors is the law court, where juries are not told that fingerprint and DNA tests may produce false results. There have been many documented miscarriages of justice involving these tests. Many courts will now not accept these tests alone as proof of guilt, and require other evidence.
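The "vanishingly small" chance of repeated false results can be made concrete, under the assumption that the replications are genuinely independent:

```python
# Probability of a single false positive at the P < 0.01 level.
alpha = 0.01

# Probability that 5, then 10, independent replications ALL give a
# false positive (assuming the experiments are independent).
for n in (5, 10):
    print(n, alpha ** n)  # 5 -> 1e-10, 10 -> 1e-20 (approximately)
```

This is the same multiplication rule as the duplicate medical samples earlier: each independent replication multiplies the false-positive probability by another factor of alpha.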

TYPE III ERRORS


Many statisticians now recognize a third type of error, Type III, in which the null hypothesis is rejected for the wrong reason. In an experiment, a researcher might postulate a hypothesis and perform research. After analyzing the results statistically, the null is rejected. The problem is that there may indeed be some relationship between the variables, but for a different reason than the one stated in the hypothesis: an unknown process may underlie the relationship.

CONCLUSION
Both Type I and Type II errors are factors that every scientist and researcher must take into account. Since replication can minimize the chances of an inaccurate result being accepted, this is one of the major reasons why research should be replicable. Many scientists do not accept quasi-experiments, because they are difficult to replicate and analyze.




EXPERIMENTAL ERROR
by Siddharth Kalla (2009)

Experimental error is unavoidable during the conduct of any experiment, mainly because of the falsifiability principle of the scientific method. It is therefore important to take steps to minimize errors, and to understand them, in order to better interpret the results of the experiment. This entails a study of the type and degree of errors in experimentation. Statistical tests are subject to experimental errors that can be classified as either Type I or Type II errors. It is important to study both in order to manage and report error, so that the conclusion of an experiment can be rightly interpreted.

TYPE I ERROR
The Type I error (α-error, false positive) occurs when the null hypothesis (H0) is rejected in favor of the research hypothesis (H1) when, in reality, the null is correct. This can be understood in terms of medical tests. For example, suppose there is a test used to detect a disease in a person. If a Type I error occurs, the test says the person is suffering from the disease even though he is healthy.

TYPE II ERROR
Type II errors (β-errors, false negatives), on the other hand, occur when the null hypothesis is accepted even though the research hypothesis is in fact correct. In the same example of a medical test for a disease, a Type II error means that the test fails to detect the disease in a person who is actually suffering from it.

HYPOTHESIS TESTING
                 Scientific Conclusion
Truth            H0 Accepted                      H1 Accepted
H0 correct       Correct conclusion!              Type I error (false positive)
H1 correct       Type II error (false negative)   Correct conclusion!

In the case of a Type I error, the research hypothesis is accepted even though the null hypothesis is correct: a false positive that leads to the rejection of a null hypothesis that may in fact be true. When a Type II error occurs, the research hypothesis is correct but is not detected, and the conclusion is missed; in terms of the null hypothesis, this kind of error might lead to accepting the null hypothesis when in fact it is false. The significance level refers only to the Type I error. This is because we ask the question "What is the probability that the correlation we observed arose purely by chance?", and when the answer falls below the significance level (typically 5% or 1%), we state that the result is unlikely to be a chance process and that the parameters under study are indeed related.
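As a sketch of how the significance level caps the Type I error rate, the following simulation (with made-up sample sizes, using a simple two-sided z-test rather than any particular named procedure) repeatedly tests a null hypothesis that is true by construction; the null is wrongly rejected in roughly 5% of runs:

```python
import random
import statistics

random.seed(42)

def z_test_rejects(sample, critical=1.96):
    """Reject H0 (true mean = 0) when |z| exceeds the two-sided 5%
    critical value of the standard normal distribution."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return abs(mean / se) > critical

trials = 10_000
# H0 is true in every trial: samples are drawn from a normal
# distribution with mean 0, so EVERY rejection is a Type I error.
false_positives = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(50)])
    for _ in range(trials)
)

print(false_positives / trials)  # close to 0.05
```

The observed rate hovers near the 5% significance level because that is exactly what alpha promises: the long-run frequency of false rejections when the null is true.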

REASON FOR ERRORS


Scientific experiments involve a different type of error analysis than a statistical experiment. In science, experimental errors may be caused by human inaccuracies, like a wrong experimental setup in a science experiment or choosing the wrong set of people for a social experiment. Systematic error refers to error which is inherent in the system of experimentation. For example, if you want to calculate the value of acceleration due to gravity by swinging a pendulum, then your result will invariably be affected by air resistance, friction at the point of suspension, and the finite mass of the thread. Random errors occur because it is impossible in practice to achieve infinite precision. Since the value is higher or lower in a random fashion, averaging several readings will reduce random errors.
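The pendulum example can be made concrete. A simple pendulum's period is T = 2π√(L/g), so g is estimated as g = 4π²L/T²; effects like air resistance systematically lengthen the measured period and so bias the estimate of g downwards. The numbers below are illustrative:

```python
import math

def g_from_pendulum(length_m, period_s):
    """Estimate gravitational acceleration from a simple pendulum
    using T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2."""
    return 4 * math.pi ** 2 * length_m / period_s ** 2

# A 1 m pendulum with a measured period of 2.006 s:
print(round(g_from_pendulum(1.0, 2.006), 2))        # 9.81

# If drag stretches every measured period by 1%, the estimate of g
# is biased low by about 2% -- a systematic, not random, effect:
print(round(g_from_pendulum(1.0, 2.006 * 1.01), 2)) # 9.62
```

Because the bias acts in the same direction on every reading, taking more measurements does not remove it; only modeling or eliminating the drag does.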

SYSTEMATIC ERROR
by Siddharth Kalla (2009)

Systematic error is error that shifts measurements away from the true value in a consistent, repeatable way. As opposed to random errors, systematic errors are easier to correct. There are many types of systematic error, and a researcher needs to be aware of them in order to offset their influence.

Systematic error in the physical sciences commonly occurs when the measuring instrument has a zero error: the instrument shows a non-zero value when it should read zero. For example, a voltmeter might show a reading of 1 volt even when it is disconnected from any electrical source. This means the systematic error is 1 volt, and all measurements taken with this voltmeter will be one volt higher than the true value. This type of error can be offset by simply subtracting the zero error: if the voltmeter shows a reading of 53 volts, the actual value is 52 volts. In this case, the systematic error is a constant value.

Sometimes the measuring instrument itself is faulty, which leads to a systematic error. For example, if your stopwatch shows 100 seconds for an actual time of 99 seconds, everything you measure with this stopwatch will be stretched by the same factor, and a systematic error is induced in your measurements. In this case, the systematic error is proportional to the measurement.

In many experiments there are systematic errors inherent to the experiment itself, which means that even if all the instruments were perfect, there would still be an error. For example, in an experiment to calculate acceleration due to gravity using the length and time period of a simple pendulum, the size of the pendulum bob, air friction, slight movement of the support, and so on all affect the calculated value. These systematic errors are inherent to the experiment and need to be accounted for in an approximate manner.

Many systematic errors cannot be removed by simply taking a large number of readings and averaging them. In the case of our faulty voltmeter, even if a hundred readings are taken, they will all cluster near 53 volts instead of the actual 52 volts. In such cases, the measuring instrument should be calibrated before starting the experiment, which will reveal any zero error or other systematic error in the instrument. Systematic errors can also be produced by faulty human observation or by changes in the environment during the experiment, and these are difficult to get rid of.
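The two corrections described above, subtracting a constant zero error and rescaling a proportional error, can be sketched as follows; the voltmeter and stopwatch numbers are the article's illustrative values:

```python
def correct_zero_error(reading, zero_error):
    """Constant systematic error: subtract the offset the instrument
    shows when it should read zero."""
    return reading - zero_error

def correct_scale_error(reading, shown, actual):
    """Proportional systematic error: rescale by the ratio of a known
    actual value to what the instrument showed for it (calibration)."""
    return reading * actual / shown

# The voltmeter that reads 1 V when disconnected:
print(correct_zero_error(53, 1))          # 52

# The stopwatch that shows 100 s for an actual 99 s:
print(correct_scale_error(100, 100, 99))  # 99.0
```

Both corrections require a calibration step first: measuring a known reference (zero input, or a known duration) to determine the offset or scale factor before the real experiment begins.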

RANDOM ERROR
by Siddharth Kalla (2009)

A random error, as the name suggests, is random in nature and very difficult to predict. It occurs because there is a very large number of parameters, beyond the control of the experimenter, that may interfere with the results of the experiment. Random errors are caused by sources that are not immediately obvious, and it may take a long time to trace their origin.

Random error is also called statistical error, because it can be reduced in a measurement by statistical means, precisely because it is random in nature. Unlike systematic errors, random errors can be offset by simply averaging several measurements of the same quantity. Random errors do not follow a fixed pattern, such as being proportional to the measured quantity or remaining constant over many measurements. The reason why random errors can be handled by averaging is that they have a zero expected value: they are scattered around the true mean value, so the arithmetic mean of the errors is expected to be zero.

There are many possible sources of random error, and they depend on the type of experiment and the measuring instruments being used. For example, a biologist studying the reproduction of a particular strain of bacterium might encounter random errors due to slight variations of temperature or light in the room. When the readings are spread over a period of time, she may remove these random variations by averaging her results. A random error can also arise from the measuring instrument and the way it responds to changes in the surroundings; a spring balance, for example, might show some variation in measurement due to fluctuations in temperature, conditions of loading and unloading, and so on. A measuring instrument with higher precision shows smaller fluctuations in its measurements.

Random errors are present in all experiments, so the researcher should be prepared for them. Unlike systematic errors, random errors are not predictable, which makes them difficult to detect, but easier to remove, since they can be reduced by statistical methods such as averaging.
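A quick simulation illustrates why averaging works on random error but not on systematic error; all numbers here are illustrative, reusing the faulty-voltmeter figures from the systematic-error section:

```python
import random
import statistics

random.seed(0)

true_value = 52.0
systematic_offset = 1.0  # like the voltmeter's constant zero error

# 1000 readings carrying random noise AND the constant offset.
readings = [true_value + systematic_offset + random.gauss(0, 0.5)
            for _ in range(1000)]

mean = statistics.fmean(readings)
print(round(mean, 2))

# Averaging removes most of the random scatter (the mean lands far
# closer to 53.0 than a typical single noisy reading does), but the
# 1.0 V systematic offset survives in the mean untouched.
```

This is the zero-expected-value property in action: the noise terms cancel each other out in the mean, while the offset, being the same in every reading, cannot cancel.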

