
Uncertainties

What is uncertain?
Virtually every measurement or observation in physics will be uncertain to some degree.

Why is everything uncertain?


You should normally use your common sense to work out possible causes of uncertainty. Examples might include:

Temperature. Let us say you are measuring the length of something. You will probably do this by comparing it to a ruler or callipers or some such. But virtually every type of material expands when it gets hotter, so your ruler will probably have expanded to a different length from when it was made.

Wear. Let us imagine that you are measuring a voltage with a multimeter. The multimeter is made of magnets, pieces of metal and springs. All of these are made in a factory, usually by a stamp which cuts them from larger pieces of metal. Every time the stamp produces a piece, a thin layer of atoms is probably scraped off its surface, so each bit of metal produced by the stamp is probably marginally larger than the one that went before. This will affect the voltages measured by your multimeter.

Purity. All your measuring equipment is made from materials produced by some chemical process. As any chemist will tell you, it is not possible to produce a totally pure sample of anything. There will always be at least a few atoms there which do not belong, and these will affect your apparatus.

Thermal noise. All the atoms from which your apparatus is made are vibrating around due to their temperature (unless your apparatus is at absolute zero, which is impossible). At any given time, they will not be in their average places.

Can't you fix this?


You can always reduce the uncertainty, by (for example) controlling the temperature, using more expensive equipment, cooling everything down, and so on. But this comes at a price. Equipment needed to measure lengths to 1 mm accuracy costs a few cents. Measuring to 0.1 mm needs equipment that costs a few dollars, measuring to 1 micron needs equipment costing hundreds of dollars, and measuring to 10^-18 m, while possible, requires equipment costing hundreds of millions of dollars.

Why does the uncertainty matter?


Engineering answer:
When you build something, the bits need to fit together or work together. If you are making a car and the door parts come out a bit too big, the door will no longer fit. If you are building a computer and the battery delivers too low a voltage, the microprocessor will not work. As an engineer, you will need to work out how much variation in the parts you can tolerate and still produce a functional final product, bearing in mind that parts with bigger uncertainties are almost certainly cheaper. If you are building components to sell to someone else, the lower the uncertainty in their production, the more desirable they will be to customers, and hence the more money you can ask for them. Example: in the 1950s, the Japanese had a reputation for producing poor-quality goods. They started a campaign to decrease their manufacturing uncertainties, and were so successful that they could routinely produce parts with uncertainties of half or less of those achieved by most of their competitors. Before long, cars made using these parts had achieved a reputation for reliability and performance that allowed them to steal the markets of most of their competitors.

Science Answer
Once upon a time, long ago, there were exciting scientific discoveries that could be made with crude equipment. Unfortunately, you have been born several centuries too late to make these discoveries. All the easy discoveries have been made. To discover something new, you will have to be pushing the boundaries. This could mean measuring something that nobody has ever measured before, or measuring something that has been measured before but with smaller uncertainties. Example: in the early 1990s astronomers were able to measure the speeds of stars with a precision of around 100 m/s. By 1995 they had improved this to about 10 m/s, which allowed them to detect the wiggles caused by planets orbiting these stars. This opened up the whole study of extrasolar planets.

Probability distribution function


Imagine that you measure something repeatedly. Would you expect to get the same answer multiple times? Or if you are building a whole bunch of objects, would you expect them all to come out the same?

Independent Measurements
If there is uncertainty in your measurement (as is almost always the case), then this will depend on how you make your measurement. If your uncertainty is due to your measuring equipment and you use the same equipment in all measurements, then your measurements are not independent and you probably will get the same answer (which doesn't mean it is right). If the uncertainty is because something depends on the temperature, then a whole bunch of measurements made on a hot summer's day will probably come out about the same, but would be different from results obtained in winter. So once again, your measurements are not independent. Truly independent measurements should be made in conditions which allow all the various sources of uncertainty to vary.

Distribution function
If you get different measurements every time you make a reading, how can you describe what you get? The most powerful way is to plot a distribution function. This is a histogram of the different measurements. You break up the range over which measurements occur into bins, and for each measurement, see which bin it falls into and add one to that bin. You then plot a graph of bin against the number of measurements that fell into that bin.

In Mathematica, type all your measurements into an array (a list separated by commas and enclosed in curly brackets) and give it some name (in this case I've called it data1) as follows:

data1 = {3.85693, 4.90552, 4.11224, 3.33525, 3.99793, 4.80498, 4.56065, 3.92403, 5.07501, 4.24359, 5.49469, 7.33931, 4.00581, 4.07483, 2.60917, 3.23301, 4.7652, 4.97337, 4.04656, 5.36802}

And then plot it using the Histogram command, as follows:

Histogram[data1, {0.5}, AxesLabel -> {"Value", "Number of Measurements"}, AxesOrigin -> {0.0, 0.0}, PlotRange -> {{0.0, 9.0}, {0.0, 6}}]

(N.B. The above works in Mathematica version 7; in version 6 you need to first type Needs["Histograms`"] to load the histograms package. Version 6 is the one currently installed on the Information Commons PCs; the Macs have version 7.)

What do these distribution functions typically look like? It is usually assumed that they look like Gaussian (also known as normal or bell curve) functions. Here is what one might look like:

You can see that in this case, measurements are typically around 4, but some range as low as 2.5 and some higher than 7. The theoretical distribution function is what you'd get if you made an infinite number of independent measurements, and might look like:

You can see that in most cases, you have a typical value (in this case around 4.3) with lots of measurements fairly close to that, and a steadily decreasing number of measurements further away. If your measurements really were independent, then the true value of whatever you were trying to measure should be the middle of this histogram (4.3 in this case). You can work out this value by taking the mean of all the data points.
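To see how well your data match a Gaussian, you can overlay a normal distribution with the same mean and standard deviation on a normalised histogram. Here is a minimal Mathematica sketch (assuming Mathematica 7 or later, and the data1 list defined above):

(* histogram of data1, normalised as a probability density, with a matching Gaussian on top *)
Show[
 Histogram[data1, {0.5}, "PDF"],
 Plot[PDF[NormalDistribution[Mean[data1], StandardDeviation[data1]], x], {x, 0, 9}, PlotRange -> All]
]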

Systematic Uncertainties
Unfortunately, making truly independent observations (ones in which all the sources of variation are allowed to vary over their full range) is often impossible. For example, your equipment may be too expensive to allow you to buy lots of it, so if there is an error in it, that error will be in all your data. Or there may be something wrong with your method, but you cannot think of another method. In this case, you may have systematic uncertainties. If this is the case, the centre of the distribution may not be a good estimate of the true value.

Putting a number on uncertainty


One of the most revolutionary ideas in all of science is that you can put a precise number on uncertainty. How can you do this: put a number on something which, by definition, you don't know?

Engineering Answer
Usually in engineering, uncertainties are measured by quoting the tolerance. This is the range within which the value is guaranteed to lie. So if you say that a battery will deliver 3.4 volts with a tolerance of ±0.1 volts, you are guaranteeing that the voltage will lie between 3.3 and 3.5 volts. Sometimes variations on this are used, such as guaranteeing that 99% of the products will lie within the tolerance rather than all of them.
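As a small illustration (the voltage values here are made up for the example), you could check a batch of measurements against such a tolerance in Mathematica:

(* made-up batch of battery voltages; find any that fall outside 3.4 ± 0.1 V *)
voltages = {3.38, 3.41, 3.52, 3.33, 3.45};
Select[voltages, Abs[# - 3.4] > 0.1 &]
(* -> {3.52}, so one battery in this batch is out of tolerance *)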

Science Answer
In physics (and other sciences, and in social science research, and in newspaper opinion polls) uncertainties are measured by quoting the standard uncertainty or standard error. This is actually defined by an international (ISO) standard. How is this defined? If you measure the same thing repeatedly, each measurement will differ. The standard uncertainty is the standard deviation of all these measurements.

What is the standard deviation s? Suppose you measure some parameter x a number n of times. The first measurement is x_1, the second is x_2 and so on. You work out the mean value \bar{x} by adding up all the measurements and then dividing by the number of measurements, i.e.:

\bar{x} = \frac{x_1 + x_2 + x_3 + \dots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i

You can then work out the standard deviation s by taking each measurement, working out how far it is from the mean, squaring all these values, dividing by n and taking the square root:

s^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2

You can measure these in Mathematica as follows. Type your data into an array (a list separated by commas and enclosed in curly brackets) and give it some name (in this case I've called it data1) as follows:

data1 = {3.85693, 4.90552, 4.11224, 3.33525, 3.99793, 4.80498, 4.56065, 3.92403, 5.07501, 4.24359, 5.49469, 7.33931, 4.00581, 4.07483, 2.60917, 3.23301, 4.7652, 4.97337, 4.04656, 5.36802}

Then use the Mean or StandardDeviation commands:

Mean[data1]
4.43631

StandardDeviation[data1]
0.999758
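A note of caution: the formula above divides by n, while Mathematica's built-in StandardDeviation divides by n - 1, so the two differ slightly for small samples. Here is a minimal sketch of computing the quantities directly from the formulas, using the data1 list above:

n = Length[data1];
xbar = Total[data1]/n                  (* the mean, as in the formula above *)
s = Sqrt[Total[(data1 - xbar)^2]/n]    (* 1/n standard deviation; slightly smaller than StandardDeviation[data1] *)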

What does a standard uncertainty mean?


If the uncertainties follow a Gaussian distribution (also known as a bell curve or normal distribution), then you expect 68% of the data points to lie within one standard uncertainty of the true value, 95% to lie within two standard uncertainties, and 99.7% to lie within three standard uncertainties. In practice, many sets of data do not follow a Gaussian distribution particularly well. Often there are more weird, way-out points than such a distribution would predict. So use this with caution.
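You can check this directly on a data set. A minimal Mathematica sketch, using data1 from above (the function name withinFraction is just an illustration):

(* fraction of points within k standard uncertainties of the mean *)
withinFraction[data_, k_] :=
  N[Count[data, x_ /; Abs[x - Mean[data]] <= k StandardDeviation[data]]/Length[data]]

withinFraction[data1, #] & /@ {1, 2, 3}   (* compare with the ideal 0.68, 0.95, 0.997 *)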

Relative merits of the two ways of quoting uncertainty


Which way is better? The engineering way is certainly more useful for a customer. But most real processes follow a Gaussian distribution, i.e. most observations are close to the true answer, but a few can be very far away, as discussed above.

The tolerance in a case like this is a bit unclear: most observations are pretty close to the mean (within one standard deviation s), but you will get the occasional one that is much further out. If you insist that you never, ever get something outside the tolerance, you will need to set the tolerance at perhaps five standard uncertainties. But if you were prepared to have one in twenty measurements outside your tolerance range, you could use two standard uncertainties. There is a debate within the engineering standards community about whether standard uncertainties might be a better reflection of reality than tolerances. For the purposes of PHYS1101, always use standard uncertainties.

How to Measure the Uncertainty


Method 1: Repeat Measurements
Measure something repeatedly. Record all the measurements, and work out the standard deviation, using the equations above (or Excel or Mathematica, or many other programs). Warning: this only works if all your measurements are independent, i.e. the uncertainties are different each time. If, for example, you are measuring lengths using a shrunken, inaccurate ruler, all your measurements will come out too large by the same amount. The best way to do this is to have different people make different measurements using different techniques, and then work out the standard deviation. Just having the same person measure the same thing in the same way repeatedly may not give you a fair estimate of the true uncertainty.

Method 2: First Principles


This method requires you to think of all the likely causes of uncertainty, and estimate how big they all are. You then use the uncertainty propagation equations (below) to work out what the final uncertainty should be. How can you estimate uncertainties? Here are some examples:

If you are using a piece of equipment, the manufacturer's manual should tell you the uncertainty in its measurements.

If you are using a number from a book (such as a physical constant), you should be able to look up the uncertainty in this number.

As a rule of thumb, you cannot measure anything with much better precision than the smallest markings on the scale.

If your uncertainty arises because something varies with temperature, you might be able to look up how strong this effect is, and measure the temperature.

Uncertainty propagation.
Imagine that you are trying to determine some value X. What you actually measure are some different parameters A and B, and you plug them into an equation to work out X. If you know the uncertainties in A and B (ΔA and ΔB), how do you work out the uncertainty in X (ΔX)? The following equations work if and only if the uncertainties in A and B are uncorrelated. If they are correlated, you need to do something much more complicated (beyond the scope of this course).

Sum or difference - use the absolute uncertainties. If X = A + B or X = A - B, then

(\Delta X)^2 = (\Delta A)^2 + (\Delta B)^2

Product or fraction - use the relative uncertainties. If X = AB or X = A/B, then

\left(\frac{\Delta X}{X}\right)^2 = \left(\frac{\Delta A}{A}\right)^2 + \left(\frac{\Delta B}{B}\right)^2

Adding a constant. If X = A + C, where C is a constant with negligible uncertainty, then

\Delta X = \Delta A

Multiplying by a constant. If X = CA, where C is a constant with negligible uncertainty, then

\Delta X = C\,\Delta A

Raising to a constant power. If X = A^n, and the uncertainty in n is small enough to ignore, then

\frac{\Delta X}{X} = n\,\frac{\Delta A}{A}

Logarithms. If X = ln(A) (log to the base e), then

\Delta X = \frac{\Delta A}{A}

Exponents. If X = e^A, then

\frac{\Delta X}{X} = \Delta A

General rule. All the above equations, and many more, can be deduced from the following general rule. The general rule for calculating the uncertainty of any function of individually measured values X = f(A, B, C, ...) is

(\Delta X)^2 = (\Delta X_A)^2 + (\Delta X_B)^2 + (\Delta X_C)^2 + \dots

where

\Delta X_A = \left(\frac{\partial X}{\partial A}\right)\Delta A

and so on. This uses partial differentiation, which you may not yet be familiar with. Don't worry if it makes no sense; you can just use the simpler equations above. A short Mathematica sketch of these rules is given below.
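Here is a minimal Mathematica sketch of the sum and product rules, plus the general rule via partial derivatives. The function names (sumUnc, productUnc, generalUnc) and the numbers are made-up illustrations, not part of the course materials:

(* sum/difference rule: X = A + B or A - B *)
sumUnc[dA_, dB_] := Sqrt[dA^2 + dB^2]

(* product/fraction rule: X = A*B or A/B; returns the absolute uncertainty in X *)
productUnc[x_, a_, dA_, b_, dB_] := Abs[x] Sqrt[(dA/a)^2 + (dB/b)^2]

(* general rule: expr is a formula in the symbols vars, evaluated at vals, with uncertainties uncs *)
generalUnc[expr_, vars_, vals_, uncs_] :=
  Sqrt[Total[MapThread[((D[expr, #1] /. Thread[vars -> vals]) #2)^2 &, {vars, uncs}]]]

sumUnc[0.3, 0.4]                                      (* -> 0.5 *)
productUnc[6.0, 2.0, 0.1, 3.0, 0.2]                   (* -> 0.5 *)
generalUnc[a^2 b, {a, b}, {2.0, 3.0}, {0.1, 0.2}]     (* ~ 1.44, consistent with the power and product rules *)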

Using Uncertainty
Quoting Uncertainties
Sometimes the aim of your experiment is simply to measure some parameter for other people to use. In this case, you must always quote the uncertainty you measure. The best way to do this is explicitly, i.e.:

A = 46.53 ± 0.4

You should do this wherever possible in this course, and in all science courses at the ANU (and indeed in all your work as a scientist). A lazier way to point out the uncertainty in a result is to imply it by how many significant figures you quote. You should not quote significant figures that affect the number by much less than the uncertainty. There is no hard and fast rule for this: if A = 46.53 ± 0.4, you would be OK quoting A = 46.5 or A = 46.53, but not A = 50 or A = 46.5310784, as these would give a reader a false sense of your uncertainties.
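If you want to automate this, here is a minimal Mathematica sketch that rounds a value to the decimal place set by its uncertainty (the function name quote is just an illustration, and this simple version assumes the uncertainty is already quoted to one significant figure):

(* round both value and uncertainty to the decimal place set by the uncertainty *)
quote[x_, dx_] := Module[{d = 10.^Floor[Log[10, dx]]},
  Row[{Round[x, d], " ± ", Round[dx, d]}]]

quote[46.5310784, 0.4]   (* -> 46.5 ± 0.4 *)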

Comparing with theory or other experiments


The most common reason for doing an experiment is to test a theory, or to see if someone else's experiment was correct. You will need to compare your measured results (complete with uncertainties) either with a theoretical prediction or with someone else's results. You may be comparing a single number (such as the value of some constant) or a whole set of numbers (such as a spectrum).

Null Hypothesis
You can never prove a theory true. Even if your data agree very well, future, more precise data may at some later stage prove the theory wrong. What you can do is prove theories wrong. You should start by defining a null hypothesis that you want to disprove. This null hypothesis is typically "the theory is correct" or "the other person's data which I'm trying to test are correct". You then try to prove this wrong.

Comparing your result with theory


If a theory is correct, then your data should lie within one standard uncertainty of it 68% of the time, within two standard uncertainties of it 95% of the time, and within three standard uncertainties of it 99.7% of the time. So if your observation disagrees with the theory by three standard uncertainties, you can be pretty sure that the theory is wrong. If the discrepancy is only two standard uncertainties, the theory is most likely wrong, but one time in twenty even correct theories would give you a point this far off.

If you are comparing your result with another experimental result (which will have its own uncertainty), your null hypothesis is that the difference between the two values is zero. Use the uncertainty propagation equations above to combine the two uncertainties (the uncertainty in your result and the uncertainty in the one you are comparing it to) to get an uncertainty in this difference. Compare the measured difference to this uncertainty, as in the sketch below.

If you have lots of data points, such as a graph, then you should plot the theoretical prediction on top of your data (see the Mathematica plotting tutorial for how to do this). The theoretical line should:

Pass within the uncertainties of around 68% of the data points.

Be above roughly as many data points as it is below.

Not have all the points above the model in the same region (such as one end of the plot).
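As a small illustration of comparing two single numbers, here is a minimal Mathematica sketch that expresses the difference between two results in units of the combined standard uncertainty (the function name nSigma and the numbers are made up):

(* number of standard uncertainties separating two independent results *)
nSigma[x1_, dx1_, x2_, dx2_] := Abs[x1 - x2]/Sqrt[dx1^2 + dx2^2]

nSigma[46.53, 0.4, 47.6, 0.3]   (* ~ 2.1: a marginal disagreement, not yet conclusive *)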

Reducing your uncertainties


Take more data
If your uncertainties are too large for your purpose, you can always make more measurements and average them. If (and only if) the different measurements are independent, then when you average n measurements, each with an uncertainty of σ, the uncertainty in the mean, σ_mean, is

\sigma_{\mathrm{mean}} = \frac{\sigma}{\sqrt{n-1}}

If the measurements are not independent (because you have systematic uncertainties), the only way to improve things is to track down and eliminate these systematic uncertainties.
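A minimal Mathematica sketch, using data1 from above. Because Mathematica's StandardDeviation divides by n - 1 while the formula above uses the 1/n standard deviation, StandardDeviation[data1]/Sqrt[n] gives exactly the same number as the expression above:

(* standard uncertainty of the mean of data1 *)
n = Length[data1];
StandardDeviation[data1]/Sqrt[n]

(* equivalently, using the 1/n standard deviation from the formula above *)
Sqrt[Total[(data1 - Mean[data1])^2]/n]/Sqrt[n - 1]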

Strategies for Identifying Systematic Uncertainties


Make multiple measurements and compute the standard deviation. Compare this to what you expect the uncertainty to be. If the measured scatter is much larger, you have something systematic going on.

Plot your data and see if it shows any unexpected patterns. Unexpected patterns are very powerful clues.

Calibrate your instruments by measuring the same thing with different instruments or in different ways. This will tell you how good your instruments or techniques are.
