
Error in Measurement


Any measurement made with a measuring device is approximate. If you measure the same object two different times, the two measurements may not be exactly the same. The difference between two measurements is called a variation in the measurements.

Another word for this variation - or uncertainty in measurement - is "error." This "error" is not the same as a "mistake." It does not mean that you got the wrong answer. The error in measurement is a mathematical way to show the uncertainty in the measurement. It is the difference between the result of the measurement and the true value of what you were measuring.

The precision of a measuring instrument is determined by the smallest unit to which it can measure. The precision is said to be the same as the smallest fractional or decimal division on the scale of the measuring instrument.

Ways of Expressing Error in Measurement:


1. Greatest Possible Error:
Because no measurement is exact, measurements are always made to the "nearest
something", whether it is stated or not. The greatest possible error when measuring
is considered to be one half of that measuring unit. For example, you measure a
length to be 3.4 cm. Since the measurement was made to the nearest tenth, the
greatest possible error will be half of one tenth, or 0.05 cm.
2. Tolerance Intervals:
Error in measurement may be represented by a tolerance interval (margin of error). Machines used in manufacturing often set tolerance intervals, or ranges in which product measurements will be tolerated or accepted before they are considered flawed.

To determine the tolerance interval in a measurement, add and subtract one-half of the precision of the measuring instrument to the measurement.

For example, if a measurement made with a metric ruler is 5.6 cm and the ruler has a precision of 0.1 cm, then the tolerance interval in this measurement is 5.6 ± 0.05 cm, or from 5.55 cm to 5.65 cm. Any measurements within this range are "tolerated" or perceived as correct.

Accuracy is a measure of how close the result of the measurement comes to the "true", "actual", or "accepted" value. (How close is your answer to the accepted value?)

Tolerance is the greatest range of variation that can be allowed. (How much error in the answer is occurring or is acceptable?)
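
A minimal Python sketch of the add-and-subtract-half-the-precision rule above (the function name is mine, not from the source):

```python
# Sketch: tolerance interval = measurement plus/minus one-half of the instrument's precision.
def tolerance_interval(measurement, precision):
    """Return (lower, upper) bounds of the tolerance interval."""
    half = precision / 2            # half of the smallest unit = greatest possible error
    return measurement - half, measurement + half

low, high = tolerance_interval(5.6, 0.1)   # the metric-ruler example above
print(f"{low:.2f} cm to {high:.2f} cm")    # 5.55 cm to 5.65 cm
```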

3. Absolute Error and Relative Error:


Error in measurement may be represented by the actual amount of error, or by a
ratio comparing the error to the size of the measurement.

The absolute error of the measurement shows how large the error actually is, while
the relative error of the measurement shows how large the error is in relation to the
correct value.

Absolute errors do not always give an indication of how important the error may
be. If you are measuring a football field and the absolute error is 1 cm, the error is
virtually irrelevant. But, if you are measuring a small machine part (< 3 cm), an
absolute error of 1 cm is very significant. While both situations show an absolute
error of 1 cm, the relevance of the error is very different. For this reason, it is more
useful to express error as a relative error. We will be working with relative error.
Absolute Error:
Absolute error is simply the amount of physical error in a measurement.

For example, if you know a length is 3.535 m ± 0.004 m, then 0.004 m is an absolute error.
The absolute error is always positive.
In plain English: The absolute error is the difference between the measured value and the
actual value. (The absolute error will have the same unit label as the measured quantity.)
Relative Error:
Relative error is the ratio of the absolute error of the measurement to the accepted
measurement. The relative error expresses the "relative size of the error" of the measurement
in relation to the measurement itself.
When the accepted or true measurement is known, the relative error is found using

Relative Error = Absolute Error / Accepted Value

which is considered to be a measure of accuracy.

Should the accepted or true measurement NOT be known, the relative error is found using the measured value:

Relative Error = Absolute Error / Measured Value

which is considered to be a measure of precision.

In plain English: The relative error compares the size of the error to the size of the measurement itself.

4. Percent of Error:
Error in measurement may also be expressed as a percent of error. The percent of
error is found by multiplying the relative error by 100%.
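
To tie the three definitions together, here is a small Python sketch, assuming the accepted (true) value is known; the function names are illustrative, not from the source:

```python
# Sketch: absolute, relative, and percent of error when the accepted value is known.
def absolute_error(measured, accepted):
    return abs(measured - accepted)

def relative_error(measured, accepted):
    return absolute_error(measured, accepted) / abs(accepted)

def percent_of_error(measured, accepted):
    return relative_error(measured, accepted) * 100

# Skeeter's defective scale from Example 1 below: 38 lb measured vs. 36.5 lb actual.
print(round(percent_of_error(38, 36.5), 1))   # 4.1
```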

Ways to Improve Accuracy in Measurement


1. Make the measurement with an instrument that has the highest level of precision. The
smaller the unit, or fraction of a unit, on the measuring device, the more precisely the device
can measure. The precision of a measuring instrument is determined by the smallest unit to
which it can measure.
2. Know your tools! Apply correct techniques when using the measuring instrument and
reading the value measured. Avoid the error called "parallax" -- always take readings by
looking straight down (or ahead) at the measuring device. Looking at the measuring device
from a left or right angle will give an incorrect value.

3. Repeat the same measure several times to get a good average value (see the short sketch after this list).

4. Measure under controlled conditions. If the object you are measuring could change size
depending upon climatic conditions (swell or shrink), be sure to measure it under the same
conditions each time. This may apply to your measuring instruments as well.
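
As a small illustration of point 3 above, this sketch averages several repeated readings; the reading values are hypothetical:

```python
# Sketch: repeating a measurement and averaging the readings (point 3 above).
from statistics import mean

readings_cm = [5.61, 5.59, 5.60, 5.62, 5.58]   # hypothetical repeated readings
print(f"average: {mean(readings_cm):.2f} cm")  # average: 5.60 cm
```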

Examples:

1. Skeeter, the dog, weighs exactly 36.5 pounds. When weighed on a defective
scale, he weighed 38 pounds. (a) What is the percent of error in measurement of the
defective scale to the nearest tenth? (b) If Millie, the cat, weighs 14 pounds on the
same defective scale, what is Millie's actual weight to the nearest tenth of a pound?

Answer (a): The absolute error is 38 - 36.5 = 1.5 pounds, so the relative error is 1.5/36.5 ≈ 0.041, and the percent of error, to the nearest tenth, is 4.1%.

(b) Let x = Millie's actual weight. The defective scale reads about 4.1% high, so

14 = x + 0.041x
1.041x = 14
x ≈ 13.4 pounds
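
A quick numeric check of this example, assuming (as the solution above does) that the defective scale reads every weight high by the same relative amount:

```python
# Check of Example 1: the defective scale reads roughly 4.1% high.
true_skeeter, scale_skeeter = 36.5, 38
relative = (scale_skeeter - true_skeeter) / true_skeeter    # about 0.041
print(round(relative * 100, 1))                             # (a) 4.1 percent of error

# (b) If every reading is inflated by the same factor, Millie's actual weight is:
scale_millie = 14
print(round(scale_millie / (1 + relative), 1))              # 13.4 pounds
```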

2. The actual length of this field is 500 feet. A measuring instrument shows the length to be 508 feet.
Find:
a.) the absolute error in the measured length of the field.
b.) the relative error in the measured length of the field.
c.) the percentage error in the measured length of the field.

Answer:
a.) The absolute error in the length of the field is 8 feet.

b.) The relative error in the length of the field is 8/500 = 0.016.

c.) The percentage error in the length of the field is 0.016 × 100% = 1.6%.

3. Find the absolute error, relative error, and percent of error of the approximation 3.14
to the value π, using the TI-83+/84+ entry of pi as the actual value.
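
The source gives no worked answer here; the sketch below checks the three values with Python's math.pi standing in for the calculator's stored value of pi (the two agree to more digits than matter for this exercise):

```python
# Sketch for Example 3: errors in the approximation 3.14, using math.pi as the actual value.
import math

approx, actual = 3.14, math.pi
abs_err = abs(approx - actual)
rel_err = abs_err / actual
print(round(abs_err, 5))         # 0.00159  (absolute error)
print(round(rel_err, 6))         # 0.000507 (relative error)
print(round(rel_err * 100, 4))   # 0.0507   (percent of error)
```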

References:

http://www.regentsprep.org/regents/math/algebra/am3/LError.htm
How To Read A Vernier Caliper


A quick guide on how to read a vernier caliper. A vernier caliper outputs measurement readings
in centimetres (cm) and is precise up to 2 decimal places (e.g. 1.23 cm).

Note: The measurement-reading technique described in this post will be similar for vernier
calipers which output measurement readings in inches.

Measurement Reading Technique For Vernier Caliper

In order to read the measurement from a vernier caliper properly, you need to remember
two things before we start. For example, if a vernier caliper outputs a measurement reading of
2.13 cm, this means that:

The main scale contributes the main number(s) and one decimal place to the reading
(E.g. 2.1 cm, whereby 2 is the main number and 0.1 is the one decimal place number)
The vernier scale contributes the second decimal place to the reading (E.g. 0.03 cm)

Let's examine the image of the vernier caliper readings above. We will just use a two-step
method to get the measurement reading from this:
To obtain the main scale reading: Look at the image above; 2.1 cm is to the immediate
left of the zero on the vernier scale. Hence, the main scale reading is 2.1 cm.
To obtain the vernier scale reading: Look at the image above and look closely for an
alignment of the scale lines of the main scale and vernier scale. In the image above, the
aligned line corresponds to 3. Hence, the vernier scale reading is 0.03 cm.

In order to obtain the final measurement reading, we will add the main scale reading and
vernier scale reading together. This will give 2.1 cm + 0.03 cm = 2.13 cm.

Let's go through another example to ensure that you understand the above steps:

Main scale reading: 10.0 cm (Immediate left of zero)

Vernier scale reading: 0.02 cm (Alignment of scale lines)

Measurement reading: 10.02 cm
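
Both readings follow the same main-plus-vernier addition; here is a minimal Python sketch of it (the function name and the 0.01 cm-per-division step are assumptions that match the examples in this post):

```python
# Sketch: main scale reading (one decimal place) plus the vernier scale contribution.
def caliper_reading(main_scale_cm, aligned_vernier_line):
    """aligned_vernier_line: the vernier line (0-9) that lines up with a main-scale line."""
    return main_scale_cm + aligned_vernier_line * 0.01

print(f"{caliper_reading(2.1, 3):.2f} cm")    # 2.13 cm (first example)
print(f"{caliper_reading(10.0, 2):.2f} cm")   # 10.02 cm (second example)
```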

References:

https://www.miniphysics.com/how-to-read-a-vernier-caliper.html

How To Read A Micrometer Screw Gauge



A quick guide on how to read a micrometer screw gauge. Similar to the way a vernier caliper is
read, a micrometer reading contains two parts:

the first part is contributed by the main scale on the sleeve


the second part is contributed by the rotating vernier scale on the thimble

A typical micrometer screw gauge

The above image shows a typical micrometer screw gauge and how to read it. Steps:

To obtain the first part of the measurement: Look at the image above; you will see the number 5
to the immediate left of the thimble. This means 5.0 mm. Notice that there is an extra line
below the datum line; this represents an additional 0.5 mm. So the first part of the
measurement is 5.0 + 0.5 = 5.5 mm.
To obtain the second part of the measurement: Look at the image above; the number 28 on the
rotating vernier scale coincides with the datum line on the sleeve. Hence, 0.28 mm is the second
part of the measurement.

You just have to add the first part and second part of the measurement to obtain the micrometer
reading: 5.5 + 0.28 = 5.78 mm.

To ensure that you understand the steps above, here's one more example:
First part of the measurement: 2.5 mm

Second part of the measurement: 0.38 mm

Final measurement: 2.88 mm
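
The micrometer reading is the same kind of two-part sum; a minimal sketch, assuming 0.01 mm per thimble division as in the examples above (the function name is mine):

```python
# Sketch: sleeve (main scale) reading plus the thimble contribution.
def micrometer_reading(sleeve_mm, thimble_division):
    """sleeve_mm includes any extra 0.5 mm line; thimble_division is the thimble
    line (0-49) that coincides with the datum line."""
    return sleeve_mm + thimble_division * 0.01

print(f"{micrometer_reading(5.5, 28):.2f} mm")   # 5.78 mm (first example)
print(f"{micrometer_reading(2.5, 38):.2f} mm")   # 2.88 mm (second example)
```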

Now, we shall try one with zero error. If you are not familiar with how to handle zero error for
a micrometer screw gauge, I suggest that you read up on Measurement of Length.

The reading on the bottom is the measurement obtained and the reading at the top is the zero
error. Find the actual measurement. (Meaning: get rid of the zero error in the measurement or
take into account the zero error)

Measurement with zero error: 1.76 mm

Zero error: + 0.01 mm (positive because the zero marking on the thimble is below the datum
line)

Measurement without zero error: 1.76 - (+0.01) = 1.75 mm
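
The zero-error correction itself is just a subtraction, as this one-line sketch of the example above shows:

```python
# Sketch: actual measurement = reading - zero error (a positive zero error is subtracted).
reading_mm, zero_error_mm = 1.76, +0.01
print(f"{reading_mm - zero_error_mm:.2f} mm")    # 1.75 mm
```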

The subtraction logic is similar to the method explained in How To Read A Vernier Caliper. You
can take a look, and comment below if you encounter any difficulties.

References:

https://www.miniphysics.com/how-to-read-a-micrometer-screw-gauge.html
