
Errors in Measurement

Any measurement carried out is subject to some errors. These errors make it difficult to ascertain the true value of the measured quantity. The nature of the error may be checked by repeating the measurement a number of times and looking at the spread of the values.

If the spread in the data is small, the measurement is repeatable and can be considered good. But if the readings are compared with the values of a standard instrument, there are bound to be variations.

The response of a detector to variation in the measured quantity may be linear or non-linear. Generally the tendency is to look for a linear response; if the response is not linear, the usual practice is to linearize it by suitable manipulation.

The term error is defined as:

Error = Instrument reading − True value

Systematic Error

Systematic errors may arise due to shortcomings of the instrument or the sensor. An instrument may have a zero error, or its output may vary in a non-linear fashion with the input.

Systematic errors can also arise due to improper design of the measuring scheme, for example from loading effects or improper selection of the sensor.

Systematic errors can also be due to environmental effects.

The major feature of systematic error is that the sources of errors are
recognizable and can be reduced to a great extent by carefully designing
the measuring system and selecting its components.

Placing the instrument in a controlled environment and calibrating it properly also help in the reduction of systematic errors.

Certain errors are inherent in the instrument system. These may be caused by poor design or construction of the instrument, e.g. errors in the divisions of a graduated scale, inequality of balance arms, irregular spring tension, etc.

Other, environmental, errors are due to variation of conditions external to the measuring device, e.g. temperature, barometric pressure, humidity, wind forces, magnetic or electric fields, etc.

Errors are also caused by the act of measurement on the physical system being tested; these are called loading errors. For example, an orifice meter may disturb the flow conditions, and consequently the flow rate shown may not be the same as it would have been in the absence of the orifice meter.

Random Errors

The causes of random errors are not known, so it becomes all the more difficult to eliminate them.

They can only be reduced, and the errors estimated, by using statistical operations.

If we measure the same input variable a number of times, keeping all other factors affecting the measurement the same, the resulting values will never repeat themselves exactly.

The deviations of the readings normally follow a particular distribution, and we may be able to reduce the errors by taking a number of readings and averaging them out.

Inconsistencies are likely to affect the measurement of small quantities, as random errors of the order of the measured quantity become noticeable. The presence of certain system defects, e.g. large dimensional tolerances and friction, contributes to random errors such as backlash. Random errors are also caused by disturbances such as line voltage fluctuations, external vibrations of the instrument support, etc.

Comparison of Systematic and Random Errors

If enough readings are taken, the distribution of random errors will become apparent. The successive readings will cluster about a central value and will extend over a limited interval surrounding that central value. Such a situation can be analyzed using statistical techniques to estimate the size of the error (or the likely range within which the true value lies).

In contrast, systematic errors cannot be treated using statistical techniques, because such errors are fixed and do not show a distribution. However, they can be estimated by comparing the instrument with a more accurate standard. Knowledge of the design, the methods of manufacture, the method of calibration, and general experience with the type of instrument also helps in estimating the bias errors.

In practice, bias and precision errors occur simultaneously. Their combined effect on a measured quantity x is shown in the following figures. In a particular reading xm, the total error is the sum of the bias and precision errors for that measurement.

A full classification of all possible errors as either bias or precision errors would be nearly impossible to make, since some errors behave like bias errors at times and like precision errors in other situations.

Causes of these Errors:

Bias or systematic errors:

Calibration errors.

Certain consistently recurring human errors.

Certain errors caused by defective equipment.

Loading errors.

Limitations of system resolution.

Precision or random errors:

Error caused by disturbances to the equipment.

Errors caused by fluctuating experimental conditions.

Errors derived from insufficient measuring system sensitivity.

Errors that are sometimes bias errors and sometimes precision errors:
o Errors from instrument backlash, friction and hysteresis.
o Errors from calibration drift and variation in test or environmental conditions.
o Errors resulting from variations in procedure or definition among experimenters.

Guaranteed Accuracy

The accuracy and precision of an instrument depend upon its design, the materials used and the workmanship that goes into the manufacture of the instrument.

In order to assure the purchaser of a certain quality of instrument, the manufacturer guarantees a certain accuracy. In most instruments the accuracy is guaranteed to be within a certain percentage of the full-scale reading.

Guarantee error: The manufacturer specifies the deviation from the specified value of a particular quantity. The limits of these deviations from the specified value are defined as limiting errors or guarantee errors.

The manufacturer of a certain instrument may specify that the instrument is accurate within ±1% of full-scale deflection. This implies that the full-scale reading is guaranteed to be within ±1% of the perfectly accurate reading. However, for readings less than full scale, the limiting error increases.

For example, suppose a resistor is specified by the manufacturer as 4.7 Ω with a tolerance of ±5%. Then the actual value of the resistance will lie within the following limits:

R = 4.7 ± (5% of 4.7)
= 4.7 ± 0.235 Ω

Thus the actual value of the resistance is guaranteed to lie within the limits 4.465 Ω and 4.935 Ω.
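The limiting-error arithmetic above can be sketched in a few lines of Python; the function name is illustrative, not part of the source material:

```python
# Minimal sketch: compute the guarantee (limiting) error bounds for a
# component specified as a nominal value with a percentage tolerance.
def limiting_bounds(nominal, tol_percent):
    """Return (lower, upper) limits for a specified value and % tolerance."""
    dA = nominal * tol_percent / 100.0  # limiting error dA
    return nominal - dA, nominal + dA

# The 4.7-ohm, +/-5% resistor from the example:
low, high = limiting_bounds(4.7, 5)
print(low, high)  # 4.465 4.935
```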

Aa = As ± dA

where Aa = actual value
As = specified or rated value
dA = limiting error or tolerance
Relative Limiting Error: This is also called fractional error. It is the ratio of the error to the specified magnitude of the quantity. Thus

e = dA / As

where e = relative limiting error.

From the above equation we can write

dA = e · As

and since

Aa = As ± dA

we can write

Aa = As ± e · As = As(1 ± e)

The percentage relative limiting error is expressed as

%e = e × 100

The relative limiting error can also be expressed as

e = (Actual value Aa − Specified value As) / Specified value As
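The relative and percentage limiting errors can be illustrated with a small sketch (the numeric values reuse the earlier resistor example; the function name is an assumption):

```python
# Sketch: relative limiting (fractional) error, e = (Aa - As) / As.
def relative_limiting_error(actual, specified):
    """Fractional error of an actual value relative to the specified value."""
    return (actual - specified) / specified

# Upper limit of the 4.7-ohm, +/-5% resistor versus its rated value:
e = relative_limiting_error(4.935, 4.7)
print(round(e * 100, 1))  # percentage relative limiting error %e = e * 100 -> 5.0
```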

Mean and Median

The average value of a set of measurements of a constant quantity can be expressed as either the mean value or the median value. As the number of measurements increases, the difference between the mean value and the median value becomes very small. However, for any set of n measurements x1, x2, …, xn of a constant quantity, the most likely true value is the mean, given by:

xmean = (x1 + x2 + … + xn) / n

The median is an approximation to the mean that can be written down without having to sum the measurements. The median is the middle value when the measurements in the data set are written down in ascending order of magnitude. For a set of n measurements x1, x2, …, xn of a constant quantity written down in ascending order of magnitude, the median value is the middle value for odd n, and the mean of the two middle values for even n.

For example, for one set of repeated readings, the mean is 409 and the median is 408.
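A short sketch can reproduce such a mean/median computation. The data set below is hypothetical (the original example's readings are not reproduced in this text); it is chosen so that the mean is 409 and the median 408, matching the values quoted above:

```python
import statistics

# Hypothetical set of 11 repeated readings of a constant quantity.
readings = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]

mean = sum(readings) / len(readings)          # arithmetic mean
median = sorted(readings)[len(readings) // 2]  # middle value (n is odd)

print(mean, median)  # 409.0 408
assert mean == statistics.mean(readings)
assert median == statistics.median(readings)
```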

Standard Deviation and Variance

Expressing the spread of measurements simply as the range between the largest and smallest values is not in fact a very good way of examining how the measurement values are distributed about the mean value. A much better way of expressing the distribution is to calculate the variance or standard deviation of the measurements. The starting point for calculating these parameters is to calculate the deviation (error) di of each measurement xi from the mean value xmean:

di = xi − xmean

The variance (V) is then given by:

V = (d1² + d2² + … + dn²) / (n − 1)

The standard deviation is simply the square root of the variance.
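These definitions translate directly into code; this sketch uses the same hypothetical 11-reading data set as the mean/median example, with the (n − 1) divisor for the sample variance:

```python
import math

# Hypothetical repeated readings of a constant quantity (mean = 409).
readings = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]

n = len(readings)
x_mean = sum(readings) / n
deviations = [x - x_mean for x in readings]          # d_i = x_i - x_mean
variance = sum(d * d for d in deviations) / (n - 1)  # V, sample variance
std_dev = math.sqrt(variance)                        # square root of V

print(round(variance, 1), round(std_dev, 2))  # 137.0 11.7
```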

Combination of Component Errors in Overall Systems

In any experimentation, the exactness of the measured value is affected by deviations caused by various associated errors, viz. systematic, random and gross errors. The gross errors can be further divided into systematic and random components. The systematic errors and the systematic components of gross errors can be eliminated by a suitable correction factor. The remaining random errors constitute the chief source of uncertainty in experiments. Usually the random errors follow a normal distribution.

In any experiment, a number of different measurements of different quantities may be carried out to determine a certain parameter. These different measurements involve uncertainties, and the combined effect of the uncertainties of the different variables needs to be ascertained.

Consider the following equation in its most general form:

y = f(x1, x2, …, xi, …, xn)    (2.1)

where y is a parameter that depends on the independent variables x1, x2, …, xi, …, xn.
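The text sets up equation (2.1) but does not state how the component uncertainties are combined. One widely used rule (an assumption here, not taken from this text) is the root-sum-square combination, w_y = sqrt(Σ (∂f/∂xi · wi)²). A minimal sketch, with the partial derivatives approximated numerically:

```python
# Sketch of the root-sum-square (RSS) combination of component
# uncertainties for y = f(x1, ..., xn). The rule itself is an assumption;
# the source only states the general relation y = f(x1, ..., xn).
def rss_uncertainty(f, x, w, h=1e-6):
    """w_y = sqrt(sum((df/dxi * w_i)^2)), derivatives by central difference."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)  # numerical partial derivative
        total += (dfdx * w[i]) ** 2
    return total ** 0.5

# Illustrative example: power P = V * I, with uncertainties in V and I.
P_unc = rss_uncertainty(lambda v: v[0] * v[1], [10.0, 2.0], [0.1, 0.05])
print(round(P_unc, 3))  # sqrt((2*0.1)^2 + (10*0.05)^2) = sqrt(0.29) -> 0.539
```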

Frequency Distribution

Graphical techniques are a very useful way of analyzing the way in which random measurement errors are distributed. The simplest way of doing this is to draw a histogram, in which bands of equal width across the range of measurement values are defined and the number of measurements within each band is counted.

As it is the actual value of the measurement error that is usually of most concern, it is often more useful to draw a histogram of the deviations of the measurements from the mean value rather than a histogram of the measurements themselves.

The starting point for this is to calculate the deviation of each measurement from the calculated mean value. Then a histogram of deviations can be drawn by defining deviation bands of equal width and counting the number of deviation values in each band. This histogram has exactly the same shape as the histogram of the raw measurements, except that the scaling of the horizontal axis has to be redefined in terms of the deviation values.
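The band-counting step can be sketched without any plotting library; the readings and the band width of 10 are illustrative assumptions:

```python
import math

# Hypothetical readings (mean = 409); we histogram their deviations.
readings = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
x_mean = sum(readings) / len(readings)
deviations = [x - x_mean for x in readings]

band_width = 10  # equal-width deviation bands, e.g. [-20,-10), [-10,0), ...
counts = {}
for d in deviations:
    band = math.floor(d / band_width)   # index of the band containing d
    counts[band] = counts.get(band, 0) + 1

# Crude text histogram of the deviation distribution:
for band in sorted(counts):
    lo, hi = band * band_width, (band + 1) * band_width
    print(f"[{lo:+d}, {hi:+d}): {'#' * counts[band]}")
```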

As the number of measurements increases, smaller bands can be defined for the histogram, which retains its basic shape but then consists of a larger number of smaller steps on each side of the peak. In the limit, as the number of measurements approaches infinity, the histogram becomes a smooth curve known as a frequency distribution curve.

The ordinate of this curve is the frequency of occurrence of each deviation value, F(D), and the abscissa is the magnitude of the deviation, D.

If the height of the frequency distribution curve is normalized such that the area under it is unity, then the curve in this form is known as a probability curve, and the height F(D) at any particular deviation magnitude D is known as the probability density function (p.d.f.).

The condition that the area under the curve is unity can be expressed mathematically as:

∫ F(D) dD = 1   (integrated over D from −∞ to +∞)

The probability that the error in any one particular measurement lies between two levels D1 and D2 can be calculated by measuring the area under the curve contained between two vertical lines drawn through D1 and D2. This can be expressed mathematically as:

P(D1 ≤ D ≤ D2) = ∫ F(D) dD   (integrated over D from D1 to D2)

Of particular importance for assessing the maximum error likely in any one measurement is the cumulative distribution function (c.d.f.). This is defined as the probability of observing a value less than or equal to D0, and is expressed mathematically as:

P(D ≤ D0) = ∫ F(D) dD   (integrated over D from −∞ to D0)

Gaussian Distribution

The shape of a Gaussian curve is such that the frequency of small deviations from the mean value is much greater than the frequency of large deviations.

This coincides with the usual expectation for measurements subject to random errors: the number of measurements with a small error is much larger than the number of measurements with a large error.

A Gaussian curve is formally defined as a normalized frequency distribution that is symmetrical about the line of zero error and in which the frequency and magnitude of quantities are related by the expression:

F(D) = [1 / (σ√(2π))] exp(−D² / 2σ²)

The shape of a Gaussian curve is strongly influenced by the value of the standard deviation σ, with the width of the curve decreasing as σ becomes smaller. Since a smaller σ corresponds to the typical deviations of the measurements from the mean value becoming smaller, this confirms the earlier observation that the mean value of a set of measurements gets closer to the true value as σ decreases.
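The Gaussian p.d.f. and the corresponding c.d.f. can be evaluated directly; this sketch uses the standard error-function form of the c.d.f. (the value of σ is illustrative):

```python
import math

def gaussian_pdf(D, sigma):
    """F(D) = exp(-D^2 / (2 sigma^2)) / (sigma * sqrt(2 pi)), zero-mean errors."""
    return math.exp(-D * D / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def gaussian_cdf(D, sigma):
    """P(error <= D), computed via the error function."""
    return 0.5 * (1 + math.erf(D / (sigma * math.sqrt(2))))

sigma = 11.7  # e.g. a standard deviation like the one in the earlier examples
# Probability that an error lies within one standard deviation of zero:
p = gaussian_cdf(sigma, sigma) - gaussian_cdf(-sigma, sigma)
print(round(p, 3))  # 0.683, the familiar one-sigma probability
```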

Numerical Examples

Type 1

Type 2

Type 3