
MEASURING FORECASTING ERROR (Adapted)

(Students are advised to refer to Chapter 3 of J. E. Hanke's book for details.)

Because quantitative forecasting techniques frequently involve time series data, a mathematical notation is developed to refer to each specific time period. The letter Y will be used to denote a time series variable unless there is more than one variable involved. The time period associated with an observation is shown as a subscript. Thus Yt refers to the value of the time series at time period t. The quarterly data for the Outboard Marine Corporation presented in Example 3.5 (see p. 73) would be denoted Y1 = 147.6, Y2 = 251.8, Y3 = 273.1, ..., Y52 = 281.4.

Mathematical notation must also be developed to distinguish between an actual value of the time series and a forecast value. A hat (^) will be placed above a value to indicate that it is being forecast. The forecast value for Yt is Ŷt. The accuracy of a forecasting technique is frequently judged by comparing the original series Y1, Y2, ... with the series of forecast values Ŷ1, Ŷ2, ....
Basic Forecasting Notation
Basic forecasting notation is summarized as follows:
Yt = value of time series at period t
Ŷt = forecast value of Yt
et = Yt - Ŷt = residual, or forecast error
Several methods have been devised to summarize the errors generated by a particular forecasting technique. Most of these measures involve averaging some function of the difference between an actual value and its forecast value. These differences between observed values and forecast values are often referred to as residuals.

A residual is the difference between an actual value and its forecast value.

Equation 3.6 is used to compute the error, or residual, for each forecast period:

et = Yt - Ŷt    (3.6)

where
et = forecast error in time period t
Yt = actual value in time period t
Ŷt = forecast value for time period t

One method for evaluating a forecasting technique uses the sum of the absolute errors. The mean absolute deviation (MAD) measures forecast accuracy by averaging the magnitudes of the forecast errors (the absolute values of each error). MAD is most useful when the analyst wants to measure forecast error in the same units as the original series. Equation 3.7 shows how MAD is computed:

MAD = (1/n) Σ |Yt - Ŷt|    (3.7)
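As a sketch, the MAD of Equation 3.7 can be computed with a few lines of Python (the series values below are hypothetical, for illustration only):

```python
def mad(actual, forecast):
    """Mean absolute deviation: average of |Yt - Yt-hat| over all periods."""
    errors = [abs(y - f) for y, f in zip(actual, forecast)]
    return sum(errors) / len(errors)

# Hypothetical actual and forecast series
actual = [10.0, 12.0, 9.0, 11.0]
forecast = [11.0, 10.0, 10.0, 12.0]
print(mad(actual, forecast))  # absolute errors 1, 2, 1, 1 -> 1.25
```

Note that the result, 1.25, is in the same units as the original series, which is the main appeal of MAD.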

The mean squared error (MSE) is another method for evaluating a forecasting technique. Each error or residual is squared; these are then summed and divided by the number of observations. This approach penalizes large forecasting errors because the errors are squared, which is important; a technique that produces moderate errors may well be preferable to one that usually has small errors but occasionally yields extremely large ones. The MSE is given by Equation 3.8:

MSE = (1/n) Σ (Yt - Ŷt)²    (3.8)
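A small Python sketch of Equation 3.8, using hypothetical data chosen to show how squaring penalizes one large error more than several moderate ones:

```python
def mse(actual, forecast):
    """Mean squared error: average of (Yt - Yt-hat)^2; squaring penalizes large errors."""
    return sum((y - f) ** 2 for y, f in zip(actual, forecast)) / len(actual)

# Two hypothetical forecasts with the same total absolute error (8)
steady = mse([10, 10, 10, 10], [12, 8, 12, 8])    # four errors of size 2 -> 4.0
spiky = mse([10, 10, 10, 10], [10, 10, 10, 18])   # one error of size 8  -> 16.0
print(steady, spiky)
```

Even though both forecasts miss by a total of 8 customers, the spiky forecast has four times the MSE, illustrating the point made above.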

Sometimes it is more useful to compute the forecasting errors in terms of percentages rather than amounts. The mean absolute percentage error (MAPE) is computed by finding the absolute error in each period, dividing this by the actual observed value for that period, and then averaging these absolute percentage errors. This approach is useful when the size or magnitude of the forecast variable is important in evaluating the accuracy of the forecast. MAPE provides an indication of how large the forecast errors are in comparison to the actual values of the series. The technique is especially useful when the Yt values are large. MAPE can also be used to compare the accuracy of the same or different techniques on two entirely different series. Equation 3.9 shows how MAPE is computed:

MAPE = (1/n) Σ |Yt - Ŷt| / Yt    (3.9)
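Equation 3.9 can be sketched the same way; the two-point series here is hypothetical, chosen so both periods have the same 10% error despite different magnitudes:

```python
def mape(actual, forecast):
    """Mean absolute percentage error: average of |Yt - Yt-hat| / Yt."""
    terms = [abs(y - f) / y for y, f in zip(actual, forecast)]
    return sum(terms) / len(terms)

# Errors of 10 on 100 and 20 on 200 are both 10% in relative terms
print(mape([100.0, 200.0], [90.0, 220.0]))  # (0.10 + 0.10) / 2 = 0.10
```

Because each error is scaled by its own actual value, MAPE lets series of very different magnitudes be compared on equal footing.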

Sometimes it is necessary to determine whether a forecasting method is biased (consistently forecasting low or high). The mean percentage error (MPE) is used in these cases. It is computed by finding the error in each period, dividing this by the actual value for that period, and then averaging these percentage errors. If the forecasting approach is unbiased, MPE will produce a number that is close to zero. If the result is a large negative percentage, the forecasting method is consistently overestimating. If the result is a large positive percentage, the forecasting method is consistently underestimating. MPE is given by Equation 3.10:

MPE = (1/n) Σ (Yt - Ŷt) / Yt    (3.10)
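A sketch of Equation 3.10 with hypothetical data, showing how a forecast that always overshoots produces a negative MPE:

```python
def mpe(actual, forecast):
    """Mean percentage error: average of (Yt - Yt-hat) / Yt; the sign reveals bias."""
    terms = [(y - f) / y for y, f in zip(actual, forecast)]
    return sum(terms) / len(terms)

# Forecasts of 110 and 230 against actuals of 100 and 200 are always too high
print(mpe([100.0, 200.0], [110.0, 230.0]))  # (-0.10 + -0.15) / 2 = -0.125
```

Unlike MAPE, the signed errors here can cancel, so a near-zero MPE indicates balance between over- and underestimates, not small errors.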

Part of the decision to use a particular forecasting technique involves the determination of whether the technique will produce forecast errors that are judged to be sufficiently small. It is certainly realistic to expect a good forecasting technique to produce relatively small forecast errors on a consistent basis.

The four measures of forecast accuracy just described are used

To compare the accuracy of two (or more) different techniques
To measure a particular technique's usefulness or reliability
To help search for an optimal technique
Example 3.6 illustrates how each of these error measurements is computed.

Example 3.6
Table 3-7 shows the data for the daily number of customers requiring repair work, Yt, and a forecast of these data, Ŷt, for Gary's Chevron Station. The forecasting technique used the number of customers serviced in the previous period as the forecast for the current period.
Time   Customers   Forecast   Error
t      Yt          Ŷt         et      |et|    et²     |et|/Yt   et/Yt
1      58          —          —       —       —       —         —
2      54          58         -4      4       16      .074      -.074
3      60          54         6       6       36      .100      .100
4      55          60         -5      5       25      .091      -.091
5      62          55         7       7       49      .113      .113
6      62          62         0       0       0       .000      .000
7      65          62         3       3       9       .046      .046
8      63          65         -2      2       4       .032      -.032
9      70          63         7       7       49      .100      .100

Totals                        12      34      188     .556      .162

This simple technique will be discussed in Chapter 4. The following computations were employed to
evaluate this model using MAD, MSE, MAPE, and MPE.

MAD = 34/8 = 4.25 ≈ 4.3

MSE = 188/8 = 23.5

MAPE = .556/8 = .0695, or 6.95%

MPE = .162/8 = .0203, or 2.03%

MAD indicates that each forecast deviated by an average of 4.3 customers. The MSE of 23.5 and the MAPE of 6.95% would be compared to the MSE and MAPE for any other method used to forecast these data. Finally, the small MPE of 2.03% indicates that the technique is not biased: because the value is close to zero, the technique does not consistently over- or underestimate the number of customers serviced daily.