
INVERSE SQUARE ROOT TRANSFORMATIONS OF ERRORS OF
MEASUREMENT OF THE RANDOM COMPONENT OF THE
MULTIPLICATIVE TIME SERIES MODEL

AJIBADE, F. BRIGHT
REG. NO: NAU/Ph.D/2010557005P

SUPERVISED BY:
(1) EMERITUS PROF. OGUM, G.E.O.
(2) PROF. MBEGBU, J.I.
ABSTRACT
Considering that time series data (Xt) requiring the application of the multiplicative time series model can be classified as requiring a logarithm (log Xt), square-root (√Xt), inverse (1/Xt), inverse-square-root (1/√Xt), square (Xt²) or inverse-square (1/Xt²) transformation, and whereas the theoretical results obtained for one classification are not applicable to any other, we investigate the distribution and properties of the left-truncated error term (et) of the multiplicative time series model under the inverse-square-root transformation, with a view to establishing the region, in terms of the standard deviation (σ) of the untransformed error component, where the basic assumptions of normality and unit mean required to proceed with a multiplicative time series decomposition are satisfied by the inverse-square-root transformed error term (et* = 1/√et).

ABSTRACT CONTD
The curve shapes of the probability density function of et* = 1/√et, g(y), were examined for different values of σ, and by Rolle's theorem g(y) was shown to be bell-shaped with mode ≈ 1 when 0 < σ ≤ 0.15. Furthermore, the established functional forms of the mean and variance confirmed the mean of et* to be approximately 1 whenever 0 < σ ≤ 0.15. Hence 0 < σ ≤ 0.15 is the recommended condition for the successful application of the inverse square root transformation in time series modeling. Finally, real-life data from a time series data library were used to illustrate the established theoretical results.

1.1 INTRODUCTION
The multiplicative time series model is usually appropriate when the

variation in the seasonal pattern, or the variation around the trend-cycle,

appears to be proportional to the level of the time series. With economic

time series, multiplicative models are common.

COMPONENTS OF TIME SERIES

The components of a time series are: Trend, Seasonal, Cyclical and Irregular. These are used in the models:

Additive model: Xt = Tt + St + Ct + et (1.5)

Multiplicative model: Xt = Tt · St · Ct · et (1.6)

Pseudo-additive/Mixed model: Xt = Tt · St · Ct + et (1.7)

where Xt is the observed time series for t = 1, 2, . . ., n, Tt is the trend, St the seasonal component, Ct the cyclical component and et the irregular/residual component.

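The multiplicative decomposition implied by model (1.6) can be sketched with the classical ratio-to-moving-average approach; the quarterly series, the period and the function name below are hypothetical illustrations, not data from the study.

```python
def centered_ma(x, period):
    """Centered moving average; for an even period, average two adjacent MAs."""
    n, half = len(x), period // 2
    out = [None] * n
    for t in range(half, n - half):
        if period % 2:
            out[t] = sum(x[t - half:t + half + 1]) / period
        else:
            a = sum(x[t - half:t + half]) / period
            b = sum(x[t - half + 1:t + half + 1]) / period
            out[t] = (a + b) / 2
    return out

# Under the multiplicative model, X_t / MA_t estimates S_t * e_t (trend removed).
x = [10.0, 14.0, 8.0, 12.0, 11.0, 15.4, 8.8, 13.2]   # hypothetical quarterly series
ma = centered_ma(x, 4)
ratios = [xi / mi for xi, mi in zip(x, ma) if mi is not None]
```

Dividing out the centered moving average is what makes the irregular component appear as a multiplicative factor around 1, which is the quantity whose transformation this study examines.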
DATA TRANSFORMATION
Transformation is a mathematical operation that changes the measurement scale of a variable. Generally, there are a number of reasons for data transformation, such as easy visualization and variance stabilization.

Three major reasons for data transformation in time series are:

(i) Variance stabilization

(ii) Ensuring normally distributed data

(iii) Additivity of the seasonal effect

The most popular and common data transformation technique is the power transformation, such as:

log_e Xt, √Xt, 1/Xt, 1/√Xt, Xt², and 1/Xt².

The logarithmic transformation converts the multiplicative model to an additive model:

Yt = log_e Xt = log_e Mt + log_e St + log_e et = Mt* + St* + et* (1.8)

while the other listed transformations leave the model still multiplicative.

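The additivity in (1.8) can be verified numerically; the component values below are hypothetical.

```python
import math

# Hypothetical multiplicative components at one time point t
M, S, e = 120.0, 1.1, 0.98          # M_t (trend-cycle), S_t, e_t
X = M * S * e                        # X_t = M_t * S_t * e_t
Y = math.log(X)                      # Y_t = log_e X_t
additive = math.log(M) + math.log(S) + math.log(e)   # M_t* + S_t* + e_t*
# Y equals the sum of the logged components, i.e. the model has become additive.
```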
Table 1: Transformations of the purely multiplicative model

Yt        Mt*        St*        et*        Model for Yt     Assumption on St*        Assumption on et*
log_e Xt  log_e Mt   log_e St   log_e et   Additive         Σ(j=1..s) Sj* = 0        et* ~ N(0, σ1²)
√Xt       √Mt        √St        √et        Multiplicative   Σ(j=1..s) Sj* = s        et* ~ N(1, σ1²)
1/Xt      1/Mt       1/St       1/et       Multiplicative   Σ(j=1..s) Sj* = s        et* ~ N(1, σ1²)
Xt²       Mt²        St²        et²        Multiplicative   Σ(j=1..s) Sj* = s        et* ~ N(1, σ1²)
1/√Xt     1/√Mt      1/√St      1/√et      Multiplicative   Σ(j=1..s) Sj* = s        et* ~ N(1, σ1²)
1/Xt²     1/Mt²      1/St²      1/et²      Multiplicative   Σ(j=1..s) Sj* = s        et* ~ N(1, σ1²)
METHODS OF DATA TRANSFORMATION
There are two major methods of data transformation, namely the Bartlett and Box-Cox methods.

However, for simplicity of application we shall adopt Bartlett's method as applied by Akpanta and Iwueze (2009).

Table 2: Bartlett's Transformation for some values of β

S/N   β      Required Transformation
1     0      No transformation
2     1/2    √Xt
3     1      log_e Xt
4     3/2    1/√Xt
5     2      1/Xt
6     3      1/Xt²
7     -1     Xt²

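Bartlett's method as applied by Akpanta and Iwueze (2009) chooses the transformation from the slope β of the regression of log(group standard deviation) on log(group mean). A minimal sketch, with a synthetic example in which the standard deviation is proportional to mean^(3/2), so that β comes out at 3/2 and Table 2 points to the 1/√Xt transformation:

```python
import math

def bartlett_beta(groups):
    """Slope of log(group std dev) on log(group mean); beta indexes Table 2."""
    xs, ys = [], []
    for g in groups:
        n = len(g)
        m = sum(g) / n
        s = math.sqrt(sum((v - m) ** 2 for v in g) / (n - 1))
        xs.append(math.log(m))
        ys.append(math.log(s))
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Synthetic groups whose sample std dev is proportional to mean**1.5
groups = [[m - m ** 1.5, m + m ** 1.5] for m in (2.0, 4.0, 8.0)]
beta = bartlett_beta(groups)   # ~ 1.5, i.e. the 1/sqrt(X_t) row of Table 2
```

In practice the groups would be the periods (e.g. years) of the Buys-Ballot table, and the estimated β is matched to the nearest entry of Table 2.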
1.2 STATEMENT OF THE PROBLEM
For every transformation technique there is a corresponding error term whose distribution has been derived by researchers in the past. A critical look at the literature reveals that the distribution of the error term under the inverse square root transformation has not yet been established, despite the efforts of researchers in the field. This study sets out to determine the condition for a successful inverse square root transformation of the error component of the multiplicative time series model and to derive its probability density function.

1.3 Aim and Objectives of the Study
The purpose of this study is to investigate the effect of inverse-square-
root transformation on the error component of the multiplicative time
series model with a view to establishing the condition for the
successful application of inverse-square-root transformation.
The objectives are:
(i) To find the probability density function of the inverse-square-root
transformed error component, of the multiplicative time series model.
(ii) To establish the theoretical expressions for the mean and variance of
the inverse-square-root transformed error component.
(iii) To establish the relationship between the basic parameters of the
untransformed component, and the inverse-square-root transformed
component, with regard to the model assumptions, and identify the
condition for the existence of their relationships.
(iv) To use simulated and real data to validate the established results of
the study.

1.4 JUSTIFICATION OF THE STUDY
The value of β in Table 2 classifies all time series data into mutually exclusive groups, so that a given series can be appropriately transformed by only one of the six transformations listed in Table 2.

Despite the fact that Iwueze (2007), Otuonye (2011), Nwosu (2013), Ohakwe et al. (2013) and Gabriel et al. (2014) carried out similar studies with respect to the logarithm, square root, inverse, square and inverse square transformations respectively, this work on the inverse square root transformation is still very necessary, since the results established for the five transformations listed above cannot be applied in the analysis of time series data requiring the inverse square root transformation.

1.5 INVERSE-SQUARE-ROOT TRANSFORMATION
When β = 3/2 in Table 2, we apply the inverse-square-root transformation to model (1.2), which now becomes

Yt = 1/√Xt = (1/√Mt)(1/√St)(1/√et) = Mt* St* et* (1.16)

where Mt* = 1/√Mt, St* = 1/√St and et* = 1/√et, et > 0.

Since et does not admit negative or zero values, the use of the left-truncated normal distribution as the probability density function (p.d.f.) of et shall be explored.

In this study we find the distribution of y = et* = 1/√et and then answer the following questions:

(i) Is et* ~ N(1, σ1²)?

(ii) What is the relationship between σ1² and σ²?
LITERATURE REVIEW

In the study, related works by researchers such as Bartlett (1947), Tukey (1957), Box and Cox (1964), Winer (1968), Montgomery (1991), Sakia (1992), Ruppert (1999), Osborne (2002), Iwueze et al. (2008), Akpanta and Iwueze (2009), Fink (2009), Osborne (2010), Vidakovic (2012), Otuonye et al. (2012), Nwosu et al. (2013), Ohakwe et al. (2013) and Gabriel et al. (2014) were reviewed.

From the literature reviewed, it is clear that only the effects of the logarithm, square-root, inverse, square and inverse-square transformations on the distribution of the error component of the multiplicative time series model have been exhaustively studied, leaving out the inverse-square-root transformation.

Consequently, the summary of results for the five works reviewed and the statistical properties of the various transformations are shown in Tables 2 and 3 respectively.
Figure 2.1: Components of the Multiplicative Time Series Model studied under the six most common transformations: Logarithmic, Square Root, Inverse, Inverse Square Root, Square and Inverse Square.

Key: Stud = studied; Und = under study.
Table 1: Summary of Results for the Five Works Reviewed

β (Bartlett)        et*         Source                   Distribution of et*   Condition for successful transformation   Relationship between σ1 and σ
1 (logarithm)       log_e et    Iwueze (2007)            et* ~ N(0, σ1²)       σ < 0.10                                  σ1 = σ
1/2 (square root)   √et         Otuonye et al. (2011)    et* ~ N(1, σ1²)       σ ≤ 0.30                                  σ1 = σ/2
2 (inverse)         1/et        Nwosu et al. (2012)      et* ~ N(1, σ1²)       σ ≤ 0.10                                  σ1 = σ
-1 (square)         et²         Ohakwe et al. (2013)     et* ~ N(1, σ1²)       σ ≤ 0.027                                 σ1 = 2σ
3 (inverse square)  1/et²       Ibeh et al. (2014)       et* ~ N(1, σ1²)       σ ≤ 0.070                                 σ1 = 2σ
Table 2: Summary of Statistical Properties of the Various Transformations Reviewed

Transformation (Source)                Probability Density Function f(x)                                  E(X)    Var(X)
Logarithmic, Iwueze (2007)             f(x) = e^x exp(-(e^x - 1)²/(2σ²)) / (σ√(2π)(1 - Φ(-1/σ))), x real  ≈ 0     ≈ σ²
Square root, Otuonye et al. (2011)     f(x) = 2x exp(-(x² - 1)²/(2σ²)) / (σ√(2π)(1 - Φ(-1/σ))), x > 0     ≈ 1     ≈ σ²/4
Inverse, Nwosu et al. (2012)           f(x) = x⁻² exp(-(1/x - 1)²/(2σ²)) / (σ√(2π)(1 - Φ(-1/σ))), x > 0   ≈ 1     ≈ σ²
Square, Ohakwe et al. (2013)           f(x) = exp(-(√x - 1)²/(2σ²)) / (2√x σ√(2π)(1 - Φ(-1/σ))), x > 0    ≈ 1     ≈ 4σ²
Inverse square, Ibeh et al. (2014)     f(x) = exp(-(1/√x - 1)²/(2σ²)) / (2x^(3/2) σ√(2π)(1 - Φ(-1/σ))), x > 0  ≈ 1  ≈ 4σ²

(Each f(x) follows from the left-truncated N(1, σ²) law of et by change of variable; the exact E(X) and Var(X) expressions, which involve Φ(-1/σ) and e^(-1/(2σ²)) correction terms, are given in the works cited, and the approximations above hold within each work's region of successful transformation.)
3.0 METHODOLOGY
To achieve the stated objectives, the activities listed below were carried out.

Let X = et and Y = et* = 1/√et.

i. Obtaining the probability density function of y = et* = 1/√et, denoted as g(y).

ii. Plotting the probability density curves of g(y) and f*(x), where f*(x) is the probability density function of the left-truncated normal distribution.

iii. Obtaining the region where g(y) satisfies the normality (bell-shaped) conditions.
iv. Obtaining the interval with respect to σ where mean ≈ median ≈ mode ≈ 1.

v. Using the Anderson-Darling test statistic for normality to confirm the normality of the simulated error series et and of the inverse-square-root transformed error term for some values of σ.

vi. Obtaining the functional expressions for the mean and variance of y = et* = 1/√et, to validate some of the results obtained using simulated data.
3.1 PROBABILITY DENSITY FUNCTION OF g(y)
The pdf of the error term after the inverse square root transformation, g(y), was obtained using the change-of-variable technique.
According to Freund and Walpole (1986), the pdf of y, denoted g(y), is given by

g(y) = f*(x) |dx/dy| (1.17)

where f*(x) is given in (1.5) and, since x = 1/y², |dx/dy| = 2/y³.

Applying the change-of-variable technique, g(y) is given as

g(y) = [2 / (y³ σ√(2π) (1 - Φ(-1/σ)))] exp( -(1/y² - 1)² / (2σ²) ), 0 < y < ∞
g(y) = 0, otherwise. (1.18)

g(y) given in (1.18) is a proper pdf, since it was proved that ∫₀^∞ g(y) dy = 1.
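The claim that (1.18) integrates to 1 can be checked numerically; a minimal sketch using the form of g(y) above, with phi denoting the standard normal cdf (the integration limits and grid size are illustrative choices for σ = 0.1, where essentially all the mass lies inside [0.5, 2.0]):

```python
import math

def phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def g(y, sigma):
    """pdf (1.18) of y = 1/sqrt(e_t), e_t left-truncated N(1, sigma^2)."""
    if y <= 0:
        return 0.0
    norm = sigma * math.sqrt(2.0 * math.pi) * (1.0 - phi(-1.0 / sigma))
    return (2.0 / (y ** 3 * norm)) * math.exp(-(1.0 / y ** 2 - 1.0) ** 2 / (2.0 * sigma ** 2))

# Trapezoidal rule: the total probability should come out at ~1
sigma, a, b, n = 0.1, 0.5, 2.0, 20000
h = (b - a) / n
area = 0.5 * (g(a, sigma) + g(b, sigma)) * h + sum(g(a + i * h, sigma) for i in range(1, n)) * h
```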
PLOTS OF THE PROBABILITY DENSITY CURVES
Using the pdfs given in (1.5) and (1.18), curve shapes for f*(x) and g(y) were plotted for values of σ in (0.0, 0.5].
However, for want of space we show only eight of them [see Figures 1 through 8].

Figure 1. Curve Shapes of g(y) and f*(x) for σ = 0.06
Figure 2. Curve Shapes of g(y) and f*(x) for σ = 0.085

Figure 3. Curve Shapes of g(y) and f*(x) for σ = 0.095
Figure 4. Curve Shapes of g(y) and f*(x) for σ = 0.15

Figure 5. Curve Shapes of g(y) and f*(x) for σ = 0.25
Figure 6. Curve Shapes of g(y) and f*(x) for σ = 0.3

Figure 7. Curve Shapes of g(y) and f*(x) for σ = 0.4
Figure 8. Curve Shapes of g(y) and f*(x) for σ = 0.5
3.2 NORMALITY REGION FOR g(y)
From Figures 1 to 8, it is clear that the curve g(y) has one maximum point ymax (the mode) and maximum value g(ymax) for all values of σ.
To obtain the values of σ that satisfy the symmetric (bell-shaped) condition Mean ≈ Median ≈ Mode, we invoked Rolle's Theorem and proceeded to obtain the maximum point (mode) for a given value of σ.

ROLLE'S THEOREM
Rolle's Theorem (Smith and Minton (2008)) states that if f(x) is continuous on the interval [a, b] and differentiable on the interval (a, b) with f(a) = f(b), then there exists a number c in (a, b) such that f'(c) = 0.
To obtain the values of σ that satisfy the symmetric (bell-shaped) conditions, we invoked the Theorem and proceeded to obtain the maximum point for a given value of σ, which is the solution of the equation

dg(y)/dy = 0 (1.19)

Differentiating (1.18) with respect to y and equating to zero yields

-3/y + 2(1 - y²)/(σ²y⁵) = 0, i.e. 3σ²y⁴ + 2y² - 2 = 0 (1.20)

which, solved as a quadratic in y², yields

y² = [-1 + √(1 + 6σ²)] / (3σ²) (1.21)

Hence the maximum point of g(y), denoted ymax, is given by

ymax = √( [-1 + √(1 + 6σ²)] / (3σ²) ) (1.22)

The numerical computations of ymax for various values of σ are given in Table 3, where the bell-shaped condition implies ymax ≈ 1.
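The positive root of the quartic (1.20) has a closed form, so the mode can be computed directly for any σ; a small sketch (the check below simply confirms that the returned value satisfies the quartic):

```python
import math

def y_max(sigma):
    """Mode of g(y): the positive root of 3*sigma^2*y**4 + 2*y**2 - 2 = 0."""
    s2 = sigma * sigma
    return math.sqrt((-1.0 + math.sqrt(1.0 + 6.0 * s2)) / (3.0 * s2))

y = y_max(0.15)
residual = 3 * 0.15 ** 2 * y ** 4 + 2 * y ** 2 - 2   # should be ~0 at the mode
```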
Table 3: Computation of ymax and 1 - ymax for various values of σ

σ        ymax            1 - ymax         σ        ymax            1 - ymax
0.010 0.99992502 0.00007500 0.155 0.94470721 0.05529300
0.015 0.99970031 0.00030000 0.160 0.94163225 0.05836800
0.020 0.99932659 0.00067300 0.165 0.93852446 0.06147600
0.025 0.99880501 0.00119500 0.170 0.93538739 0.06461300
0.030 0.99813720 0.00186300 0.175 0.93222440 0.06777600
0.035 0.99732519 0.00267500 0.180 0.92903869 0.07096100
0.040 0.99637147 0.00362900 0.185 0.92583333 0.07416700
0.045 0.99527886 0.00472100 0.190 0.92261120 0.07738900
0.050 0.99405059 0.00594900 0.195 0.91937505 0.08062500
0.055 0.99269018 0.00731000 0.200 0.91612748 0.08387300
0.060 0.99120149 0.00879900 0.205 0.91287093 0.08712900
0.065 0.98958860 0.01041100 0.210 0.90960772 0.09039200
0.070 0.98785584 0.01214400 0.215 0.90634001 0.09366000
0.075 0.98600775 0.01399200 0.220 0.90306986 0.09693000
0.080 0.98404899 0.01595100 0.225 0.89979918 0.10020100
0.085 0.98198438 0.01801600 0.230 0.89652976 0.10347000
0.090 0.97981881 0.02018100 0.235 0.89326328 0.10673700
0.095 0.97755725 0.02244300 0.240 0.89000132 0.10999900
0.100 0.97520469 0.02479500 0.245 0.88674534 0.11325500
0.105 0.97276613 0.02723400 0.250 0.88349669 0.11650300
0.110 0.97024653 0.02975300 0.255 0.88025665 0.11974300
0.115 0.96765082 0.03234900 0.260 0.87702640 0.12297400
0.120 0.96498387 0.03501600 0.265 0.87380702 0.12619300
0.125 0.96225045 0.03775000 0.270 0.87059952 0.12940000
0.130 0.95945523 0.04054500 0.275 0.86740484 0.13259500
0.135 0.95660279 0.04339700 0.280 0.86422383 0.13577600
0.140 0.95369754 0.04630200 0.285 0.86105729 0.13894300
0.145 0.95074378 0.04925600 0.290 0.85790594 0.14209400
0.150 0.94774567 0.05225400 0.295 0.85477043 0.14523000
0.300 0.85165139 0.14834900
The summary of the values of σ for which ymax ≈ 1.0, with respect to various numbers of decimal places, is given in Table 4.

Table 4: Conditions for ymax ≈ 1.0

Decimal places   Condition
3                0 < σ < 0.015
2                0 < σ < 0.045
1                0 < σ < 0.145

Thus g(y) is symmetrical about 1, with Mode ≈ Mean ≈ 1 correct to two decimal places when 0 < σ < 0.045, and correct to one decimal place when 0 < σ < 0.145. This finding has narrowed down the viable region to 0 < σ < 0.145, since we adopt the correctness of our study to one decimal place.
Furthermore, in order to ascertain the interval (0, 0.15], we make a 3-D plot of the untransformed (f(x)) and the inverse-square-root transformed (f(y)) distributions for values of σ = 0.01, 0.02, 0.03, . . ., 0.48, 0.49, 0.50 and for fixed values of X = Y = 0.0, 0.1, 0.2, . . ., 1.1. The plot is given in Figure 9. The aim of the plot is to determine the point at which normality exists among the variables. Based on the property of the Gaussian distribution, the point at which a bell shape is spotted on the graph is the point at which normality exists. Based on the plot, normality was found to exist for σ ≤ 0.15.
3.3 USE OF SIMULATED ERROR VALUES
Here we applied a normality test (the Anderson-Darling test) to artificial data generated from the N(1, σ²) density function for the variable et, and subsequently used the simulated series to generate its inverse-square-root transform, et* = 1/√et, for values of σ from 0.06 to 0.2 (Tables 4 through 8).
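The simulation step can be sketched as follows; this is an illustrative simulation (sample size and seed are arbitrary choices), checking that the transformed series has mean near 1 and standard deviation near σ/2:

```python
import math
import random
import statistics

random.seed(1)                       # arbitrary seed for reproducibility
sigma = 0.06
e = [random.gauss(1.0, sigma) for _ in range(20000)]
e = [v for v in e if v > 0]          # left truncation at 0 (negligible for small sigma)
y = [1.0 / math.sqrt(v) for v in e]  # inverse-square-root transformed errors

m_y = statistics.fmean(y)            # near 1
s_y = statistics.stdev(y)            # near sigma / 2
```

In the study itself each replicate was additionally submitted to the Anderson-Darling normality test; here only the first two moments are inspected.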
Figure 9: 3-D Plots of f(x) and f(y) for fixed values of σ, X and Y

Table 4: Simulation Results when σ = 0.06
Left panel: et ~ N(1, σ²), σ = 0.06.  Right panel: et* = 1/√et.

Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value | Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value
1 0.06 0.0036 0.9927 -0.01 0.16 .235 .788 1.0013 0.0303 0.000918 1.0037 0.32 0.57 .206 .867
1 0.06 0.0036 1.0009 0.01 -0.05 .183 .908 1.0013 0.0302 0.000914 0.9995 0.24 0.01 .298 .580
1 0.06 0.0036 1.0002 0 0.2 .195 .889 1.0013 0.0303 0.000916 0.9999 0.29 0.37 .275 .654

1 0.06 0.0036 1.0029 0 0.22 .234 .790 1.0013 0.0303 0.000917 0.9985 0.3 0.32 .334 .505

1 0.06 0.0036 1.0037 0 -0.03 .178 .918 1.0013 0.0302 0.000915 0.9982 0.26 0.02 .312 .546

1 0.06 0.0036 1.0045 0.1 0.05 .435 .294 1.0013 0.0301 0.000908 0.9978 0.18 0.24 .364 .433

1 0.06 0.0036 1.0037 0 -0.03 .178 .918 1.0013 0.0302 0.000915 0.9982 0.26 0.02 .312 .546

1 0.06 0.0036 1.0013 0.07 -0.04 .137 .976 1.0013 0.0302 0.00091 0.9993 0.19 -0.04 .213 .851

1 0.06 0.0036 0.9941 0.05 0.1 .196 .888 1.0013 0.0302 0.000911 1.003 0.22 0.04 .302 .569

1 0.06 0.0036 1.0017 -0.1 0.1 .250 .739 1.0014 0.0304 0.000924 0.9991 0.37 0.2 .453 .266

1 0.06 0.0036 1.0004 0.01 0.06 .200 .880 1.0013 0.0302 0.000915 0.9998 0.26 0.11 .314 .540

1 0.06 0.0036 1.0045 0.1 0.05 .435 .294 1.0013 0.0301 0.000908 0.9978 0.18 0.24 .364 .433

1 0.06 0.0036 0.9991 -0.01 -0.05 .183 .908 1.0013 0.0303 0.000916 1.0005 0.28 0.14 .214 .846

1 0.06 0.0036 0.9983 0.1 0.1 .250 .739 1.0013 0.0301 0.000908 1.0009 0.19 0.27 .206 .866

1 0.06 0.0036 1.001 0.18 0.05 .209 .859 1.0013 0.03 0.000901 0.9995 0.08 -0.08 .241 .767

1 0.06 0.0036 1.0028 0.03 0 .195 .889 1.0013 0.0302 0.000913 0.9986 0.24 0.1 .284 .625

1 0.06 0.0036 1.0031 0.05 -0.12 .141 .972 1.0013 0.0302 0.000911 0.9985 0.2 -0.07 .208 .862

1 0.06 0.0036 0.9975 0.27 0.18 .310 .550 1.0013 0.0299 0.000894 1.0012 0 0.1 .232 .795

1 0.06 0.0036 1.0006 -0.14 -0.47 .262 .699 1.0014 0.0304 0.000924 0.9997 0.35 -0.29 .385 .387

1 0.06 0.0036 0.9983 0.03 -0.04 .182 .911 1.0013 0.0302 0.000913 1.0009 0.23 -0.06 .318 .531

1 0.06 0.0036 0.9958 0.02 0.27 .150 .962 1.0013 0.0303 0.000916 1.0021 0.29 0.41 .218 .835

1 0.06 0.0036 0.9938 0.25 0.04 .290 .606 1.0013 0.0299 0.000896 1.0031 0.01 -0.01 .185 .906

1 0.06 0.0036 0.9931 0.16 0.04 .450 .270 1.0013 0.03 0.000903 1.0035 0.11 0.15 .336 .503

1 0.06 0.0036 0.995 0.09 -0.1 .199 .882 1.0013 0.0301 0.000907 1.0025 0.15 -0.16 .390 .376

1 0.06 0.0036 0.9987 0.01 -0.13 .216 .841 1.0013 0.0302 0.000914 1.0006 0.24 -0.1 .315 .538

1 0.06 0.0036 0.9942 0.19 -0.14 .311 .546 1.0013 0.03 0.000899 1.0029 0.05 -0.2 .165 .940

Table 5: Simulation Results when σ = 0.08
Left panel: et ~ N(1, σ²), σ = 0.08.  Right panel: et* = 1/√et.

Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value | Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value

1 0.08 0.0064 0.9903 -0.01 0.16 .235 .788 1.0024 0.0407 0.00166 1.0049 0.43 0.81 .239 .773

1 0.08 0.0064 1.0013 0.01 -0.05 .183 .908 1.0024 0.0406 0.00165 0.9994 0.33 0.08 .369 .421

1 0.08 0.0064 1.0003 0 0.2 .195 .889 1.0024 0.0407 0.00165 0.9999 0.4 0.5 .340 .490

1 0.08 0.0064 1.0039 0 0.22 .234 .790 1.0024 0.0407 0.00165 0.9981 0.4 0.42 .407 .343

1 0.08 0.0064 1.0049 0 -0.03 .178 .918 1.0024 0.0406 0.00165 0.9975 0.34 0.09 .393 .370

1 0.08 0.0064 1.0059 0.1 0.05 .435 .294 1.0024 0.0404 0.00163 0.997 0.29 0.37 .382 .392

1 0.08 0.0064 1.0049 0 -0.03 .178 .918 1.0024 0.0406 0.00165 0.9975 0.34 0.09 .393 .370

1 0.08 0.0064 1.0018 0.07 -0.04 .137 .976 1.0024 0.0404 0.00164 0.9991 0.28 0.01 .275 .655

1 0.08 0.0064 0.9921 0.05 0.1 .196 .888 1.0024 0.0405 0.00164 1.004 0.3 0.08 .373 .412

1 0.08 0.0064 1.0023 -0.1 0.1 .250 .739 1.0024 0.0409 0.00167 0.9989 0.46 0.29 .558 .145

1 0.08 0.0064 1.0005 0.01 0.06 .200 .880 1.0024 0.0406 0.00165 0.9997 0.35 0.18 .394 .369

1 0.08 0.0064 1.0059 0.1 0.05 .435 .294 1.0024 0.0404 0.00163 0.997 0.29 0.37 .382 .392

1 0.08 0.0064 0.9987 -0.01 -0.05 .183 .908 1.0024 0.0406 0.00165 1.0006 0.37 0.27 .260 .704

1 0.08 0.0064 0.9977 0.1 0.1 .250 .739 1.0024 0.0404 0.00163 1.0012 0.29 0.41 .229 .805

1 0.08 0.0064 1.0013 0.18 0.05 .209 .859 1.0024 0.0402 0.00161 0.9993 0.16 -0.07 .290 .605

1 0.08 0.0064 1.0038 0.03 0 .195 .889 1.0024 0.0405 0.00164 0.9981 0.33 0.19 .354 .456

1 0.08 0.0064 1.0041 0.05 -0.12 .141 .972 1.0024 0.0405 0.00164 0.9979 0.29 -0.01 .267 .680

1 0.08 0.0064 0.9967 0.27 0.18 .310 .550 1.0024 0.04 0.0016 1.0017 0.1 0.13 .246 .753

1 0.08 0.0064 1.0009 -0.14 -0.47 .262 .699 1.0024 0.0409 0.00167 0.9996 0.42 -0.2 .460 .256

1 0.08 0.0064 0.9977 0.03 -0.04 .182 .911 1.0024 0.0405 0.00164 1.0012 0.31 -0.02 .400 .357

1 0.08 0.0064 0.9945 0.02 0.27 .150 .962 1.0024 0.0406 0.00165 1.0028 0.39 0.55 .282 .631

1 0.08 0.0064 0.9918 0.25 0.04 .290 .606 1.0024 0.04 0.0016 1.0041 0.1 0.01 .192 .894

1 0.08 0.0064 0.9907 0.16 0.04 .450 .270 1.0024 0.0402 0.00162 1.0047 0.21 0.25 .340 .492

1 0.08 0.0064 0.9934 0.09 -0.1 .306 .559 1.0024 0.0404 0.00163 1.0033 0.23 -0.14 .458 .259

1 0.08 0.0064 0.9983 0.01 -0.13 .199 .882 1.0024 0.0406 0.00165 1.0009 0.32 -0.05 .395 .366

Table 6: Simulation Results when σ = 0.1
Left panel: et ~ N(1, σ²), σ = 0.1.  Right panel: et* = 1/√et.

Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value | Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value
1 0.1 0.01 0.9878 -0.01 0.16 .235 .788 1.0038 0.0514 0.00265 1.0061 0.56 1.13 .298 .582
1 0.1 0.01 1.0016 0.01 -0.05 .183 .908 1.0038 0.0511 0.00262 0.9992 0.42 0.19 .457 .260
1 0.1 0.01 1.0003 0 0.2 .195 .889 1.0038 0.0513 0.00263 0.9998 0.51 0.68 .428 .306
1 0.1 0.01 1.0049 0 0.22 .234 .790 1.0038 0.0513 0.00264 0.9976 0.5 0.57 .502 .201
1 0.1 0.01 1.0062 0 -0.03 .178 .918 1.0038 0.0512 0.00262 0.9969 0.43 0.19 .495 .211
1 0.1 0.01 1.0074 0.1 0.05 .435 .294 1.0038 0.0509 0.00259 0.9963 0.39 0.54 .424 .313
1 0.1 0.01 1.0062 0 -0.03 .178 .918 1.0038 0.0512 0.00262 0.9969 0.43 0.19 .495 .211
1 0.1 0.01 1.0022 0.07 -0.04 .137 .976 1.0038 0.0509 0.00259 0.9989 0.37 0.09 .357 .450
1 0.1 0.01 0.9902 0.05 0.1 .196 .888 1.0038 0.051 0.0026 1.005 0.39 0.15 .464 .251
1 0.1 0.01 1.0029 -0.1 0.1 .250 .739 1.0038 0.0516 0.00267 0.9986 0.56 0.41 .685 .071
1 0.1 0.01 1.0007 0.01 0.06 .200 .880 1.0038 0.0512 0.00262 0.9997 0.44 0.28 .495 .210
1 0.1 0.01 1.0074 0.1 0.05 .435 .294 1.0038 0.0509 0.00259 0.9963 0.39 0.54 .424 .313
1 0.1 0.01 0.9984 -0.01 -0.05 .183 .908 1.0038 0.0513 0.00263 1.0008 0.47 0.45 .326 .516
1 0.1 0.01 0.9971 0.1 0.1 .250 .739 1.0038 0.0509 0.00259 1.0014 0.4 0.59 .272 .664
1 0.1 0.01 1.0016 0.18 0.05 .209 .859 1.0037 0.0505 0.00255 0.9992 0.25 -0.05 .359 .445
1 0.1 0.01 1.0047 0.03 0 .195 .889 1.0038 0.0511 0.00261 0.9977 0.43 0.32 .446 .277
1 0.1 0.01 1.0052 0.05 -0.12 .141 .972 1.0038 0.051 0.0026 0.9974 0.37 0.08 .346 .477
1 0.1 0.01 0.9959 0.27 0.18 .310 .550 1.0037 0.0502 0.00252 1.0021 0.19 0.19 .278 .642
1 0.1 0.01 1.0011 -0.14 -0.47 .262 .699 1.0038 0.0516 0.00266 0.9995 0.5 -0.08 .554 .150
1 0.1 0.01 0.9971 0.03 -0.04 .182 .911 1.0038 0.0511 0.00261 1.0014 0.4 0.05 .499 .205
1 0.1 0.01 0.9931 0.02 0.27 .150 .962 1.0038 0.0513 0.00263 1.0035 0.5 0.74 .368 .424
1 0.1 0.01 0.9897 0.25 0.04 .290 .606 1.0037 0.0503 0.00253 1.0052 0.19 0.06 .221 .827
1 0.1 0.01 0.9884 0.16 0.04 .450 .270 1.0037 0.0506 0.00256 1.0058 0.31 0.39 .366 .428
1 0.1 0.01 0.9917 0.09 -0.1 .306 .559 1.0038 0.0508 0.00258 1.0042 0.32 -0.1 .547 .156
1 0.1 0.01 0.9979 0.01 -0.13 .199 .882 1.0038 0.0511 0.00261 1.0011 0.41 0.02 .497 .207

Table 7: Simulation Results when σ = 0.15
Left panel: et ~ N(1, σ²), σ = 0.15.  Right panel: et* = 1/√et.

Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value | Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value
1 0.15 0.0225 0.9818 -0.01 0.16 .235 .788 1.0089 0.0803 0.00645 1.0092 0.94 2.44 .582 .126
1 0.15 0.0225 1.0024 0.01 -0.05 .183 .908 1.0088 0.0791 0.00626 0.9988 0.67 0.65 .761 .046
1 0.15 0.0225 1.0005 0 0.2 .195 .889 1.0088 0.0798 0.00637 0.9997 0.81 1.4 .756 .047
1 0.15 0.0225 1.0073 0 0.22 .234 .790 1.0088 0.0798 0.00636 0.9964 0.79 1.15 .857 .027
1 0.15 0.0225 1.0093 0 -0.03 .178 .918 1.0088 0.0792 0.00628 0.9954 0.68 0.61 .842 .029
1 0.15 0.0225 1.0111 0.1 0.05 .435 .294 1.0087 0.0788 0.0062 0.9945 0.69 1.16 .646 .089
1 0.15 0.0225 1.0093 0 -0.03 .178 .918 1.0088 0.0792 0.00628 0.9954 0.68 0.61 .842 .029
1 0.15 0.0225 1.0034 0.07 -0.04 .137 .976 1.0087 0.0786 0.00618 0.9983 0.6 0.42 .656 .085
1 0.15 0.0225 0.9853 0.05 0.1 .196 .888 1.0087 0.0788 0.00621 1.0075 0.63 0.47 .785 .040
1 0.15 0.0225 1.0043 -0.1 0.1 .250 .739 1.0089 0.0804 0.00646 0.9979 0.82 0.9 1.109 .005
1 0.15 0.0225 1.001 0.01 0.06 .200 .880 1.0088 0.0793 0.00628 0.9995 0.69 0.67 .860 .026
1 0.15 0.0225 1.0111 0.1 0.05 .435 .294 1.0087 0.0788 0.0062 0.9945 0.69 1.16 .646 .089
1 0.15 0.0225 0.9976 -0.01 -0.05 .183 .908 1.0088 0.0796 0.00633 1.0012 0.75 1.11 .596 .119
1 0.15 0.0225 0.9957 0.1 0.1 .250 .739 1.0087 0.0788 0.00621 1.0022 0.71 1.31 .486 .221
1 0.15 0.0225 1.0025 0.18 0.05 .209 .859 1.0086 0.0775 0.00601 0.9988 0.46 0.13 .620 .104
1 0.15 0.0225 1.007 0.03 0 .195 .889 1.0088 0.0791 0.00626 0.9965 0.69 0.87 .779 .042
1 0.15 0.0225 1.0077 0.05 -0.12 .141 .972 1.0087 0.0787 0.00619 0.9962 0.61 0.44 .635 .095
1 0.15 0.0225 0.9938 0.27 0.18 .310 .550 1.0085 0.077 0.00593 1.0031 0.45 0.53 .450 .271
1 0.15 0.0225 1.0016 -0.14 -0.47 .262 .699 1.0089 0.0799 0.00639 0.9992 0.7 0.32 .880 .023
1 0.15 0.0225 0.9957 0.03 -0.04 .182 .911 1.0087 0.0789 0.00622 1.0022 0.62 0.36 .838 .030
1 0.15 0.0225 0.9896 0.02 0.27 .150 .962 1.0088 0.0798 0.00636 1.0052 0.82 1.54 .701 .065
1 0.15 0.0225 0.9846 0.25 0.04 .290 .606 1.0085 0.077 0.00593 1.0078 0.43 0.29 .398 .361
1 0.15 0.0225 0.9826 0.16 0.04 .450 .270 1.0086 0.0781 0.00609 1.0088 0.6 0.97 .545 .157
1 0.15 0.0225 0.9876 0.09 -0.1 .306 .559 1.0087 0.0782 0.00611 1.0063 0.52 0.1 .868 .025
1 0.15 0.0225 0.9968 0.01 -0.13 .199 .882 1.0088 0.079 0.00624 1.0016 0.62 0.29 .860 .026

Table 8: Simulation Results when σ = 0.2
Left panel: et ~ N(1, σ²), σ = 0.2.  Right panel: et* = 1/√et.

Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value | Mean  StD  Variance  Median  Skewness  Kurtosis  AD  p-value
1 0.2 0.04 0.9757 -0.01 0.16 .235 .788 1.0167 0.1147 0.0132 1.0124 1.51 5.22 1.176 <0.05
1 0.2 0.04 1.0032 0.01 -0.05 .183 .908 1.0162 0.1107 0.0123 0.9984 0.97 1.48 1.220 <0.05
1 0.2 0.04 1.0007 0 0.2 .195 .889 1.0165 0.1127 0.0127 0.9997 1.2 2.76 1.315 <0.05
1 0.2 0.04 1.0097 0 0.22 .234 .790 1.0164 0.1124 0.0126 0.9952 1.14 2.2 1.435 <0.05
1 0.2 0.04 1.0124 0 -0.03 .178 .918 1.0163 0.1109 0.0123 0.9939 0.97 1.4 1.353 <0.05
1 0.2 0.04 1.0148 0.1 0.05 .435 .294 1.0161 0.1105 0.0122 0.9927 1.05 2.24 1.097 .007
1 0.2 0.04 1.0124 0 -0.03 .178 .918 1.0163 0.1109 0.0123 0.9939 0.97 1.4 1.353 <0.05
1 0.2 0.04 1.0045 0.07 -0.04 .137 .976 1.0161 0.1095 0.012 0.9978 0.87 1.01 1.117 .006
1 0.2 0.04 0.9803 0.05 0.1 .196 .888 1.0161 0.11 0.0121 1.01 0.9 1.05 1.276 <0.05
1 0.2 0.04 1.0057 -0.1 0.1 .250 .739 1.0166 0.1133 0.0128 0.9971 1.12 1.7 1.734 <0.05
1 0.2 0.04 1.0013 0.01 0.06 .200 .880 1.0163 0.111 0.0123 0.9994 0.98 1.33 1.418 <0.05
1 0.2 0.04 1.0149 0.1 0.05 .435 .294 1.0161 0.1105 0.0122 0.9927 1.05 2.24 1.097 .007
1 0.2 0.04 0.9968 -0.01 -0.05 .183 .908 1.0164 0.112 0.0125 1.0016 1.11 2.32 1.072 .008
1 0.2 0.04 0.9943 0.1 0.1 .250 .739 1.0162 0.1107 0.0123 1.0029 1.1 2.66 .915 0.019
1 0.2 0.04 1.0033 0.18 0.05 .209 .859 1.0157 0.1072 0.0115 0.9984 0.7 0.49 1.026 0.010
1 0.2 0.04 1.0094 0.03 0 .195 .889 1.0162 0.1109 0.0123 0.9953 1.03 1.97 1.293 <0.05
1 0.2 0.04 1.0103 0.05 -0.12 .141 .972 1.0161 0.1097 0.012 0.9949 0.88 1.1 1.084 0.007
1 0.2 0.04 0.9917 0.27 0.18 .310 .550 1.0156 0.1066 0.0114 1.0042 0.75 1.27 .768 0.045
1 0.2 0.04 1.0021 -0.14 -0.47 .260 .699 1.0165 0.1119 0.0125 0.9989 0.95 0.97 1.371 <0.05
1 0.2 0.04 0.9942 0.03 -0.04 .182 .911 1.0162 0.11 0.0121 1.0029 0.88 0.93 1.331 <0.05
1 0.2 0.04 0.9862 0.02 0.27 .150 .962 1.0165 0.1128 0.0127 1.007 1.24 3.16 1.267 <0.05
1 0.2 0.04 0.9795 0.25 0.04 .290 .606 1.0156 0.1064 0.0113 1.0104 0.69 0.72 .745 0.051
1 0.2 0.04 0.9768 0.16 0.04 .450 .270 1.0159 0.109 0.0119 1.0118 0.96 2.08 .933 .017
1 0.2 0.04 0.9835 0.09 -0.1 .306 .559 1.0159 0.1084 0.0118 1.0084 0.75 0.48 1.348 <0.05
1 0.2 0.04 0.9958 0.01 -0.13 .199 .882 1.0162 0.1101 0.0121 1.0021 0.86 0.73 1.402 <0.05

Using the p-values of the Anderson-Darling test of normality in Tables 4 through 8, we can confidently say that g(y) is normally distributed in the viable region 0 < σ ≤ 0.15. It is also clear that the means of both the transformed and the untransformed series are approximately 1.0 to one decimal place.

3.4: FUNCTIONAL EXPRESSIONS FOR THE MEAN AND VARIANCE OF g(y)
Firstly, the mean, denoted E(y) and defined as

E(y) = ∫₀^∞ y g(y) dy = ∫₀^∞ [2 / (y² σ√(2π)(1 - Φ(-1/σ)))] exp( -(1/y² - 1)² / (2σ²) ) dy,

was established, after a tedious algebraic process, to be

E(y) = 1 + (3σ²/16) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)] + (further terms) (1.23)

where every subsequent term of (1.23) contains the factor e^(-1/(2σ²)).

Similarly, the second crude moment, denoted E(y²) and defined as

E(y²) = ∫₀^∞ y² g(y) dy = ∫₀^∞ [2 / (y σ√(2π)(1 - Φ(-1/σ)))] exp( -(1/y² - 1)² / (2σ²) ) dy,

was also established to be

E(y²) = 1 + (σ²/2) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)] + (further terms) (1.24)

where all the subsequent terms also have the common factor e^(-1/(2σ²)).

However, from the computations given in Table 9, e^(-1/(2σ²)) ≈ 0.0 for 0 < σ ≤ 0.17.
Table 9: Computation of e^(-1/(2σ²))

σ        e^(-1/(2σ²))
0.01 0.0000000
0.02 0.0000000
0.03 0.0000000
0.04 0.0000000
0.05 0.0000000
0.06 0.0000000
0.07 0.0000000
0.08 0.0000000
0.09 0.0000000
0.10 0.0000000
0.11 0.0000000
0.12 0.0000000
0.13 0.0000000
0.14 0.0000000
0.15 0.0000000
0.16 0.0000000
0.17 0.0000000
0.18 0.0000002
0.19 0.0000010
0.20 0.0000037
0.21 0.0000119
0.22 0.0000326
0.23 0.0000785
Considering that our region of successful transformation is 0 < σ ≤ 0.15, and whereas e^(-1/(2σ²)) ≈ 0.0 for 0 < σ ≤ 0.17, every term in (1.23) and (1.24) that carries the factor e^(-1/(2σ²)) vanishes; hence the functional expressions for the first and second crude moments are

E(y) = 1 + (3σ²/16) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)], σ ≤ 0.17 (1.25)

E(y²) = 1 + (σ²/2) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)], σ ≤ 0.17 (1.26)

Hence the variance, σ1², defined as E(y²) - [E(y)]², is given as

Var(y) = σ1² = 1 + (σ²/2) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)] - { 1 + (3σ²/16) [1 + Pr(z < 1/σ)] / [1 - Φ(-1/σ)] }², σ ≤ 0.17 (1.27)
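The closed forms (1.25) through (1.27) can be evaluated directly; a minimal sketch, with phi denoting the standard normal cdf, whose output can be compared with the corresponding rows of Table 10:

```python
import math

def phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def moments(sigma):
    """E(y) and Var(y) from (1.25)-(1.27); intended for 0 < sigma <= 0.17."""
    c = (1.0 + phi(1.0 / sigma)) / (1.0 - phi(-1.0 / sigma))   # -> 2 as sigma -> 0
    ey = 1.0 + (3.0 * sigma ** 2 / 16.0) * c
    ey2 = 1.0 + (sigma ** 2 / 2.0) * c
    return ey, ey2 - ey ** 2

ey, vy = moments(0.10)   # compare with the sigma = 0.10 row of Table 10
```

For small σ the factor c is essentially 2, so E(y) ≈ 1 + 3σ²/8 and Var(y) ≈ σ²/4, which is the variance ratio of about 4 seen in Table 10.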
The computations of the variance of et = x, and of the mean and variance of y = et* = 1/√et using (1.25), (1.26) and (1.27), are shown in Table 10. Also included in Table 10 is the variance ratio Var(x = et) / Var(y = et*).

Table 10: Computation of E(Y), Var(Y) and the variance ratio

σ       e^(-1/(2σ²))   E(Y)      Var(y = et*)   Var(x = et)   Var(x = et)/Var(y = et*)
0.01 0.0000000 1.00004 0.000025 0.000100 4.000225


0.02 0.0000000 1.00015 0.000100 0.000400 4.000900
0.03 0.0000000 1.00034 0.000225 0.000900 4.002026
0.04 0.0000000 1.00060 0.000400 0.001600 4.003603
0.05 0.0000000 1.00094 0.000624 0.002500 4.005633
0.06 0.0000000 1.00135 0.000898 0.003600 4.008116
0.07 0.0000000 1.00184 0.001222 0.004900 4.011055
0.08 0.0000000 1.00240 0.001594 0.006400 4.014452
0.09 0.0000000 1.00304 0.002016 0.008100 4.018308
0.10 0.0000000 1.00375 0.002486 0.010000 4.022627
0.11 0.0000000 1.00454 0.003004 0.012100 4.027412
0.12 0.0000000 1.00540 0.003571 0.014400 4.032665
0.13 0.0000000 1.00634 0.004185 0.016900 4.038390
0.14 0.0000000 1.00735 0.004846 0.019600 4.044592
0.15 0.0000000 1.00844 0.005554 0.022500 4.051274
0.16 0.0000000 1.00960 0.006308 0.025600 4.058442
0.17 0.0000000 1.01084 0.007108 0.028900 4.066100
0.18 0.0000002 1.01215 0.007952 0.032400 4.074253
0.19 0.0000010 1.01354 0.008842 0.036100 4.082909
0.20 0.0000037 1.01500 0.009775 0.040000 4.092072
0.21 0.0000119 1.01654 0.010752 0.044100 4.101749
0.22 0.0000326 1.01815 0.011771 0.048400 4.111948
0.23 0.0000785 1.01984 0.012831 0.052900 4.122675

43
It can be seen in Table 10 that the mean of the inverse-square-root transformed error component is approximately unity, while its variance is one-quarter the variance of the untransformed distribution; equivalently, the variance of the untransformed error component is four times the variance of the inverse-square-root transformed distribution.
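The Table 10 ratio can also be checked by direct simulation: draw the untransformed error e_t from N(1, σ²) left-truncated at zero, apply y = e_t^(−1/2), and compare variances. A small stdlib-only sketch under those assumptions (the seed and sample size are arbitrary choices of ours):

```python
import random
import statistics

random.seed(2024)
sigma = 0.10
# left-truncated N(1, sigma^2): rejection sampling (truncation is negligible here)
x = []
while len(x) < 200_000:
    draw = random.gauss(1.0, sigma)
    if draw > 0:
        x.append(draw)
y = [v ** -0.5 for v in x]                 # inverse-square-root transform

mean_y = statistics.fmean(y)
ratio = statistics.pvariance(x) / statistics.pvariance(y)
print(round(mean_y, 3), round(ratio, 1))   # mean near 1, variance ratio near 4
```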
44
3.5: NUMERICAL ILLUSTRATION USING REAL-LIFE DATA
To validate the results obtained using simulated data, real-life data were analysed: the monthly interest rates on Government Bond Yield 2-year securities, Reserve Bank of Australia, Jan 1976 – Dec 1993. The data were obtained from the Time Series Data Library exported from datamarket.com. The data are presented in Table 4.1, while the time series plot is shown in Figure 4.1. The periodic means and standard deviations, and their natural logarithms, are presented in Table 4.2.
Table 4.1: Buys-Ballot Table for the Monthly Interest Rates, Government Bond Yield 2-Year Securities, Reserve Bank of Australia, Jan 1976 – Dec 1993.
45
Here we
(i) Justified the choice of the multiplicative model for decomposing the data to obtain the residual series (error component e_t) of the original data (X_t);
(ii) Calculated the mean and variance of e_t and tested for its normality using the Anderson-Darling test of normality;
(iii) Justified the suitability of the inverse-square-root transformation of the original data X_t;
(iv) Applied the inverse-square-root transformation to X_t to obtain Y_t, and decomposed Y_t to obtain the error component e*_t;
(v) Fitted the data to an appropriate model;
(vi) Calculated the mean and standard deviation of e*_t and assessed it for normality using the Anderson-Darling test of normality;
and finally
(vii) Compared e_t and e*_t.
The details of the procedures are contained in the Thesis.
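Steps (i)–(iv) above can be sketched in plain Python. The helper below is a hypothetical illustration of the classical multiplicative decomposition (centred moving-average trend, normalised seasonal indices), not the thesis code, and the synthetic series merely stands in for the bond-yield data:

```python
import math
import random
import statistics

def decompose_multiplicative(x, period=12):
    """Classical multiplicative decomposition X_t = T_t * S_t * e_t:
    centred moving-average trend, normalised seasonal indices.
    Returns the residual series e_t wherever the trend is defined."""
    n = len(x)
    half = period // 2
    trend = [None] * n
    for t in range(half, n - half):
        w = x[t - half:t + half + 1]          # 13 points for period = 12
        trend[t] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period
    ratios = [[] for _ in range(period)]      # detrended ratios per season
    for t in range(n):
        if trend[t] is not None:
            ratios[t % period].append(x[t] / trend[t])
    idx = [statistics.mean(r) for r in ratios]
    scale = period / sum(idx)                 # force the indices to average to 1
    idx = [i * scale for i in idx]
    return [x[t] / (trend[t] * idx[t % period])
            for t in range(n) if trend[t] is not None]

# synthetic stand-in for the bond-yield series: trend * seasonal * N(1, 0.05^2)
random.seed(7)
xs = [(100.0 + 0.5 * t) * (1.0 + 0.2 * math.sin(2.0 * math.pi * t / 12.0))
      * random.gauss(1.0, 0.05) for t in range(240)]
resid = decompose_multiplicative(xs)
print(round(statistics.mean(resid), 2))       # residual mean close to 1
```

The residual series produced this way is what the Anderson-Darling normality test in steps (ii) and (vi) would be applied to.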


46
Table 4.8: Summary of the descriptive statistics of e_t and e*_t from the real-life data and from the functional expressions obtained in Chapter 3.

Error component | Mean | Median | Standard Deviation | KS test Statistic | p-value of the AD test | Decision
e_t | 0.9992 | 0.9719 | 0.1444 | 0.099 | <0.01 | Reject normality at 1% level of significance
e*_t | 1.0004 | 1.0092 | 0.0749 | 0.065 | 0.034 | Do not reject normality at 1% level of significance
Variance ratio Var(e_t)/Var(e*_t): 3.72 (data), 4.0 (theoretical)

AD = Anderson-Darling

From the analysis, we have shown that the original data, whose error component was not normally distributed, was normalised by the inverse-square-root transformation. Furthermore, we also established that the mean of the error component before and after the inverse-square-root transformation remains the same (unity).
47
It was also established that the standard deviation σ = 0.1341 of the error component after the inverse-square-root transformation is within the established interval 0 < σ < 0.15 for successful inverse-square-root transformation. Finally, the ratio of the standard deviation of the untransformed error component to that of the inverse-square-root transformed error component is approximately 2.0, which implies that the corresponding ratio of variances is approximately 4.0, as was numerically established in Chapter two.
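As a quick arithmetic check, the ratio claims follow directly from the standard deviations reported in Table 4.8 (this is a sketch of ours, not thesis code):

```python
sd_x, sd_y = 0.1444, 0.0749          # std. deviations of e_t and e*_t (Table 4.8)
ratio_sd = sd_x / sd_y
print(round(ratio_sd, 2))            # 1.93, i.e. approximately 2.0
print(round(ratio_sd ** 2, 2))       # 3.72, i.e. approximately 4.0
```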

4.1 SUMMARY AND CONCLUSION

In this study, we investigated the distribution and properties of the left-truncated N(1, σ²) error term, e_t, of the multiplicative time series model under inverse-square-root transformation, with a view to establishing the condition for the transformed error term, e*_t, to be normally distributed with mean 1. It was found that the normality of e*_t is attained for σ ≤ 0.15, and the functional forms of E(e*_t) and Var(e*_t) confirmed the mean of e*_t to be 1, with Var(e*_t) ≈ (1/4)Var(e_t), whenever σ ≤ 0.145. Hence σ ≤ 0.145 is the recommended condition for successful inverse square root transformation.

48
4.2: CONTRIBUTION TO KNOWLEDGE
The fundamental contributions of this study to knowledge in the use of data transformation are: (i) the establishment of the distribution and properties of the left-truncated N(1, σ²) error term under inverse square root transformation, and (ii) the establishment of the interval (0, 0.15) for a successful and valid inverse-square-root transformation in modeling with the multiplicative time series model.


4.3 RECOMMENDATION
We recommend that for successful and valid inverse square root
transformation in time series the standard deviation of the error
component should be in the interval (0, 0.15).
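In practice this recommendation reduces to a simple pre-check on the residual standard deviation before transforming; a hypothetical helper (the name is ours):

```python
def isqrt_transform_valid(sigma):
    """True when the error-component standard deviation lies in the
    recommended interval (0, 0.15) for a valid inverse square root
    transformation of a multiplicative time series."""
    return 0.0 < sigma < 0.15

print(isqrt_transform_valid(0.1341))   # True: within (0, 0.15)
print(isqrt_transform_valid(0.17))     # False: transformation not recommended
```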
4.4 LIMITATION
Considering that the error component of a multiplicative time series model can assume a distribution that is non-normal, the limitation of this study is that we did not investigate whether the results obtained when the error component is assumed to be normally distributed also apply when it is non-normally distributed.

49
4.5 SUGGESTIONS FOR FURTHER RESEARCH
The following areas are suggested for further research:
(i) The effect of the inverse-square-root transformation on the error component of the multiplicative error model whose distribution is assumed to be non-normal, such as the Gamma, Weibull, inverted-Gamma and mixtures of them.
(ii) The effect of the inverse-square transformation on the error component of the multiplicative time series model.
50
REFERENCES
Akpanta, A. C. and I. S. Iwueze (2009), On Applying the Bartlett Transformation
Method to Time Series Data, Journal of Mathematical Sciences, Vol. 20, No. 3, pp.
227-243.

Bartlett, M.S. (1947), The Use of Transformations, Biometrics, Vol. 3, pp. 39-52.

Box, G. E. P. and Cox, D. R. (1964), An Analysis of Transformations (with


discussion). J. Roy. Statist. Soc., B. 26, pp. 211-252.

Box, G.E.P., Jenkins, G. M., and Reinsel, G.C., (1994), Time Series Analysis,
Forecasting and Control, 3rd ed., Englewood Cliffs, NJ: Prentice Hall, New Jersey.

Buys-Ballot, C.H.D. (1847), Les Changements Périodiques de Température, Kemink et Fils, Utrecht.

Chatfield, C. (2004), The Analysis of Time Series: An Introduction, Chapman and Hall/CRC Press, Boca Raton.

Cohen, G. (1990), A Course in Modern Analysis and its Applications, Australian Mathematical Society Lecture Series, Cambridge University Press, Cambridge.
51
Crabtree, B.F., Ray, S.C., Schmidt, D.M., O'Connor, P.J., and Schmidt, D.D. (1990), The Individual Over Time: Time Series Applications in Health Care Research, Journal of Clinical Epidemiology, Vol. 43, No. 3, pp. 241-260.

De Vries, W.R. and Wu, S.M. (1978), Evaluation of Process Control Effectiveness
and Diagnosis of Variation in Paper Basis Weight via Multivariate Time Series
Analysis, IEEE Transactions on Automatic Control, Vol. 23, No. 4, pp. 702 708.

Dunham, M. H. (2002), Data Mining: Introductory and Advanced Topics, Prentice-


Hall Inc, New Jersey.

Eads, D., Glocer, K., Perkins, S., and Theiler, J. (2005), Grammar-guided Feature
Extraction for Time Series Classification, Proceedings of the 9th Annual Conference
on Neural Information Processing Systems (NIPS05).

Fink, E. L. (2009), The FAQs on Data Transformation, Communication


Monographs, Vol., 76, No. 4, pp. 379 397.

Hogg, R. V. and Craig, A. T. (1978), Introduction to Mathematical Statistics, 6th


edition, Macmillan Publishing Co. Inc, New York.

Graybill, F. A. (1976), Theory and Application of the Linear Model, Duxbury Press, London.
52
Iwu H.C., Iwueze I. S and Nwogu E. C (2009), Trend Analysis of Transformations
of the Multiplicative Time Series Model, Journal of the Nigerian Statistical
Association, Vol. 21, No. 5, pp. 40 54.

Iwueze, I. S. (2007), Some Implications of Truncating the N(1, σ²) Distribution to the Left at Zero, Journal of Applied Sciences, Vol. 7, No. 2, pp. 189-195.

Iwueze I.S, Nwogu E.C., Ohakwe J and Ajaraogu J.C, (2011), Uses of the Buys
Ballot Table in Time Series Analysis, Applied Mathematics, Vol. 2,
pp. 633 645, DOI: 10.4236/am.2011.25084.

Kendall, M.G. and Ord, J.K. (1990), Time Series, 3rd ed., Charles Griffin, London.

Kruskal, J.B. (1968), Statistical Analysis, Special Problems of Transformations of


Data, International Encyclopedia of the Social Sciences, Vol. 15, pp. 182 193,
Macmillan, New York.

Ljung, G.M. and Box, G.E.P. (1978), On a Measure of Lack of Fit in Time Series
Models, Biometrika, Vol. 65, pp. 297 303.

Montgomery D. C. (2001). Design and Analysis of Experiments, 5th ed., Wiley,


New York.
53
Nwosu C. R, Iwueze I.S. and Ohakwe J. (2013), Condition for Successful Inverse
Transformation of the Error Component of the Multiplicative Time Series Model,
Asian Journal of Applied sciences, Vol. 6, No.1, pp.1-15, DOI: 10.3923/ajaps.2013.1.15

Ohakwe, J., Iwuoha, O., and Otuonye, E.L. (2013), Condition for Successful Square Transformation in Time Series Modeling, Applied Mathematics, Vol. 4, pp. 680-687.

Otuonye, E. L. (2012), The Effect of Square Root Transformation on the Error Component of the Multiplicative Time Series Model, an unpublished Ph.D thesis submitted to Abia State University, Uturu, Abia State.

Osborne, J. W. (2002), Notes on the use of Data Transformations. Practical


Assessment, Research, and Evaluation., 8, Available online at
http://pareonline.net/getvn.asp?v=8&n=6

Percival, D.B. and Walden, A. T. (2000), Wavelet Methods for Time Series Analysis, Cambridge University Press, Cambridge.

Priestley, M.B. (1981), Spectral Analysis and Time Series. Vol. 1: Univariate
Series; Vol. 2: Multivariate Series, Prediction and Control, Academic Press, New
York.
54
Ruppert, D. (1999), Transformations of Data. Int. Encyc. Social and behavioral
Sciences, Elsevier, New York

Sakia, R.M. (1992), The Box-Cox transformation technique: A Review, The


Statistician, Vo. 41, pp. 169-178.

Tarek, A. (2013), Postgraduate Degree Report in Data Mining Programme in the University of Anglia, January 08, 2013.
https://docs.google.com/file/d/0B2bldjoHWBdZaGI3UjdrNVNrSUE/edit

Thoeni, H. (1967), Transformation of Variables used in the Analysis of Experimental and Observational Data: A Review, Technical Report No. 7, Iowa State University, Ames.

Tukey, J. W. (1957), On the Comparative Anatomy of Transformations, Annals of Mathematical Statistics, Vol. 28, No. 3, pp. 525-540.

Tukey, J. W. (1977), Exploratory Data Analysis, Addison-Wesley, California.

55
Vidakovic B. (2012), Handbook of Computational Statistics, pp. 203 242,
Springer Berlin Heidelberg.

Watthanacheewakul L (2012), Transformations with Right Skew Data.


Proceedings of the World Congress on Engineering, Vol. 1, London.

Wei, W.W.S., (1989), Time Series Analysis: Univariate and Multivariate Methods.
Addison Wesley, California.

Winer, B.J. (1968), The Error. Psychometrika, Vol. 33, pp. 391 403.

Wold, H. (1938), A Study in the Analysis of Stationary Time Series, 2nd ed., Almqvist and Wiksell, Stockholm.

Xing, Z., Pei, J., and Keogh, E. (2010), A Brief Survey on Sequence Classification, ACM SIGKDD Explorations Newsletter, Vol. 12, No. 1, pp. 40-48.

56