Henrik Hult
Spring 2003
Contents
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6 (6.5, 6.6)
Chapter 7 (7.1, 7.5)
Chapter 8
Chapter 10 (10.5)
Throughout, 1_A denotes the indicator function

1_A(h) = 1 if h \in A, and 1_A(h) = 0 if h \notin A,

and \delta denotes Dirac's delta function,

\delta(t) = +\infty if t = 0, \delta(t) = 0 if t \neq 0, with \int f(t)\delta(t) dt = f(0).
Chapter 1
Problem 1.1. a) First note that

E[(Y - c)^2] = E[Y^2 - 2Yc + c^2] = E[Y^2] - 2cE[Y] + c^2 = E[Y^2] - 2c\mu + c^2.

Find the extreme point by differentiating:

d/dc (E[Y^2] - 2c\mu + c^2) = -2\mu + 2c = 0, so c = \mu.

Since d^2/dc^2 (E[Y^2] - 2c\mu + c^2) = 2 > 0, this is a minimum.
a) The mean \mu_X(t) and the ACVF \gamma_X(t + h, t) (which equals 0 at all remaining lags) do not depend on t, so {X_t : t \in Z} is (weakly) stationary.
b) For the mean we have

\mu_X(t) = E[Z_1] \cos(ct) + E[Z_2] \sin(ct) = 0,

and for the autocovariance

\gamma_X(t + h, t) = Cov(X_{t+h}, X_t)
= Cov(Z_1 \cos(c(t+h)) + Z_2 \sin(c(t+h)), Z_1 \cos(ct) + Z_2 \sin(ct))
= \cos(c(t+h))\cos(ct) Cov(Z_1, Z_1) + \cos(c(t+h))\sin(ct) Cov(Z_1, Z_2)
  + \sin(c(t+h))\cos(ct) Cov(Z_2, Z_1) + \sin(c(t+h))\sin(ct) Cov(Z_2, Z_2)
= \sigma^2 (\cos(c(t+h))\cos(ct) + \sin(c(t+h))\sin(ct))
= \sigma^2 \cos(ch),

where the last equality follows since \cos(\alpha - \beta) = \cos\alpha \cos\beta + \sin\alpha \sin\beta. Since \mu_X(t) and \gamma_X(t + h, t) do not depend on t, {X_t : t \in Z} is (weakly) stationary.
c) For the mean we have

\mu_X(t) = E[Z_t] \cos(ct) + E[Z_{t-1}] \sin(ct) = 0.
With \theta = 0.8, the ACVF is

\gamma_X(t + h, t) = { 1 + \theta^2 if h = 0; \theta if |h| = 2; 0 otherwise } = { 1.64 if h = 0; 0.8 if |h| = 2; 0 otherwise }.
Hence the ACVF depends only on h and we write \gamma_X(h) = \gamma_X(t + h, t). The ACF is then

\rho(h) = \gamma_X(h)/\gamma_X(0) = { 1 if h = 0; 0.8/1.64 \approx 0.49 if |h| = 2; 0 otherwise }.
b) We have

Var((X_1 + X_2 + X_3 + X_4)/4) = (1/16) Var(X_1 + X_2 + X_3 + X_4)
= (1/16) (Var(X_1) + Var(X_2) + Var(X_3) + Var(X_4) + 2 Cov(X_1, X_3) + 2 Cov(X_2, X_4))
= (1/16) (4\gamma_X(0) + 4\gamma_X(2)) = (\gamma_X(0) + \gamma_X(2))/4 = (1.64 + 0.8)/4 = 0.61.

c) In this case (\theta = -0.8)

Var((X_1 + X_2 + X_3 + X_4)/4) = (1.64 - 0.8)/4 = 0.21.

Because of the negative covariance at lag 2, the variance in c) is considerably smaller.
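A quick numerical check of these two values (not part of the original solution; it assumes the underlying model is an MA process with a single lag-2 coefficient \theta = \pm 0.8, which is what produces \gamma(0) = 1.64 and \gamma(2) = \pm 0.8):

```python
# Sanity check of Var((X1+X2+X3+X4)/4) for a process with gamma(0) = 1+theta^2
# and gamma(+-2) = theta (all other lags zero), sigma^2 = 1 assumed.
import numpy as np

for theta in (0.8, -0.8):
    gamma0, gamma2 = 1 + theta**2, theta
    Sigma = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i == j:
                Sigma[i, j] = gamma0      # lag 0
            elif abs(i - j) == 2:
                Sigma[i, j] = gamma2      # lag 2
    w = np.full(4, 0.25)                  # weights of the sample mean
    print(theta, w @ Sigma @ w)           # 0.61 for theta = 0.8, 0.21 for -0.8
```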
Problem 1.8. a) First we show that {X_t : t \in Z} is WN(0, 1). For t even we have E[X_t] = E[Z_t] = 0, and for t odd

E[X_t] = E[(Z_{t-1}^2 - 1)/\sqrt{2}] = (E[Z_{t-1}^2] - 1)/\sqrt{2} = 0.

Next we compute the ACVF. If t is even we have \gamma_X(t, t) = E[Z_t^2] = 1, and if t is odd

\gamma_X(t, t) = E[((Z_{t-1}^2 - 1)/\sqrt{2})^2] = (1/2) E[Z_{t-1}^4 - 2Z_{t-1}^2 + 1] = (1/2)(3 - 2 + 1) = 1.

If t is even we have

\gamma_X(t + 1, t) = E[((Z_t^2 - 1)/\sqrt{2}) Z_t] = (E[Z_t^3] - E[Z_t])/\sqrt{2} = 0,

and if t is odd

\gamma_X(t + 1, t) = E[Z_{t+1} (Z_{t-1}^2 - 1)/\sqrt{2}] = E[Z_{t+1}] E[(Z_{t-1}^2 - 1)/\sqrt{2}] = 0.

Hence

\gamma_X(t + h, t) = { 1 if h = 0; 0 otherwise }.

Thus {X_t : t \in Z} is WN(0, 1). If t is odd, X_t and X_{t-1} are obviously dependent, so {X_t : t \in Z} is not IID(0, 1).
b) If n is odd,

E[X_{n+1} | X_1, ..., X_n] = E[Z_{n+1} | Z_0, Z_2, Z_4, ..., Z_{n-1}] = E[Z_{n+1}] = 0.

If n is even,

E[X_{n+1} | X_1, ..., X_n] = E[(Z_n^2 - 1)/\sqrt{2} | Z_0, Z_2, Z_4, ..., Z_n] = (Z_n^2 - 1)/\sqrt{2} = (X_n^2 - 1)/\sqrt{2}.
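The following short simulation illustrates the conclusion (an addition, not in the original text): the sample ACF of {X_t} is flat, like white noise, while the squares reveal the dependence.

```python
# Simulate Problem 1.8: X_t = Z_t for even t, (Z_{t-1}^2 - 1)/sqrt(2) for odd t.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.standard_normal(n)
X = np.where(np.arange(n) % 2 == 0, Z, (np.roll(Z, 1)**2 - 1) / np.sqrt(2))

def acf(x, h):
    x = x - x.mean()
    return (x[h:] * x[:-h]).mean() / x.var()

print(acf(X, 1), acf(X, 2))   # both close to 0: consistent with white noise
print(acf(X**2, 1))           # clearly nonzero: the process is not IID
```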
a) With a_j = 1/(2q + 1) for -q \le j \le q and m_t = c_0 + c_1 t,

\sum_{j=-q}^{q} a_j m_{t-j} = \frac{1}{2q+1} \sum_{j=-q}^{q} (c_0 + c_1 (t - j))
= \frac{1}{2q+1} (c_0 (2q+1) + c_1 t (2q+1) - c_1 \sum_{j=-q}^{q} j)
= c_0 + c_1 t - \frac{c_1}{2q+1} (-\sum_{j=1}^{q} j + \sum_{j=1}^{q} j)
= c_0 + c_1 t = m_t.
b) We have

E[A_t] = E[\sum_{j=-q}^{q} a_j Z_{t-j}] = \sum_{j=-q}^{q} a_j E[Z_{t-j}] = 0

and

Var(A_t) = Var(\sum_{j=-q}^{q} a_j Z_{t-j}) = \sum_{j=-q}^{q} a_j^2 \sigma^2 = \frac{(2q+1)\sigma^2}{(2q+1)^2} = \frac{\sigma^2}{2q+1}.

We see that the variance Var(A_t) is small for large q. Hence, the process A_t will be close to its mean (which is zero) for large q.
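A small numerical illustration of both parts (an addition; the values of q and \sigma are arbitrary):

```python
# The (2q+1)-term moving-average filter reproduces a linear trend exactly
# and shrinks the noise variance to sigma^2/(2q+1).
import numpy as np

q, sigma = 5, 1.0
a = np.full(2 * q + 1, 1.0 / (2 * q + 1))
t = np.arange(100)
m = 2.0 + 0.5 * t                            # m_t = c0 + c1 t
smoothed = np.convolve(m, a, mode="valid")   # filter applied to the trend
print(np.allclose(smoothed, m[q:-q]))        # True: the trend passes unchanged
print(a @ a * sigma**2, sigma**2 / (2*q+1))  # Var(A_t): both equal 1/11
```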
Problem 1.15. a) Put

Z_t = \nabla \nabla_{12} X_t = (1 - B)(1 - B^{12}) X_t = (1 - B)(X_t - X_{t-12})
= X_t - X_{t-12} - X_{t-1} + X_{t-13}
= a + bt + s_t + Y_t - (a + b(t-12) + s_{t-12} + Y_{t-12}) - (a + b(t-1) + s_{t-1} + Y_{t-1}) + (a + b(t-13) + s_{t-13} + Y_{t-13})
= Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13},

where the deterministic terms cancel because s_t has period 12. We have \mu_Z(t) = E[Z_t] = 0 and

\gamma_Z(t + h, t) = Cov(Z_{t+h}, Z_t)
= Cov(Y_{t+h} - Y_{t+h-1} - Y_{t+h-12} + Y_{t+h-13}, Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13})
= 4\gamma_Y(h) - 2\gamma_Y(h+1) - 2\gamma_Y(h-1) + \gamma_Y(h+11) + \gamma_Y(h-11)
  - 2\gamma_Y(h+12) - 2\gamma_Y(h-12) + \gamma_Y(h+13) + \gamma_Y(h-13).

Since \mu_Z(t) and \gamma_Z(t + h, t) do not depend on t, {Z_t : t \in Z} is (weakly) stationary.

b) We have X_t = (a + bt)s_t + Y_t. Hence,

Z_t = \nabla_{12}^2 X_t = (1 - B^{12})(1 - B^{12}) X_t = (1 - B^{12})(X_t - X_{t-12})
= X_t - 2X_{t-12} + X_{t-24}
= (a + bt)s_t + Y_t - 2((a + b(t-12))s_{t-12} + Y_{t-12}) + (a + b(t-24))s_{t-24} + Y_{t-24}
= a(s_t - 2s_{t-12} + s_{t-24}) + b(t s_t - 2(t-12)s_{t-12} + (t-24)s_{t-24}) + Y_t - 2Y_{t-12} + Y_{t-24}
= Y_t - 2Y_{t-12} + Y_{t-24},

since s_t = s_{t-12} = s_{t-24}.
Chapter 2
Problem 2.1. We find the best linear predictor \hat{X}_{n+h} = aX_n + b of X_{n+h} by finding a and b such that E[X_{n+h} - \hat{X}_{n+h}] = 0 and E[(X_{n+h} - \hat{X}_{n+h}) X_n] = 0. We have

E[X_{n+h} - \hat{X}_{n+h}] = E[X_{n+h} - aX_n - b] = E[X_{n+h}] - aE[X_n] - b = \mu(1 - a) - b

and

E[(X_{n+h} - \hat{X}_{n+h}) X_n] = E[(X_{n+h} - aX_n - b) X_n]
= E[X_{n+h} X_n] - aE[X_n^2] - bE[X_n]
= Cov(X_{n+h}, X_n) + \mu^2 - a(Cov(X_n, X_n) + \mu^2) - b\mu
= \gamma(h) + \mu^2 - a(\gamma(0) + \mu^2) - b\mu,

which implies that

b = \mu(1 - a),  a = \frac{\gamma(h) + \mu^2 - b\mu}{\gamma(0) + \mu^2}.

Substituting b = \mu(1 - a) into the second equation gives a = \gamma(h)/\gamma(0) = \rho(h), and hence b = \mu(1 - \rho(h)).
For an MA(1) process X_t = Z_t + \theta Z_{t-1} the ACVF vanishes beyond lag 1. If we let \sigma^2 = 1/(1 + \theta^2) and choose \theta such that \sigma^2 \theta = 0.4, then we get \gamma_X(h) = \rho(h). Hence, we choose \theta so that \theta/(1 + \theta^2) = 0.4, which implies that \theta = 1/2 or \theta = 2.
Problem 2.8. Assume that there exists a stationary solution {X_t : t \in Z} to

X_t = \phi X_{t-1} + Z_t,  {Z_t : t \in Z} ~ WN(0, \sigma^2),  t = 0, \pm 1, ...,

with |\phi| = 1. Iterating the recursion n times gives

X_t = \phi^{n+1} X_{t-n-1} + \sum_{i=0}^{n} \phi^i Z_{t-i}.

Since |\phi| = 1,

Var(\sum_{i=0}^{n} \phi^i Z_{t-i}) = \sum_{i=0}^{n} \phi^{2i} Var(Z_{t-i}) = \sum_{i=0}^{n} \sigma^2 = (n + 1)\sigma^2,

which tends to infinity as n grows. On the other hand, \sum_{i=0}^{n} \phi^i Z_{t-i} = X_t - \phi^{n+1} X_{t-n-1}, whose variance is bounded by 4\gamma_X(0) for all n if {X_t} is stationary. This is a contradiction, so no stationary solution exists.
For the AR(1) process

X_t - \mu = \phi(X_{t-1} - \mu) + Z_t,  {Z_t : t \in Z} ~ WN(0, \sigma^2),

with |\phi| < 1 we have \gamma_X(h) = \sigma^2 \phi^{|h|}/(1 - \phi^2), and hence

n Var(\bar{X}_n) = \sum_{|h| < n} (1 - |h|/n) \gamma_X(h)
= \frac{\sigma^2}{1 - \phi^2} (1 + 2 \sum_{h=1}^{n-1} (1 - h/n) \phi^h)
\approx \frac{\sigma^2}{1 - \phi^2} (1 + \frac{2\phi}{1 - \phi}) = \frac{\sigma^2}{1 - \phi^2} \cdot \frac{1 + \phi}{1 - \phi} = \frac{\sigma^2}{(1 - \phi)^2}

for large n, so that Var(\bar{X}_n) \approx \sigma^2/(n(1 - \phi)^2).
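A Monte Carlo sketch of this approximation (not in the original; the values of \phi, \sigma and n are arbitrary choices):

```python
# Check Var(Xbar_n) ~ sigma^2 / (n (1 - phi)^2) for a causal AR(1).
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, n, reps = 0.5, 1.0, 500, 2000
means = np.empty(reps)
for r in range(reps):
    Z = rng.normal(0, sigma, n + 200)
    X = np.zeros(n + 200)
    for t in range(1, n + 200):
        X[t] = phi * X[t - 1] + Z[t]   # burn-in removes the X_0 = 0 start
    means[r] = X[200:].mean()
print(means.var(), sigma**2 / (n * (1 - phi)**2))   # both about 0.008
```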
For the one-step predictor \hat{X}_{n+1} = a_0 + a_1 X_n + ... + a_n X_1 we minimize

S(a_0, a_1, ..., a_n) = E[(X_{n+1} - \hat{X}_{n+1})^2]
= E[(X_{n+1} - a_0 - a_1 X_n - ... - a_n X_1)^2]
= a_0^2 - 2a_0 E[X_{n+1} - a_1 X_n - ... - a_n X_1] + E[(X_{n+1} - a_1 X_n - ... - a_n X_1)^2]
= a_0^2 + E[(X_{n+1} - a_1 X_n - ... - a_n X_1)^2],

since the process has mean zero. Differentiation with respect to a_i gives

\partial S/\partial a_0 = 2a_0,
\partial S/\partial a_i = -2E[(X_{n+1} - a_1 X_n - ... - a_n X_1) X_{n+1-i}],  i = 1, ..., n.

Putting the partial derivatives equal to zero, we get that S(a_0, a_1, ..., a_n) is minimized if

a_0 = 0,  a_i = \phi_i if 1 \le i \le p,  a_i = 0 if i > p.

Since the best linear predictor is unique, this is the one. The mean square error is

E[(X_{n+1} - \hat{X}_{n+1})^2] = E[Z_{n+1}^2] = \sigma^2.
Chapter 3
Problem 3.1. We write the ARMA processes as \phi(B) X_t = \theta(B) Z_t. The process {X_t : t \in Z} is causal if and only if \phi(z) \neq 0 for each |z| \le 1, and invertible if and only if \theta(z) \neq 0 for each |z| \le 1.

a) \phi(z) = 1 + 0.2z - 0.48z^2 = 0 is solved by z_1 = 5/3 and z_2 = -5/4. Both roots lie outside the unit circle, hence {X_t : t \in Z} is causal. \theta(z) = 1 has no zeros, hence {X_t : t \in Z} is invertible.

b) \phi(z) = 1 + 1.9z + 0.88z^2 = 0 is solved by z_1 = -10/11 and z_2 = -5/4. Since |z_1| < 1, {X_t : t \in Z} is not causal.
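The root computation can be checked with numpy (a convenience sketch, not part of the original solution):

```python
# Causality check: all roots of phi(z) must lie outside the unit circle.
import numpy as np

# numpy.roots takes coefficients from the highest power down.
phi_a = [-0.48, 0.2, 1]        # phi(z) = 1 + 0.2 z - 0.48 z^2
phi_b = [0.88, 1.9, 1]         # phi(z) = 1 + 1.9 z + 0.88 z^2
print(np.abs(np.roots(phi_a)))   # 5/3 and 5/4: causal
print(np.abs(np.roots(phi_b)))   # 10/11 < 1 and 5/4: not causal
```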
The ACF is

\rho(h) = { 1 if h = 0; 0.8^h if h = 2k, k = 1, 2, ...; 0 otherwise }.

The PACF can be computed as \alpha(0) = 1 and \alpha(h) = \phi_{hh}, where \phi_{hh} comes from the fact that the best linear predictor of X_{h+1} has the form

\hat{X}_{h+1} = \sum_{i=1}^{h} \phi_{hi} X_{h+1-i}.
Problem 3.7. First we show that {W_t : t \in Z} is WN(0, \sigma_w^2), where

W_t = \sum_{j=0}^{\infty} (-\theta)^{-j} X_{t-j}.

For the mean,

E[W_t] = E[\sum_{j=0}^{\infty} (-\theta)^{-j} X_{t-j}] = \sum_{j=0}^{\infty} (-\theta)^{-j} E[X_{t-j}] = 0,

since E[X_{t-j}] = 0 for each j. Next we compute the ACVF of {W_t : t \in Z} for h \ge 0:

\gamma_W(t + h, t) = E[W_{t+h} W_t] = E[\sum_{j=0}^{\infty} (-\theta)^{-j} X_{t+h-j} \sum_{k=0}^{\infty} (-\theta)^{-k} X_{t-k}]
= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} (-\theta)^{-(j+k)} \gamma_X(h - j + k).

Inserting \gamma_X(m) = \sigma^2 (1 + \theta^2) 1_{\{0\}}(m) + \sigma^2 \theta 1_{\{-1,1\}}(m), only the terms with j = k + h, j = k + h + 1 and j = k + h - 1 (the last requiring j \ge 0) survive:

\gamma_W(t + h, t) = \sigma^2 (1 + \theta^2) \sum_{k=0}^{\infty} (-\theta)^{-(2k+h)} + \sigma^2 \theta \sum_{k=0}^{\infty} (-\theta)^{-(2k+h+1)} + \sigma^2 \theta \sum_{k \ge \max(0, 1-h)} (-\theta)^{-(2k+h-1)}.

For h \ge 1 all three geometric series have the same range, and

\gamma_W(t + h, t) = \frac{\sigma^2 (-\theta)^{-h}}{1 - \theta^{-2}} (1 + \theta^2 + \theta(-\theta)^{-1} + \theta(-\theta)) = \frac{\sigma^2 (-\theta)^{-h}}{1 - \theta^{-2}} (1 + \theta^2 - 1 - \theta^2) = 0.

For h = 0 the third sum starts at k = 1 and

\gamma_W(t, t) = \frac{\sigma^2}{1 - \theta^{-2}} ((1 + \theta^2) - 1 - 1) = \frac{\sigma^2 (\theta^2 - 1)}{1 - \theta^{-2}} = \sigma^2 \theta^2.

Hence {W_t : t \in Z} is WN(0, \sigma_w^2) with \sigma_w^2 = \theta^2 \sigma^2. To continue, we have that

W_t = \sum_{j=0}^{\infty} (-\theta)^{-j} X_{t-j} = \sum_{j=0}^{\infty} \pi_j X_{t-j},

with \pi_j = (-\theta)^{-j} and \sum_{j=0}^{\infty} |\pi_j| = \sum_{j=0}^{\infty} |\theta|^{-j} < \infty, so {X_t : t \in Z} is invertible with respect to {W_t} and solves \phi(B) X_t = \theta(B) W_t, where

\sum_{j=0}^{\infty} \pi_j z^j = \sum_{j=0}^{\infty} (-z/\theta)^j = \frac{1}{1 + z/\theta} = \frac{\phi(z)}{\theta(z)},

i.e. \phi(z) = 1 and \theta(z) = 1 + z/\theta, so that X_t = W_t + \theta^{-1} W_{t-1}.
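A simulation sketch of this result (an addition; the values \theta = 2.5, \sigma = 1 and the truncation point J are arbitrary choices):

```python
# For X_t = Z_t + theta Z_{t-1} with |theta| > 1, the filtered series
# W_t = sum_j (-theta)^{-j} X_{t-j} should be white noise with variance
# theta^2 sigma^2.
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, n = 2.5, 1.0, 200_000
Z = rng.normal(0, sigma, n + 1)
X = Z[1:] + theta * Z[:-1]

J = 60                                   # truncation of the infinite sum
pi = (-1 / theta) ** np.arange(J)        # pi_j = (-theta)^{-j}
W = np.convolve(X, pi, mode="valid")     # W_t = sum_j pi_j X_{t-j}

print(np.corrcoef(W[1:], W[:-1])[0, 1])  # ~ 0: no lag-1 correlation
print(W.var(), theta**2 * sigma**2)      # both ~ 6.25
```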
The best linear predictor of X_3 based on X_1, X_2 has the form

\hat{X}_3 = \phi_{21} X_2 + \phi_{22} X_1,

and the coefficients are determined by the orthogonality conditions for i = 1, 2. This gives us

Cov(X_3 - \hat{X}_3, X_1) = Cov(X_3 - \phi_{21} X_2 - \phi_{22} X_1, X_1)
= \gamma(2) - \phi_{21} \gamma(1) - \phi_{22} \gamma(0) = 0

and

Cov(X_3 - \hat{X}_3, X_2) = Cov(X_3 - \phi_{21} X_2 - \phi_{22} X_1, X_2)
= \gamma(1) - \phi_{21} \gamma(0) - \phi_{22} \gamma(1) = 0.

Since we have an MA(1) process it has ACVF

\gamma(h) = { \sigma^2 (1 + \theta^2), h = 0; \sigma^2 \theta, |h| = 1; 0, otherwise }.

Thus, using \gamma(2) = 0, we have to solve the equations

\phi_{21} \gamma(1) + \phi_{22} \gamma(0) = 0,
(1 - \phi_{22}) \gamma(1) - \phi_{21} \gamma(0) = 0.

Solving this system of equations we find

\phi_{22} = -\frac{\theta^2}{\theta^4 + \theta^2 + 1}.
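A quick numerical cross-check of this formula (not in the original; \theta = 0.6 is an arbitrary choice):

```python
# Solve the 2x2 prediction equations for an MA(1) and compare phi_22 with
# -theta^2 / (1 + theta^2 + theta^4).
import numpy as np

theta, sigma2 = 0.6, 1.0
def gamma(h):
    if h == 0:
        return sigma2 * (1 + theta**2)
    return sigma2 * theta if abs(h) == 1 else 0.0

Gamma = np.array([[gamma(0), gamma(1)],
                  [gamma(1), gamma(0)]])
rhs = np.array([gamma(1), gamma(2)])
phi21, phi22 = np.linalg.solve(Gamma, rhs)
print(phi22, -theta**2 / (1 + theta**2 + theta**4))   # equal
```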
Chapter 4
Problem 4.4. By Corollary 4.1.1 we know that a function \gamma(h) with \sum_{|h| < \infty} |\gamma(h)| < \infty is the ACVF of some stationary process if and only if it is an even function and

f(\lambda) = \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \gamma(h) \ge 0  for \lambda \in (-\pi, \pi].
Here \gamma(h) = 0 for |h| > 3, so

f(\lambda) = \frac{1}{2\pi} \sum_{h=-3}^{3} e^{-ih\lambda} \gamma(h),

and evaluating this finite sum for the given \gamma shows whether f(\lambda) \ge 0 for all \lambda.
Problem 4.5. Let Z_t = X_t + Y_t, where {X_t : t \in Z} and {Y_t : t \in Z} are uncorrelated stationary processes. By the spectral representation of the ACVF,

\gamma_Z(h) = \int_{(-\pi,\pi]} e^{ih\lambda} dF_Z(\lambda),

and since the processes are uncorrelated,

\gamma_Z(h) = \gamma_X(h) + \gamma_Y(h) = \int_{(-\pi,\pi]} e^{ih\lambda} dF_X(\lambda) + \int_{(-\pi,\pi]} e^{ih\lambda} dF_Y(\lambda) = \int_{(-\pi,\pi]} e^{ih\lambda} (dF_X(\lambda) + dF_Y(\lambda)).

Hence we have that dF_Z(\lambda) = dF_X(\lambda) + dF_Y(\lambda), which implies that

F_Z(\lambda) = \int_{(-\pi,\lambda]} dF_Z(\nu) = \int_{(-\pi,\lambda]} (dF_X(\nu) + dF_Y(\nu)) = F_X(\lambda) + F_Y(\lambda).
By Problem 2.2 the process S_t = A \cos(\pi t/3) + B \sin(\pi t/3) has ACVF \gamma_S(h) = \nu^2 \cos(\pi h/3). Since the processes are uncorrelated, Problem 4.5 gives that \gamma_X(h) = \gamma_S(h) + \gamma_Y(h). Moreover,

\nu^2 \cos(\pi h/3) = \frac{\nu^2}{2} (e^{i\pi h/3} + e^{-i\pi h/3}) = \int_{(-\pi,\pi]} e^{ih\lambda} dF_S(\lambda),

where

dF_S(\lambda) = \frac{\nu^2}{2} (\delta(\lambda - \pi/3) + \delta(\lambda + \pi/3)) d\lambda.

This implies

F_S(\lambda) = { 0, \lambda < -\pi/3; \nu^2/2, -\pi/3 \le \lambda < \pi/3; \nu^2, \lambda \ge \pi/3 }.
Furthermore we have that

f_Y(\lambda) = \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \gamma_Y(h) = \frac{1}{2\pi} (e^{i\lambda} \gamma_Y(-1) + \gamma_Y(0) + e^{-i\lambda} \gamma_Y(1))
= \frac{\sigma^2}{2\pi} (1 + 2.5^2 + 2.5 (e^{i\lambda} + e^{-i\lambda})) = \frac{\sigma^2}{2\pi} (7.25 + 5 \cos\lambda).

This implies that

F_Y(\lambda) = \int_{-\pi}^{\lambda} f_Y(\nu) d\nu = \frac{\sigma^2}{2\pi} \int_{-\pi}^{\lambda} (7.25 + 5 \cos\nu) d\nu = \frac{\sigma^2}{2\pi} (7.25 (\lambda + \pi) + 5 \sin\lambda).
With f_X(\lambda) = 100 on (\pi/6 - 0.01, \pi/6 + 0.01) and on (-\pi/6 - 0.01, -\pi/6 + 0.01), and 0 elsewhere,

\gamma_X(0) = \int_{-\pi}^{\pi} f_X(\lambda) d\lambda = 100 \int_{-\pi/6-0.01}^{-\pi/6+0.01} d\lambda + 100 \int_{\pi/6-0.01}^{\pi/6+0.01} d\lambda = 100 \cdot 0.04 = 4.

Similarly,

\gamma_X(1) = \int_{-\pi}^{\pi} e^{i\lambda} f_X(\lambda) d\lambda
= 100 \int_{-\pi/6-0.01}^{-\pi/6+0.01} e^{i\lambda} d\lambda + 100 \int_{\pi/6-0.01}^{\pi/6+0.01} e^{i\lambda} d\lambda
= 100 [\frac{e^{i\lambda}}{i}]_{-\pi/6-0.01}^{-\pi/6+0.01} + 100 [\frac{e^{i\lambda}}{i}]_{\pi/6-0.01}^{\pi/6+0.01}
= \frac{100}{i} (e^{i(-\pi/6+0.01)} - e^{i(-\pi/6-0.01)} + e^{i(\pi/6+0.01)} - e^{i(\pi/6-0.01)})
= 200 (\sin(\pi/6 + 0.01) - \sin(\pi/6 - 0.01)).
b) The filter Y_t = X_t - X_{t-12} = \sum_{k=-\infty}^{\infty} \psi_k X_{t-k} has transfer function

\Psi(e^{-i\lambda}) = \sum_{k=-\infty}^{\infty} \psi_k e^{-ik\lambda} = 1 - e^{-i12\lambda}.

Hence,

f_Y(\lambda) = |1 - e^{-12i\lambda}|^2 f_X(\lambda) = (1 - e^{-12i\lambda})(1 - e^{12i\lambda}) f_X(\lambda) = 2(1 - \cos(12\lambda)) f_X(\lambda).

The power transfer function |\Psi(e^{-i\lambda})|^2 is plotted in Figure 4.9(b) and the resulting spectral density f_Y(\lambda) is plotted in Figure 4.9(c).
c) The variance of Y_t is \gamma_Y(0), which is computed by

\gamma_Y(0) = \int_{-\pi}^{\pi} f_Y(\lambda) d\lambda
= 200 \int_{-\pi/6-0.01}^{-\pi/6+0.01} (1 - \cos(12\lambda)) d\lambda + 200 \int_{\pi/6-0.01}^{\pi/6+0.01} (1 - \cos(12\lambda)) d\lambda
= 200 [\lambda - \frac{\sin(12\lambda)}{12}]_{-\pi/6-0.01}^{-\pi/6+0.01} + 200 [\lambda - \frac{\sin(12\lambda)}{12}]_{\pi/6-0.01}^{\pi/6+0.01}
= 200 (0.04 - \frac{1}{3} \sin(0.12)) \approx 0.0192.
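A numerical integration sketch confirming this value (an addition to the original computation):

```python
# gamma_Y(0) = integral of f_Y = 2 (1 - cos(12 lam)) f_X over the two bands.
import numpy as np

band = np.linspace(np.pi/6 - 0.01, np.pi/6 + 0.01, 200_001)
fY = 2 * (1 - np.cos(12 * band)) * 100
integral = 2 * fY.mean() * (band[-1] - band[0])   # factor 2: band at -pi/6
print(integral, 200 * (0.04 - np.sin(0.12) / 3))  # both ~ 0.0192
```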
Problem 4.10. a) Let \phi(z) = 1 - \phi z and \theta(z) = 1 - \theta z. Then X_t = \frac{\theta(B)}{\phi(B)} Z_t and

f_X(\lambda) = \frac{|\theta(e^{-i\lambda})|^2}{|\phi(e^{-i\lambda})|^2} f_Z(\lambda) = \frac{|\theta(e^{-i\lambda})|^2}{|\phi(e^{-i\lambda})|^2} \frac{\sigma^2}{2\pi}.

For {W_t : t \in Z}, where W_t = \tilde{\theta}(B)^{-1} \phi(B) X_t with \tilde{\theta}(z) = 1 - \theta^{-1} z, we get

f_W(\lambda) = \frac{|\phi(e^{-i\lambda})|^2}{|1 - \theta^{-1} e^{-i\lambda}|^2} f_X(\lambda) = \frac{|1 - \theta e^{-i\lambda}|^2}{|1 - \theta^{-1} e^{-i\lambda}|^2} \frac{\sigma^2}{2\pi}.

Since

|1 - \theta^{-1} e^{-i\lambda}|^2 = \theta^{-2} |\theta - e^{-i\lambda}|^2 = \theta^{-2} |e^{-i\lambda}|^2 |\theta e^{i\lambda} - 1|^2 = \theta^{-2} |1 - \theta e^{i\lambda}|^2 = \theta^{-2} |1 - \theta e^{-i\lambda}|^2,

we obtain

f_W(\lambda) = \frac{|1 - \theta e^{-i\lambda}|^2}{\theta^{-2} |1 - \theta e^{-i\lambda}|^2} \frac{\sigma^2}{2\pi} = \frac{\theta^2 \sigma^2}{2\pi},

which is constant.
[Figure 4.9: (a) f_X(\lambda); (b) the power transfer function |\Psi(e^{-i\lambda})|^2; (c) f_Y(\lambda).]
b) Since {W_t : t \in Z} has constant spectral density it is white noise, and

\sigma_w^2 = \gamma_W(0) = \int_{-\pi}^{\pi} f_W(\lambda) d\lambda = 2\pi \cdot \frac{\theta^2 \sigma^2}{2\pi} = \theta^2 \sigma^2.
Chapter 5
Problem 5.1. We begin by writing the Yule-Walker equations. {Y_t : t \in Z} satisfies

Y_t - \phi_1 Y_{t-1} - \phi_2 Y_{t-2} = Z_t,  {Z_t : t \in Z} ~ WN(0, \sigma^2),

so that

\gamma(k) - \phi_1 \gamma(k-1) - \phi_2 \gamma(k-2) = { 0, k = 1, 2; \sigma^2, k = 0 }.

In matrix form, with \Gamma_2 = [\gamma(0), \gamma(1); \gamma(1), \gamma(0)] and \gamma_2 = (\gamma(1), \gamma(2))^T,

\Gamma_2 \phi = \gamma_2,  \sigma^2 = \gamma(0) - \phi^T \gamma_2.

We replace \gamma by \hat{\gamma}, solve for \hat{\phi}, and then set

\hat{\phi} = \hat{\Gamma}_2^{-1} \hat{\gamma}_2,  \hat{\sigma}^2 = \hat{\gamma}(0) - \hat{\phi}^T \hat{\gamma}_2.

Hence

\hat{\phi} = \frac{1}{\hat{\gamma}(0)^2 - \hat{\gamma}(1)^2} [\hat{\gamma}(0), -\hat{\gamma}(1); -\hat{\gamma}(1), \hat{\gamma}(0)] [\hat{\gamma}(1); \hat{\gamma}(2)].

We get that

\hat{\phi}_1 = \frac{\hat{\gamma}(1)(\hat{\gamma}(0) - \hat{\gamma}(2))}{\hat{\gamma}(0)^2 - \hat{\gamma}(1)^2} = 1.32,
\hat{\phi}_2 = \frac{\hat{\gamma}(0)\hat{\gamma}(2) - \hat{\gamma}(1)^2}{\hat{\gamma}(0)^2 - \hat{\gamma}(1)^2} = -0.634,

and

\hat{\sigma}^2 \hat{\Gamma}_2^{-1}/n = \frac{289.18}{100} [0.0021, -0.0017; -0.0017, 0.0021] = [0.0060, -0.0048; -0.0048, 0.0060].

So we have approximately \hat{\phi}_1 ~ N(\phi_1, 0.0060) and \hat{\phi}_2 ~ N(\phi_2, 0.0060), and the 95% confidence intervals are \hat{\phi}_i \pm 1.96 \sqrt{0.0060}, i.e. 1.32 \pm 0.152 and -0.634 \pm 0.152.
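A sketch of the same Yule-Walker computation in code (the ACVF estimates used below are hypothetical placeholders, not the data of the original problem):

```python
# Yule-Walker estimation for an AR(2) from sample ACVF values.
import numpy as np

g = np.array([1.0, 0.9, 0.8])             # hypothetical gamma(0), gamma(1), gamma(2)
Gamma = np.array([[g[0], g[1]],
                  [g[1], g[0]]])
phi = np.linalg.solve(Gamma, g[1:])       # phi-hat = Gamma2^{-1} gamma2
sigma2 = g[0] - phi @ g[1:]               # sigma2-hat = gamma(0) - phi' gamma2
n = 100
cov = sigma2 * np.linalg.inv(Gamma) / n   # asymptotic covariance of phi-hat
half = 1.96 * np.sqrt(np.diag(cov))       # 95% confidence half-widths
print(phi, sigma2, half)
```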
Problem 5.3. a) {X_t : t \in Z} is causal if \phi(z) \neq 0 for |z| \le 1, so let us check for which values of \phi this can happen. Putting \phi(z) = 1 - \phi z - \phi^2 z^2 equal to zero implies

z^2 + \frac{z}{\phi} - \frac{1}{\phi^2} = 0,  so  z_1 = \frac{-1 + \sqrt{5}}{2\phi},  z_2 = \frac{-1 - \sqrt{5}}{2\phi}.

Furthermore |z_1| > 1 if |\phi| < (\sqrt{5} - 1)/2 \approx 0.61 and |z_2| > 1 if |\phi| < (1 + \sqrt{5})/2 \approx 1.61. Hence, the process is causal if |\phi| < (\sqrt{5} - 1)/2 \approx 0.61.

b) The Yule-Walker equations are

\gamma(k) - \phi \gamma(k-1) - \phi^2 \gamma(k-2) = { \sigma^2, k = 0; 0, k \ge 1 }.

Rewriting the first three equations and using \gamma(-k) = \gamma(k) gives

\gamma(0) - \phi \gamma(1) - \phi^2 \gamma(2) = \sigma^2,
\gamma(1) - \phi \gamma(0) - \phi^2 \gamma(1) = 0,
\gamma(2) - \phi \gamma(1) - \phi^2 \gamma(0) = 0.

Multiplying the third equation by \phi^2 and adding the first gives

(1 - \phi^4) \gamma(0) - (\phi + \phi^3) \gamma(1) = \sigma^2.

Dividing the second equation by \gamma(0) gives \rho(1) \phi^2 + \phi - \rho(1) = 0, which we solve to obtain

\phi = -\frac{1}{2\rho(1)} \pm \sqrt{\frac{1}{4\rho(1)^2} + 1},

and \sigma^2 then follows from \sigma^2 = (1 - \phi^4) \gamma(0) - (\phi + \phi^3) \gamma(1).
Problem 5.4. a) Let us construct a test of the assumption that {X_t : t \in Z} is WN(0, \sigma^2). Under this assumption \hat{\rho}(k) is approximately N(0, 1/n), so an approximate 95% confidence interval for \rho(k) is I(k) = \hat{\rho}(k) \pm 1.96/\sqrt{200} = \hat{\rho}(k) \pm 0.139. This gives us

I(1) = 0.427 \pm 0.139,
I(2) = 0.475 \pm 0.139,
I(3) = 0.169 \pm 0.139.

Clearly 0 \notin I(k) for any of the observed k = 1, 2, 3, and we conclude that it is not reasonable to assume that {X_t : t \in Z} is white noise.
b) We estimate the mean by \hat{\mu} = \bar{x}_{200} = 3.82. The Yule-Walker estimates are given by

\hat{\phi} = \hat{R}_2^{-1} \hat{\rho}_2,  \hat{\sigma}^2 = \hat{\gamma}(0)(1 - \hat{\rho}_2^T \hat{R}_2^{-1} \hat{\rho}_2),

where

\hat{R}_2 = [1, \hat{\rho}(1); \hat{\rho}(1), 1],  \hat{\rho}_2 = (\hat{\rho}(1), \hat{\rho}(2))^T.

The asymptotic covariance matrix is

\hat{\sigma}^2 \hat{\Gamma}_2^{-1}/n = [0.0050, -0.0021; -0.0021, 0.0050],

and hence \hat{\phi}_1 ~ AN(\phi_1, 0.0050) and \hat{\phi}_2 ~ AN(\phi_2, 0.0050). We get the 95% confidence intervals \hat{\phi}_i \pm 1.96 \sqrt{0.0050} = \hat{\phi}_i \pm 0.139.
e) If the data were generated from an AR(2) process, then the PACF would be \alpha(0) = 1, \alpha(1) = \rho(1) = 0.427, \alpha(2) = \hat{\phi}_2 = 0.358, and \alpha(h) = 0 for h \ge 3.
Problem 5.11. To obtain the maximum likelihood estimator we compute as if the process were Gaussian. Then the innovations are

X_1 - \hat{X}_1 = X_1 ~ N(0, v_0),  X_2 - \hat{X}_2 = X_2 - \phi X_1 ~ N(0, v_1),

where v_0 = \sigma^2 r_0 = E[(X_1 - \hat{X}_1)^2] and v_1 = \sigma^2 r_1 = E[(X_2 - \hat{X}_2)^2]. This implies

v_0 = E[X_1^2] = \gamma(0),  r_0 = \frac{1}{1 - \phi^2},

and v_1 = E[(X_2 - \phi X_1)^2] = \gamma(0) - 2\phi \gamma(1) + \phi^2 \gamma(0), and hence

r_1 = \frac{\gamma(0)(1 + \phi^2) - 2\phi \gamma(1)}{\sigma^2} = \frac{1 + \phi^2 - 2\phi^2}{1 - \phi^2} = 1.

Here we have used that \gamma(0) = \sigma^2/(1 - \phi^2) and \gamma(1) = \phi \sigma^2/(1 - \phi^2). Since the distribution of the innovations is normal, the density of X_j - \hat{X}_j is

f_{X_j - \hat{X}_j}(x) = \frac{1}{\sqrt{2\pi \sigma^2 r_{j-1}}} \exp(-\frac{x^2}{2\sigma^2 r_{j-1}}).

The likelihood is therefore

L(\phi, \sigma^2) = \prod_{j=1}^{2} f_{X_j - \hat{X}_j}(x_j - \hat{x}_j)
= \frac{1}{\sqrt{(2\pi \sigma^2)^2 r_0 r_1}} \exp(-\frac{1}{2\sigma^2} (\frac{x_1^2}{r_0} + \frac{(x_2 - \phi x_1)^2}{r_1}))
= \frac{1}{\sqrt{(2\pi \sigma^2)^2 r_0 r_1}} \exp(-\frac{1}{2\sigma^2} (x_1^2 (1 - \phi^2) + (x_2 - \phi x_1)^2)),

and

l(\phi, \sigma^2) = -\log L(\phi, \sigma^2) = \log(2\pi) + \log(\sigma^2) - \frac{1}{2} \log(1 - \phi^2) + \frac{1}{2\sigma^2} (x_1^2 (1 - \phi^2) + (x_2 - \phi x_1)^2).

Differentiating yields

\partial l/\partial \sigma^2 = \frac{1}{\sigma^2} - \frac{1}{2\sigma^4} (x_1^2 (1 - \phi^2) + (x_2 - \phi x_1)^2),
\partial l/\partial \phi = \frac{\phi}{1 - \phi^2} - \frac{x_1 x_2}{\sigma^2}.

Setting the partial derivatives equal to zero and solving gives

\hat{\sigma}^2 = \frac{(x_1^2 - x_2^2)^2}{2(x_1^2 + x_2^2)}  and  \hat{\phi} = \frac{2 x_1 x_2}{x_1^2 + x_2^2}.
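A numerical sketch corroborating these formulas (not in the original; the observations x_1, x_2 below are hypothetical):

```python
# Grid-search the n = 2 Gaussian AR(1) likelihood and compare with the
# closed-form MLE derived above.
import numpy as np

x1, x2 = 1.3, -0.7

def negloglik(phi, s2):
    # -log L with r0 = 1/(1 - phi^2) and r1 = 1
    return (np.log(2 * np.pi) + np.log(s2) - 0.5 * np.log(1 - phi**2)
            + (x1**2 * (1 - phi**2) + (x2 - phi * x1)**2) / (2 * s2))

phis = np.linspace(-0.99, 0.99, 397)
s2s = np.linspace(0.05, 1.0, 381)
P, S = np.meshgrid(phis, s2s)
i = np.unravel_index(np.argmin(negloglik(P, S)), P.shape)
print(P[i], 2 * x1 * x2 / (x1**2 + x2**2))              # ~ -0.835 both
print(S[i], (x1**2 - x2**2)**2 / (2 * (x1**2 + x2**2))) # ~ 0.330 both
```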
Chapter 6
Problem 6.5. The best linear predictor of Y_{n+1} in terms of 1, X_0, Y_1, ..., Y_n, i.e.

\hat{Y}_{n+1} = a_0 + c X_0 + a_1 Y_1 + ... + a_n Y_n,

must satisfy the orthogonality relations

E[Y_{n+1} - \hat{Y}_{n+1}] = 0,
Cov(Y_{n+1} - \hat{Y}_{n+1}, X_0) = 0,
Cov(Y_{n+1} - \hat{Y}_{n+1}, Y_j) = 0,  j = 1, ..., n.
The predicted values g(h) = P_n X_{n+h} satisfy the recursion

g(h) = \sum_{j=1}^{3} \phi_j g(h - j).

We may suggest a solution of the form g(h) = a + b \xi_1^{-h} + c \xi_2^{-h}, h > -3, where \xi_1 and \xi_2 are the solutions to \phi(z) = 0, and g(-2) = X_{n-2}, g(-1) = X_{n-1}, g(0) = X_n. Let us first find the roots \xi_1 and \xi_2:

\phi(z) = 1 - 0.8z + 0.25z^2 = 0  is equivalent to  z^2 - \frac{16}{5} z + 4 = 0.

We get that z = 8/5 \pm \sqrt{(8/5)^2 - 4} = (8 \pm 6i)/5. Then \xi_1^{-1} = 5/(8 + 6i) = 0.4 - 0.3i and \xi_2^{-1} = 0.4 + 0.3i. Next we find the constants a, b and c by solving

X_{n-2} = g(-2) = a + b \xi_1^{-2} + c \xi_2^{-2},
X_{n-1} = g(-1) = a + b \xi_1^{-1} + c \xi_2^{-1},
X_n = g(0) = a + b + c.

Note that (0.4 - 0.3i)^2 = 0.07 - 0.24i and (0.4 + 0.3i)^2 = 0.07 + 0.24i, so we get the equations

X_{n-2} = a + b(0.07 - 0.24i) + c(0.07 + 0.24i),
X_{n-1} = a + b(0.4 - 0.3i) + c(0.4 + 0.3i),
X_n = a + b + c.

Let a = a_1 + a_2 i, b = b_1 + b_2 i and c = c_1 + c_2 i. Then we split the equations into a real part and an imaginary part and get

X_{n-2} = a_1 + 0.07 b_1 + 0.24 b_2 + 0.07 c_1 - 0.24 c_2,
X_{n-1} = a_1 + 0.4 b_1 + 0.3 b_2 + 0.4 c_1 - 0.3 c_2,
X_n = a_1 + b_1 + c_1,
0 = a_2 - 0.24 b_1 + 0.07 b_2 + 0.24 c_1 + 0.07 c_2,
0 = a_2 - 0.3 b_1 + 0.4 b_2 + 0.3 c_1 + 0.4 c_2,
0 = a_2 + b_2 + c_2.
We can write this as a matrix equation:

[1  0   0.07  0.24  0.07 -0.24] [a_1]   [X_{n-2}]
[1  0   0.40  0.30  0.40 -0.30] [a_2]   [X_{n-1}]
[1  0   1     0     1     0   ] [b_1] = [X_n]
[0  1  -0.24  0.07  0.24  0.07] [b_2]   [0]
[0  1  -0.30  0.40  0.30  0.40] [c_1]   [0]
[0  1   0     1     0     1   ] [c_2]   [0]
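The 6x6 system can be solved numerically as below (a convenience sketch, not in the original; the anchor values X_{n-2}, X_{n-1}, X_n are hypothetical):

```python
# Solve for a, b, c and verify that g reproduces the three anchor values.
import numpy as np

M = np.array([
    [1, 0,  0.07, 0.24, 0.07, -0.24],
    [1, 0,  0.40, 0.30, 0.40, -0.30],
    [1, 0,  1.00, 0.00, 1.00,  0.00],
    [0, 1, -0.24, 0.07, 0.24,  0.07],
    [0, 1, -0.30, 0.40, 0.30,  0.40],
    [0, 1,  0.00, 1.00, 0.00,  1.00],
])
rhs = np.array([0.5, 1.0, 2.0, 0.0, 0.0, 0.0])  # hypothetical X_{n-2}, X_{n-1}, X_n
a1, a2, b1, b2, c1, c2 = np.linalg.solve(M, rhs)
a, b, c = complex(a1, a2), complex(b1, b2), complex(c1, c2)

lam = 0.4 - 0.3j                                # = xi_1^{-1}
for k, target in zip((2, 1, 0), rhs[:3]):
    g = a + b * lam**k + c * lam.conjugate()**k
    print(round(g.real, 10), target)            # anchors reproduced, g.imag ~ 0
```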
Chapter 7
Problem 7.1. The problem is not very well formulated; we replace the condition \rho_Y(h) \to 0 as h \to \infty by the condition that \rho_Y(h) is strictly decreasing.

The process is stationary if \mu_t = E[(X_{1,t}, X_{2,t})^T] = (\mu_1, \mu_2)^T and \Gamma(t + h, t) do not depend on t. We may assume that {Y_t} has mean zero, so that

E[X_{1,t}] = E[Y_t] = 0,  E[X_{2,t}] = E[Y_{t-d}] = 0,

and the covariance function is

\Gamma(t + h, t) = [E[Y_{t+h} Y_t], E[Y_{t+h} Y_{t-d}]; E[Y_{t+h-d} Y_t], E[Y_{t+h-d} Y_{t-d}]] = [\gamma_Y(h), \gamma_Y(h + d); \gamma_Y(h - d), \gamma_Y(h)].

Since neither \mu_t nor \Gamma(t + h, t) depends on t, the process is stationary. We assume that \gamma_Y(h) \to 0 as h \to \infty. Then the cross-correlation is

\rho_{12}(h) = \frac{\gamma_{12}(h)}{\sqrt{\gamma_{11}(0) \gamma_{22}(0)}} = \frac{\gamma_Y(h + d)}{\gamma_Y(0)} = \rho_Y(h + d),

which is largest at h = -d, so the lag d can be read off from the cross-correlation. We estimate

\Gamma(h) = [\gamma_{11}(h), \gamma_{12}(h); \gamma_{21}(h), \gamma_{22}(h)]

by

\hat{\Gamma}(h) = \frac{1}{n} \sum_{t=1}^{n-h} (X_{t+h} - \bar{X}_n)(X_t - \bar{X}_n)^T for 0 \le h \le n - 1, and \hat{\Gamma}(h) = \hat{\Gamma}(-h)^T for -n + 1 \le h < 0.

Then we get \hat{\rho}_{12}(h) = \hat{\gamma}_{12}(h)/\sqrt{\hat{\gamma}_{11}(0) \hat{\gamma}_{22}(0)}. According to Theorem 7.3.1 in Brockwell and Davis we have, for h \neq k, that

\sqrt{n} (\hat{\rho}_{12}(h), \hat{\rho}_{12}(k))^T is approximately N(0, \Sigma),

where

\sigma_{11} = \sigma_{22} = \sum_{j=-\infty}^{\infty} \rho_{11}(j) \rho_{22}(j),
\sigma_{12} = \sigma_{21} = \sum_{j=-\infty}^{\infty} \rho_{11}(j) \rho_{22}(j + k - h).
Since {X_{1,t}} and {X_{2,t}} are MA(1) processes we know that their ACFs are

\rho_{X_1}(h) = { 1, h = 0; 0.8/(1 + 0.8^2), |h| = 1; 0, otherwise },
\rho_{X_2}(h) = { 1, h = 0; -0.6/(1 + 0.6^2), |h| = 1; 0, otherwise }.

Hence

\sigma_{11} = \sigma_{22} = 1 + 2 \cdot \frac{0.8}{1 + 0.8^2} \cdot \frac{-0.6}{1 + 0.6^2} \approx 0.57.

For the covariance we see that \rho_{11}(j) \neq 0 only if j = -1, 0, 1 and \rho_{22}(j + k - h) \neq 0 only if j + k - h = -1, 0, 1. Hence,

\sigma_{12} = { \frac{0.8}{1 + 0.8^2} - \frac{0.6}{1 + 0.6^2} \approx 0.05, |k - h| = 1; -\frac{0.8}{1 + 0.8^2} \cdot \frac{0.6}{1 + 0.6^2} \approx -0.22, |k - h| = 2; 0, |k - h| \ge 3 }.
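A simulation sketch of the asymptotic variance (an addition to the original solution):

```python
# Var(sqrt(n) * rho12-hat(h)) should be close to sigma11 ~ 0.57 for two
# independent MA(1) series with theta = 0.8 and theta = -0.6.
import numpy as np

rng = np.random.default_rng(3)
n, reps, h = 1000, 2000, 5
stats = np.empty(reps)
for r in range(reps):
    Z1, Z2 = rng.standard_normal((2, n + 1))
    X1 = Z1[1:] + 0.8 * Z1[:-1]
    X2 = Z2[1:] - 0.6 * Z2[:-1]
    c12 = np.mean((X1[h:] - X1.mean()) * (X2[:-h] - X2.mean()))
    stats[r] = np.sqrt(n) * c12 / np.sqrt(X1.var() * X2.var())
print(stats.var(), 1 - 2 * (0.8 / 1.64) * (0.6 / 1.36))   # both ~ 0.57
```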
Problem 7.5. {X_t : t \in Z} is a causal process if det(\Phi(z)) \neq 0 for all |z| \le 1 (Brockwell and Davis, p. 242). Furthermore, if {X_t : t \in Z} is causal, then

X_t = \sum_{j=0}^{\infty} \Psi_j Z_{t-j},

where

\Psi_j = \Theta_j + \sum_{k=1}^{\infty} \Phi_k \Psi_{j-k},

with \Theta_0 = I, \Theta_j = 0 for j > q, \Phi_j = 0 for j > p, \Psi_j = 0 for j < 0, and, with {Z_t} ~ WN(0, I),

\Gamma(h) = \sum_{j=0}^{\infty} \Psi_{h+j} \Psi_j^T,  h = 0, 1, 2, ...

Here, with \Phi_1 = \frac{1}{2} [1, 1; 0, 1],

det(\Phi(z)) = det(I - z\Phi_1) = det([1 - z/2, -z/2; 0, 1 - z/2]) = (1 - z/2)^2 = \frac{(2 - z)^2}{4},

which implies that |z_1| = |z_2| = 2 > 1, and hence {X_t : t \in Z} is a causal process. We have that \Psi_j = \Theta_j + \Phi_1 \Psi_{j-1} and

\Psi_0 = \Theta_0 = I,
\Psi_1 = \Theta_1 + \Phi_1 \Psi_0 = \Phi_1^T + \Phi_1,
\Psi_{n+1} = \Phi_1 \Psi_n  for n \ge 1.

From the last equation we get that \Psi_{n+1} = \Phi_1^n \Psi_1 = \Phi_1^n (\Phi_1^T + \Phi_1), and from the definition of \Phi_1,

\Phi_1^n = \frac{1}{2^n} [1, n; 0, 1],  (\Phi_1^T + \Phi_1)(\Phi_1^T + \Phi_1)^T = \frac{1}{4} [5, 4; 4, 5].
Assume that h \ge 0. Then

\Gamma(h) = \sum_{j=0}^{\infty} \Psi_{h+j} \Psi_j^T = \Psi_h + \sum_{j=1}^{\infty} \Psi_{h+j} \Psi_j^T
= \Psi_h + \sum_{j=1}^{\infty} \Phi_1^{h+j-1} (\Phi_1^T + \Phi_1)(\Phi_1^T + \Phi_1)^T (\Phi_1^{j-1})^T
= \Psi_h + \Phi_1^h \sum_{j=0}^{\infty} \Phi_1^j \frac{1}{4} [5, 4; 4, 5] (\Phi_1^T)^j.

Using \Phi_1^j = (1/2^j) [1, j; 0, 1],

\Phi_1^j \frac{1}{4} [5, 4; 4, 5] (\Phi_1^T)^j = \frac{1}{4 \cdot 2^{2j}} [5 + 8j + 5j^2, 4 + 5j; 4 + 5j, 5],

and summing the series \sum_j 2^{-2j} = 4/3, \sum_j j 2^{-2j} = 4/9, \sum_j j^2 2^{-2j} = 20/27 gives

\sum_{j=0}^{\infty} \Phi_1^j \frac{1}{4} [5, 4; 4, 5] (\Phi_1^T)^j = [94/27, 17/9; 17/9, 5/3].

Hence \Gamma(h) = \Psi_h + \Phi_1^h [94/27, 17/9; 17/9, 5/3], where

\Psi_h = { I, h = 0; \Phi_1^{h-1}(\Phi_1^T + \Phi_1), h > 0 }.

In particular,

\Gamma(0) = [1, 0; 0, 1] + [94/27, 17/9; 17/9, 5/3] = [121/27, 17/9; 17/9, 8/3],

and for h \ge 1

\Gamma(h) = \Phi_1^{h-1} ((\Phi_1^T + \Phi_1) + \Phi_1 [94/27, 17/9; 17/9, 5/3]) = \frac{1}{2^h} [1, h - 1; 0, 1] [199/27, 41/9; 26/9, 11/3].
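A numerical check of \Gamma(0) and \Gamma(1) by truncating the MA(\infty) sum (an addition, assuming WN(0, I) noise as above):

```python
# Gamma(h) = sum_j Psi_{h+j} Psi_j' with Psi_0 = I, Psi_j = Phi^{j-1}(Phi' + Phi).
import numpy as np

Phi = 0.5 * np.array([[1.0, 1.0], [0.0, 1.0]])

def Psi(j):
    if j == 0:
        return np.eye(2)
    return np.linalg.matrix_power(Phi, j - 1) @ (Phi.T + Phi)

def Gamma(h, J=80):
    return sum(Psi(h + j) @ Psi(j).T for j in range(J))

print(Gamma(0))   # ~ [[121/27, 17/9], [17/9, 8/3]]
print(Gamma(1))   # ~ (1/2) [[199/27, 41/9], [26/9, 11/3]]
```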
Chapter 8
Problem 8.7. First we would like to show that

X_{t+1} = [Z_{t+1}; Z_t]   (8.1)

is a solution to

X_{t+1} = [0, 0; 1, 0] X_t + [1; 0] Z_{t+1}.   (8.2)

Let A = [0, 0; 1, 0] and B = [1; 0]. Then

A X_t + B Z_{t+1} = [0, 0; 1, 0] [Z_t; Z_{t-1}] + [1; 0] Z_{t+1} = [0; Z_t] + [Z_{t+1}; 0] = [Z_{t+1}; Z_t] = X_{t+1},

and hence (8.1) is a solution to equation (8.2). Next we prove that (8.1) is the unique solution to (8.2). Let X'_{t+1} be another solution to equation (8.2) and consider the difference

X_{t+1} - X'_{t+1} = A X_t + B Z_{t+1} - A X'_t - B Z_{t+1} = A (X_t - X'_t) = A^2 (X_{t-1} - X'_{t-1}) = 0,

since A^2 = 0. So the solution is unique. For the state process we have

\mu_X(t) = [E[Z_t]; E[Z_{t-1}]] = [0; 0]

and

\Gamma_X(t + h, t) = [\gamma_{11}(t+h, t), \gamma_{12}(t+h, t); \gamma_{21}(t+h, t), \gamma_{22}(t+h, t)] = \sigma^2 [1_{\{0\}}(h), 1_{\{-1\}}(h); 1_{\{1\}}(h), 1_{\{0\}}(h)],

i.e. neither of them depends on t. Now we see that

Y_t = [1, \theta] X_t = [1, \theta] [Z_t; Z_{t-1}] = Z_t + \theta Z_{t-1},

which is the MA(1) process.
Problem 8.9. Let Y_t consist of Y_{t,1} and Y_{t,2}. Then we can write

Y_t = [Y_{t,1}; Y_{t,2}] = [G_1 X_{t,1} + W_{t,1}; G_2 X_{t,2} + W_{t,2}] = [G_1, 0; 0, G_2] [X_{t,1}; X_{t,2}] + [W_{t,1}; W_{t,2}].

Set

G = [G_1, 0; 0, G_2],  X_t = [X_{t,1}; X_{t,2}],  W_t = [W_{t,1}; W_{t,2}].

Similarly,

X_{t+1} = [X_{t+1,1}; X_{t+1,2}] = [F_1 X_{t,1} + V_{t,1}; F_2 X_{t,2} + V_{t,2}] = [F_1, 0; 0, F_2] [X_{t,1}; X_{t,2}] + [V_{t,1}; V_{t,2}],

and set

F = [F_1, 0; 0, F_2],  V_t = [V_{t,1}; V_{t,2}].

Then Y_t = G X_t + W_t and X_{t+1} = F X_t + V_t is a state-space representation for {Y_t}.
The steady-state solution \Omega satisfies

\Omega = \sigma_v^2 + \frac{\Omega \sigma_w^2}{\Omega + \sigma_w^2},

which is equivalent to

\Omega - \sigma_v^2 - \frac{\Omega \sigma_w^2}{\Omega + \sigma_w^2} = 0.

Multiplying with \Omega + \sigma_w^2 we get

\Omega^2 - \Omega \sigma_v^2 - \sigma_v^2 \sigma_w^2 = 0,  so  \Omega = \frac{\sigma_v^2}{2} \pm \sqrt{\frac{\sigma_v^4}{4} + \sigma_v^2 \sigma_w^2} = \frac{\sigma_v^2 \pm \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2}}{2}.

Since \Omega \ge 0 we take the positive root, which is the solution we wanted.
Problem 8.14. We have that

\Omega_{t+1} = \Omega_t + \sigma_v^2 - \frac{\Omega_t^2}{\Omega_t + \sigma_w^2},

and since \sigma_v^2 = \Omega^2/(\Omega + \sigma_w^2) for the steady-state solution \Omega, subtracting yields

\Omega_{t+1} - \Omega = \Omega_t - \Omega + \frac{\Omega^2}{\Omega + \sigma_w^2} - \frac{\Omega_t^2}{\Omega_t + \sigma_w^2}
= \frac{\sigma_w^2 \Omega_t}{\Omega_t + \sigma_w^2} - \frac{\sigma_w^2 \Omega}{\Omega + \sigma_w^2}
= \sigma_w^2 (\frac{\Omega_t}{\Omega_t + \sigma_w^2} - \frac{\Omega}{\Omega + \sigma_w^2}).

Hence

(\Omega_{t+1} - \Omega)(\Omega_t - \Omega) = \sigma_w^2 (\frac{\Omega_t}{\Omega_t + \sigma_w^2} - \frac{\Omega}{\Omega + \sigma_w^2})(\Omega_t - \Omega).

Now, note that the function f(x) = x/(x + \sigma_w^2) is increasing in x. Indeed, f'(x) = \sigma_w^2/(x + \sigma_w^2)^2 > 0. Thus we get that for \Omega_t > \Omega both factors are > 0, and for \Omega_t < \Omega both factors are < 0. Hence (\Omega_{t+1} - \Omega)(\Omega_t - \Omega) \ge 0.
Matching the MA(1) moment equations \sigma^2 (1 + \theta^2) = \sigma_v^2 + 2\sigma_w^2 and \sigma^2 \theta = -\sigma_w^2 for the differenced observations gives

\theta^2 + \frac{\sigma_v^2 + 2\sigma_w^2}{\sigma_w^2} \theta + 1 = 0,

so that

\theta = -\frac{2\sigma_w^2 + \sigma_v^2 - \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2}}{2\sigma_w^2},

choosing the root with |\theta| \le 1. To show that \theta = -\sigma_w^2/(\Omega + \sigma_w^2), recall the steady-state solution

\Omega = \frac{\sigma_v^2 + \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2}}{2},

which gives

-\frac{\sigma_w^2}{\Omega + \sigma_w^2} = -\frac{2\sigma_w^2}{2\sigma_w^2 + \sigma_v^2 + \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2}}
= -\frac{2\sigma_w^2 (2\sigma_w^2 + \sigma_v^2 - \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2})}{(2\sigma_w^2 + \sigma_v^2)^2 - (\sigma_v^4 + 4\sigma_v^2 \sigma_w^2)}
= -\frac{2\sigma_w^2 (2\sigma_w^2 + \sigma_v^2 - \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2})}{4\sigma_w^4}
= -\frac{2\sigma_w^2 + \sigma_v^2 - \sqrt{\sigma_v^4 + 4\sigma_v^2 \sigma_w^2}}{2\sigma_w^2} = \theta,

where the third equality uses (2\sigma_w^2 + \sigma_v^2)^2 - \sigma_v^4 - 4\sigma_v^2 \sigma_w^2 = 4\sigma_w^4.
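A numerical sketch of both facts (an addition; the values of \sigma_v^2 and \sigma_w^2 are arbitrary):

```python
# Iterate the Riccati recursion from Problem 8.14: Omega_t converges to the
# fixed point, and theta = -sigma_w^2/(Omega + sigma_w^2) solves the MA(1)
# quadratic above.
import numpy as np

sv2, sw2 = 1.0, 2.0
Omega_star = (sv2 + np.sqrt(sv2**2 + 4 * sv2 * sw2)) / 2

Om = 10.0                    # start above the fixed point
for _ in range(50):
    Om = Om + sv2 - Om**2 / (Om + sw2)
print(Om, Omega_star)        # equal after convergence (both 2.0 here)

theta = -sw2 / (Omega_star + sw2)
print(sw2 * theta**2 + (sv2 + 2 * sw2) * theta + sw2)   # ~ 0
```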
Chapter 10
Problem 10.5. First a remark on the existence of such a process: we assume for simplicity that p = 1. A necessary and sufficient condition for the existence of a causal, stationary solution to the ARCH(1) equations with E[Z_t^4] < \infty is that \alpha_1^2 < 1/3. If p > 1, existence of a causal, stationary solution is much more complicated. Let us now proceed with the solution to the problem.

With Z_t = e_t \sqrt{h_t} and h_t = \alpha_0 + \sum_{i=1}^{p} \alpha_i Z_{t-i}^2, we have

e_t^2 (1 + \sum_{i=1}^{p} \alpha_i Y_{t-i}) = e_t^2 (1 + \sum_{i=1}^{p} \alpha_i \frac{Z_{t-i}^2}{\alpha_0}) = \frac{e_t^2 h_t}{\alpha_0} = \frac{Z_t^2}{\alpha_0} = Y_t,
hence Y_t = Z_t^2/\alpha_0 satisfies the given equation. Let us now compute its ACVF. We assume h \ge 1; then

E[Y_t Y_{t-h}] = E[e_t^2 (1 + \sum_{i=1}^{p} \alpha_i Y_{t-i}) Y_{t-h}]
= E[e_t^2] E[(1 + \sum_{i=1}^{p} \alpha_i Y_{t-i}) Y_{t-h}]
= E[Y_{t-h}] + \sum_{i=1}^{p} \alpha_i E[Y_{t-i} Y_{t-h}].

Writing E[Y_{t-i} Y_{t-h}] = \gamma_Y(h - i) + \mu_Y^2 with \mu_Y = E[Y_t], this becomes

\gamma_Y(h) + \mu_Y^2 = \mu_Y + \sum_{i=1}^{p} \alpha_i (\gamma_Y(h - i) + \mu_Y^2),

and then

\gamma_Y(h) - \sum_{i=1}^{p} \alpha_i \gamma_Y(h - i) = \mu_Y + \mu_Y^2 (\sum_{i=1}^{p} \alpha_i - 1).

We can compute \mu_Y as

\mu_Y = E[Y_t] = E[e_t^2 (1 + \sum_{i=1}^{p} \alpha_i Y_{t-i})] = 1 + \mu_Y \sum_{i=1}^{p} \alpha_i,

so that \mu_Y = 1/(1 - \sum_{i=1}^{p} \alpha_i). Substituting,

\mu_Y + \mu_Y^2 (\sum_{i=1}^{p} \alpha_i - 1) = \frac{1}{1 - \sum_{i=1}^{p} \alpha_i} - \frac{1 - \sum_{i=1}^{p} \alpha_i}{(1 - \sum_{i=1}^{p} \alpha_i)^2} = 0.

Hence

\gamma_Y(h) - \sum_{i=1}^{p} \alpha_i \gamma_Y(h - i) = 0,  h \ge 1,

which corresponds to the Yule-Walker equations for the ACF of an AR(p) process

W_t = \phi_1 W_{t-1} + ... + \phi_p W_{t-p} + Z_t.
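A simulation sketch of this AR(p)-type ACF structure for p = 1 (an addition; the values of \alpha_0 and \alpha_1 are arbitrary, with 3\alpha_1^2 < 1 so that fourth moments exist):

```python
# For ARCH(1), the Yule-Walker relation above gives rho_Y(h) = alpha_1^h.
import numpy as np

rng = np.random.default_rng(4)
a0, a1, n = 1.0, 0.4, 500_000
e = rng.standard_normal(n)
Z2 = np.empty(n)
Z2[0] = a0
for t in range(1, n):
    Z2[t] = e[t]**2 * (a0 + a1 * Z2[t - 1])   # squared ARCH(1) recursion
Y = Z2 / a0

def acf(x, h):
    x = x - x.mean()
    return (x[h:] * x[:-h]).mean() / x.var()

print([round(acf(Y, h), 3) for h in (1, 2, 3)], [a1, a1**2, a1**3])
```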
31