
Chapter 3

The AR(1) Process


The archetypical time-series process is the Gaussian first-order autoregressive process, or the AR(1) process. This process can exhibit dynamics in that the distribution of future values can depend on the current value.1 This is useful for describing many variables in economics and finance (interest rates, for example) where it's obvious that what is going to happen tomorrow depends on the state of the world today. This note provides a very terse description of the AR(1) process and its unconditional and conditional distribution. For a more formal, in-depth treatment of the AR(1) process (and many more-sophisticated time-series processes) see the textbook Time Series Analysis by James Hamilton, Princeton University Press, 1994.

What is an AR(1)? Let time be indexed with t and let t \in \{\dots, -1, 0, 1, 2, \dots\}. Let \varepsilon_t \sim N(0, 1) be an i.i.d. shock. This means that, no matter what the date is, the distribution of \varepsilon_t remains the same: the standard normal distribution. The random variable z_t follows an AR(1) if we can write it as

z_t = (1 - \phi)\mu + \phi z_{t-1} + \sigma \varepsilon_t ,   (3.1)

where \mu, \phi, and \sigma are fixed scalars, or the parameters of the process. Equation (3.1) is what we'll call the recursive representation of the AR(1): a formulation that recurs in the same form at each date t. An alternative representation is called the infinite-order moving-average representation. To see this, lag equation (3.1) by one period and substitute the result back into equation (3.1):

z_t = (1 - \phi)\mu + \phi \big[ (1 - \phi)\mu + \phi z_{t-2} + \sigma \varepsilon_{t-1} \big] + \sigma \varepsilon_t .   (3.2)

Then, do the same for z_{t-2}. Keep going. After doing this k + 1 times you'll get:

z_t = (1 - \phi)\mu \sum_{j=0}^{k-1} \phi^j + \phi^k z_{t-k} + \sigma \sum_{j=0}^{k-1} \phi^j \varepsilon_{t-j}

1 If the distribution of future values depends on current and past values we would say that the process is greater than first-order, and we'd label it AR(n), for n > 1.


Assume that -1 < \phi < 1. This assumption is called the stationarity assumption. Let k \to \infty:

z_t = \mu + \sigma \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j}   (3.3)

This is the infinite-order moving average. It shows us that the AR(1) variable, z_t, can be written as an infinite sum of past shocks, where more distant shocks get smaller and smaller weights (the coefficients in equation (3.3) are geometrically declining since |\phi| < 1). The case of \phi = 1 is called the unit-root case, where z_t has infinite memory in that the effect of past shocks never dies out.
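To make the two representations concrete, here is a minimal simulation sketch (Python with NumPy; the parameter values and variable names are illustrative choices, not from the text). It builds a path with the recursion (3.1) and then reconstructs the most recent value from the shocks alone using a truncated version of the moving average (3.3).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: phi = autocorrelation, mu = unconditional mean,
# sigma = shock scale. None of these values come from the text.
phi, mu, sigma = 0.9, 2.0, 1.0

T = 10_000
eps = rng.standard_normal(T)          # i.i.d. N(0,1) shocks, epsilon_t

# Recursive representation, equation (3.1).
z = np.empty(T)
z[0] = mu                             # start at the unconditional mean
for t in range(1, T):
    z[t] = (1 - phi) * mu + phi * z[t - 1] + sigma * eps[t]

# Moving-average representation, equation (3.3), truncated at k lags:
# z_t ~= mu + sigma * sum_{j=0}^{k-1} phi^j * eps_{t-j}.
k = 200                               # phi^200 is ~7e-10, so truncation is harmless
weights = sigma * phi ** np.arange(k)
t = T - 1
z_ma = mu + weights @ eps[t - k + 1 : t + 1][::-1]

print(z[t], z_ma)                     # the two representations agree
```

The geometric decay of the weights is why a couple of hundred lags suffice here; for \phi closer to one the truncation point would have to grow.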

3.1 Conditional Distribution

What do we mean by the conditional distribution of the process? It is the distribution of z_t conditional on knowing z_{t-1}.2 We'll use the notation \sim_{t-1} to denote "conditionally distributed." This one is pretty easy. All you have to know is that linear functions of normally-distributed random variables are themselves normally distributed. Look at equation (3.1). Conditional on knowing z_{t-1}, z_t is a linear function of the standard normal variable, \varepsilon_t. So, the conditional distribution of z_t is a normal distribution. All we have to do to characterize this distribution is work out its mean and its variance (since the normal distribution is completely characterized by its mean and its variance). Using E_{t-1}(\cdot) and Var_{t-1}(\cdot) to denote conditional mean and variance,

E_{t-1}(z_t) = (1 - \phi)\mu + \phi z_{t-1}

Var_{t-1}(z_t) = \sigma^2

Therefore,

z_t \sim_{t-1} N\big( (1 - \phi)\mu + \phi z_{t-1} ,\; \sigma^2 \big)

2 Here, we'll focus on the one-period-ahead conditional distribution, or the distribution of z_t given z_{t-1}. It is a straightforward extension (using the recursive-substitution trick we used to derive the infinite-order moving-average representation) to derive the multi-period analog, or the distribution of z_{t+k} given z_{t-1}.
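As a quick sanity check, here is a small Monte Carlo sketch (Python/NumPy; parameter values are illustrative, not from the text): fix z_{t-1}, draw many one-step shocks, and compare sample moments with the formulas above.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, mu, sigma = 0.9, 2.0, 1.0        # illustrative parameters
z_prev = 5.0                          # the known value of z_{t-1}

# Many one-step draws of z_t = (1 - phi)*mu + phi*z_{t-1} + sigma*eps_t.
eps = rng.standard_normal(1_000_000)
z_t = (1 - phi) * mu + phi * z_prev + sigma * eps

print(z_t.mean(), (1 - phi) * mu + phi * z_prev)  # sample vs. E_{t-1}(z_t)
print(z_t.var(), sigma ** 2)                      # sample vs. Var_{t-1}(z_t)
```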

3.2 Unconditional Distribution

What do we mean by unconditional distribution? We mean the distribution of z_t, presuming no knowledge of z_{t-1} (or z_{t-2} or z_{t-3} . . . ). Equivalently, the distribution of z_t conditional on knowing z_{t-k} for large k. That is, the distribution of z_t presuming that we were thinking about it a very long time ago. Or (at the risk of being tedious), if today is date t, the unconditional distribution is the distribution of z_{t+k} for large k, far off in the distant future. To figure this out, look at equation (3.3), the infinite-order moving average. The left-hand side is a linear function of normals. Therefore the unconditional distribution is also normal. Since each \varepsilon_{t-k} is an i.i.d. standard normal, and since |\phi| < 1, the mean and variance are easy to work out:

E(z_t) = \mu

Var(z_t) = Var\Big( \sigma \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j} \Big) = \sigma^2 \sum_{j=0}^{\infty} \phi^{2j} \, Var(\varepsilon_{t-j}) = \frac{\sigma^2}{1 - \phi^2}

The unconditional distribution is therefore

z_t \sim N\Big( \mu ,\; \frac{\sigma^2}{1 - \phi^2} \Big)

Note that the unconditional variance is larger than the conditional variance as long as \phi \neq 0 (and -1 < \phi < 1 so that things are well defined). If you think about it, this makes sense. If you tell me what z_{t-1} is, then where z_t ends up relative to z_{t-1} depends on just one shock: \varepsilon_t. There's only so much that can happen (given normality of \varepsilon_t and finite \sigma). But the farther I look into the future (or the closer I get to wondering about the unconditional distribution) the larger the number of shocks which come into play. So the set of possibilities (loosely speaking, the variance) grows. How fast it grows (and how fast it converges) depends on how large \phi is.
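The same kind of check works for the unconditional moments: simulate one long path, discard a burn-in so the initial condition is forgotten, and compare sample moments with \mu and \sigma^2/(1-\phi^2). Again a sketch with illustrative parameters of my own choosing.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, mu, sigma = 0.9, 2.0, 1.0        # illustrative parameters
T, burn = 1_000_000, 1_000            # long path, short burn-in

eps = rng.standard_normal(T)
z = np.empty(T)
z[0] = 0.0                            # deliberately start away from mu
for t in range(1, T):
    z[t] = (1 - phi) * mu + phi * z[t - 1] + sigma * eps[t]

print(z[burn:].mean(), mu)                          # ~ mu = 2.0
print(z[burn:].var(), sigma ** 2 / (1 - phi ** 2))  # ~ 1/0.19 ~ 5.26
```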

3.3 Dynamics

By the dynamics of an AR(1) we basically mean the coefficient of first-order autocorrelation. This is the last moment we'll need. It is also an unconditional moment. It is a (scaled) measure of the linear dependence of today's value z_t on yesterday's value z_{t-1}. It is defined as

Corr(z_t, z_{t-1}) = \frac{Cov(z_t, z_{t-1})}{Var(z_t)}

To compute the numerator simply sub in the expression for z_t:

Cov(z_t, z_{t-1}) = Cov\big( (1 - \phi)\mu + \phi z_{t-1} + \sigma \varepsilon_t ,\; z_{t-1} \big) = \phi \, Cov(z_{t-1}, z_{t-1}) = \phi \, Var(z_{t-1})

Since these are unconditional moments of a stationary time-series process, we must have that Var(z_{t-1}) = Var(z_t). Therefore,

Corr(z_t, z_{t-1}) = \phi
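The sample first-order autocorrelation of a simulated path should sit close to \phi; a short sketch (same illustrative parameters as before, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, mu, sigma = 0.9, 2.0, 1.0        # illustrative parameters

T = 500_000
eps = rng.standard_normal(T)
z = np.empty(T)
z[0] = mu
for t in range(1, T):
    z[t] = (1 - phi) * mu + phi * z[t - 1] + sigma * eps[t]

# Sample analog of Cov(z_t, z_{t-1}) / Var(z_t).
zc = z - z.mean()
rho_hat = (zc[1:] @ zc[:-1]) / (zc @ zc)
print(rho_hat)                        # should be close to phi = 0.9
```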


3.4 Summary

The AR(1) is the simplest linear time-series process which captures dynamics: the notion that what we know about the future depends on realizations from today, yesterday, and so on. The dynamics of the AR(1) are summarized by its autocorrelation parameter, \phi. If \phi = 0 then there are no dynamics and z_t is i.i.d. normal. If |\phi| < 1 then the process is said to be stationary, and the distant past matters more the closer \phi is to unity. If \phi = 1 then the process is called a unit-root process and its memory is infinite. If |\phi| > 1 then the process is explosive. Do some simulations (a sketch follows below). You'll see that the unit-root process tends to wander around aimlessly, never reverting back to some long-run value. The stationary process reverts back to \mu (it is often called a mean-reverting process) and the speed at which it does so depends on how large \phi is. The conditional distribution of an AR(1) is normal with mean (1 - \phi)\mu + \phi z_{t-1} and variance \sigma^2. The unconditional distribution has mean \mu and variance \sigma^2/(1 - \phi^2).3

Finally, it's worth noting that if we sample geometric Brownian motion at finite intervals the process that we get is a special case: a logarithmic random walk. That is, suppose that

\frac{dz(t)}{z(t)} = \mu \, dt + \sigma \, dW(t) .

Then

\log z(t) = (\mu - \sigma^2/2) \, k + \log z(t-k) + \sigma \sqrt{k} \, \varepsilon(t) ,

where \varepsilon(t) \sim N(0, 1). You can see that this is a special case of equation (3.1), without mean reversion. If we wanted to have mean reversion, then we'd have to start with the Ornstein-Uhlenbeck process instead of geometric Brownian motion.

3 The reason for parameterizing the intercept or drift term in equation (3.1) as (1 - \phi)\mu and not just \mu was precisely so that \mu would be the unconditional mean. It's just more convenient this way.
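Here is the simulation sketch promised above (Python/NumPy; parameters are illustrative): the same shock sequence is pushed through a stationary AR(1) and through a unit-root process, so the only difference between the two paths is \phi.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, T = 0.0, 1.0, 1_000       # illustrative: long-run mean 0, unit shocks
eps = rng.standard_normal(T)

def ar1_path(phi):
    """Simulate equation (3.1) from z_0 = mu, reusing the same shocks."""
    z = np.empty(T)
    z[0] = mu
    for t in range(1, T):
        z[t] = (1 - phi) * mu + phi * z[t - 1] + sigma * eps[t]
    return z

stationary = ar1_path(0.9)           # keeps getting pulled back toward mu
unit_root = ar1_path(1.0)            # no pull: a random walk that wanders

# The stationary path stays in a band around mu; the random walk does not.
print(np.abs(stationary - mu).max(), np.abs(unit_root - mu).max())
```

Plotting the two paths (with matplotlib, say) makes the mean reversion visually obvious.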

3.5 Exercises

1. What is the conditional distribution of z_{t+1} given z_{t-1}? How about z_{t+k} given z_{t-1} for arbitrary k? If you can answer these things then you know how to compute (i) multi-period conditional means, or, equivalently, multi-period forecasts, and (ii) multi-period conditional variances.

2. What is the autocorrelation of any linear function of an AR(1)?

3. Suppose that, instead of equation (3.1), we have

z_t = (1 - \phi)\mu + \phi z_{t-1} + \sigma z_{t-1}^{1/2} \varepsilon_t ,   (3.4)

with \varepsilon_t \sim N(0, 1). What is the conditional distribution of z_t given z_{t-1}? What is the unconditional mean and variance of z_t? Can you give some intuition for why the latter is different than the unconditional variance of the regular AR(1) in equation (3.1)?4

4 Note that (i) strictly speaking this process isn't well defined, since it doesn't rule out negative values for z_t and, therefore, taking the square root of a negative number; (ii) in continuous time its analog is well defined as long as the Feller condition is satisfied, something which ensures that the variance goes to zero fast enough as z goes to zero; (iii) in continuous time the unconditional distribution is gamma; (iv) in discrete time, as long as \sigma isn't too big relative to \mu, everything works out just fine, including the unconditional distribution being (approximately) gamma.
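For intuition on point (iv), here is a hedged sketch (Python/NumPy; the parameter values are mine, chosen so that \sigma is small relative to \mu) simulating equation (3.4). Because the shock is scaled by z_{t-1}^{1/2}, volatility shrinks as the process approaches zero, and with a small enough \sigma the simulated path stays strictly positive.

```python
import numpy as np

rng = np.random.default_rng(5)
phi, mu, sigma = 0.9, 4.0, 0.2        # illustrative; sigma small relative to mu

T = 100_000
z = np.empty(T)
z[0] = mu
for t in range(1, T):
    # Equation (3.4): the shock scale is sigma * sqrt(z_{t-1}).
    z[t] = (1 - phi) * mu + phi * z[t - 1] \
        + sigma * np.sqrt(z[t - 1]) * rng.standard_normal()

print(z.min())                        # stays strictly positive for small sigma
print(z.mean(), mu)                   # the unconditional mean is still mu
```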
