
Summary of Lecture Notes

October 4, 2017

1 Deterministic Process
Deterministic (meaning a lack of randomness or free choice) is the opposite of random. It tells us that some future event
can be calculated exactly, without any involvement of randomness. For example, the conversion
between Celsius and Kelvin is deterministic, because the formula involves nothing random.

Kelvin = Celsius + 273.15.

The process of calculating the output (in this example, taking the Celsius value and adding 273.15)
is called a deterministic process. Similarly, predicting the amount of money in a bank account is a
deterministic process. If you know the initial deposit and the interest rate, you can determine the
amount in the account after one year exactly.

Amount after one year = Initial Amount x (1 + Annual Interest Rate).


Another good example is a signal consisting of a single sinusoid, such as
\[
x(t) = A \cos(2\pi f_0 t + \phi_0),
\]
where A is the amplitude, f_0 is the frequency in cycles per second (hertz), and \phi_0 is the phase
in radians.
If something is deterministic, that means you have all of the data necessary to determine the outcome
with 100% certainty.
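
As a small illustration (a minimal MATLAB/Octave sketch; the sample rate and parameter values below are illustrative assumptions, not taken from the notes), evaluating the sinusoid is a deterministic process: recomputing it always reproduces exactly the same samples.

% Deterministic signal: a single sinusoid x(t) = A cos(2*pi*f0*t + phi0).
fs   = 1000;              % sampling rate in Hz (assumed)
t    = (0:1/fs:0.1)';     % 100 ms time grid
A    = 2;                 % amplitude (assumed)
f0   = 50;                % frequency in Hz (assumed)
phi0 = pi/4;              % phase in radians (assumed)
x1 = A*cos(2*pi*f0*t + phi0);
x2 = A*cos(2*pi*f0*t + phi0);   % recomputing involves no randomness
disp(max(abs(x1 - x2)))         % prints 0: the outcome is fully determined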

2 Stochastic/Random Process
A stochastic process, also called a random process, cannot be described exactly by a deterministic
equation; it is modeled in probabilistic terms. For example, rolling a die is a random process. A stochastic
process generates random variables that can be described by the following parameters.

2.1 Mean Value


If {x[n]} is a random variable, its mean can be found as
\[
\mu = E\{x[n]\} = \int x[n]\, p(x[n])\, dx[n],
\]
where p(x[n]) is the probability density function (pdf) of x[n]. Properties of the expectation operator:

Linearity:
\[
E\{a\,x[n] + b\,y[m]\} = a\,E\{x[n]\} + b\,E\{y[m]\}.
\]
In general,
\[
E\{x[m]\,y[n]\} \neq E\{x[m]\}\,E\{y[n]\},
\]
unless {x[m]} and {y[n]} are independent random processes.

If y[n] = g(x[n]) and the pdf of x[n] is p(x[n]), then
\[
E\{y[n]\} = \int g(x[n])\, p(x[n])\, dx[n], \qquad (1)
\]
i.e. we do not need to know the probability density function (pdf) of {y[n]} to find its
expected value.
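
To make these properties concrete, here is a minimal MATLAB/Octave sketch (sample size, distributions and the function g are illustrative assumptions) that approximates the expectations by sample averages:

% Approximate expectations with sample means.
M = 1e6;                      % number of samples (assumed)
x = randn(M,1);               % x ~ N(0,1) (assumed)
y = 2*randn(M,1) + 3;         % an independent variable (assumed)
a = 5; b = -2;
lhs = mean(a*x + b*y);        % estimate of E{a x + b y}
rhs = a*mean(x) + b*mean(y);  % a E{x} + b E{y}
disp([lhs rhs])               % the two estimates agree closely (linearity)

% Equation (1): E{g(x)} can be estimated directly from samples of x,
% without first finding the pdf of y = g(x).
g = @(v) v.^2;
disp(mean(g(x)))              % estimate of E{x^2}, close to 1 here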

2.2 Convolution

The convolution of h(n) and x(n) is defined as
\begin{align*}
y(n) = h(n) * x(n) &= \sum_{l=0}^{L-1} h(l)\, x(n-l) \\
                   &= \sum_{l=0}^{L-1} h(n-l)\, x(l). \qquad (2)
\end{align*}
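
As a check on (2), a short MATLAB/Octave sketch (the sequences h and x are illustrative assumptions) that evaluates the convolution sum directly and compares it with the built-in conv function:

% Direct evaluation of y(n) = sum_l h(l) x(n-l), compared with conv().
h = [1 2 3];              % impulse response, L = 3 taps (assumed)
x = [4 -1 0 2 5];         % input sequence (assumed)
L = length(h); N = length(x);
y = zeros(1, N + L - 1);
for n = 1:length(y)
    for l = 1:L
        k = n - l + 1;    % position of x(n-l) with 1-based indexing
        if k >= 1 && k <= N
            y(n) = y(n) + h(l)*x(k);
        end
    end
end
disp(y)
disp(conv(h, x))          % identical result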

2.3 Correlation and covariance (auto-correlation)

The auto-correlation is defined as
\begin{align*}
r_{xx}(l) &= E\{x^*(n)\, x(n+l)\} \\
          &= E\{x^*(n-l)\, x(n)\} = E\{x(n)\, x^*(n-l)\} \\
          &= \left(E\{x(n-l)\, x^*(n)\}\right)^*,
\end{align*}
and therefore
\[
r_{xx}(l) = \left(E\{x^*(n)\, x(n-l)\}\right)^* = r_{xx}^*(-l).
\]
The cross-correlation is defined as
\begin{align*}
r_{xy}(l) &= E\{x^*(n)\, y(n+l)\} \\
          &= E\{x^*(n-l)\, y(n)\} = E\{y(n)\, x^*(n-l)\} \\
          &= \left(E\{x(n-l)\, y^*(n)\}\right)^*,
\end{align*}
and, since r_{yx}(-l) = E\{y^*(n)\, x(n-l)\}, this gives r_{xy}(l) = r_{yx}^*(-l). Similarly,
\begin{align*}
r_{yx}(l) &= E\{y^*(n)\, x(n+l)\} \\
          &= E\{y^*(n-l)\, x(n)\} = E\{x(n)\, y^*(n-l)\} \\
          &= \left(E\{y(n-l)\, x^*(n)\}\right)^*.
\end{align*}
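
These definitions can be checked numerically. The following MATLAB/Octave sketch (the test signals are illustrative assumptions; xcorr comes from the Signal Processing Toolbox) estimates the correlations from data and verifies the conjugate-symmetry property r_xx(l) = r_xx*(-l):

% Estimate auto- and cross-correlations and check conjugate symmetry.
M = 1e4;                                     % number of samples (assumed)
x = (randn(M,1) + 1j*randn(M,1))/sqrt(2);    % complex white test signal (assumed)
y = filter([1 0.5], 1, x);                   % a correlated second signal (assumed)
rxx = xcorr(x, x, 10, 'unbiased');           % estimates of r_xx(l) = E{x*(n) x(n+l)}
rxy = xcorr(y, x, 10, 'unbiased');           % estimates of r_xy(l) = E{x*(n) y(n+l)}
disp(max(abs(rxx - conj(flipud(rxx)))))      % r_xx(l) - r_xx*(-l): zero up to round-off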

Application-1:

[Figure 1: Channel estimation model. The unknown channel maps the input x(n) to the desired signal d(n); an FIR filter with taps h(0), ..., h(4) driven by the same input produces y(n), and the error is e(n) = d(n) - y(n).]

\begin{align*}
y(n) &= \sum_{l=0}^{L-1} h(l)\, x(n-l) \\
e(n) &= d(n) - \sum_{l=0}^{L-1} h(l)\, x(n-l) \\
E\{|e(n)|^2\} &= E\left\{ \left(d(n) - \sum_{l=0}^{L-1} h(l)\, x(n-l)\right) \left(d^*(n) - \sum_{m=0}^{L-1} h^*(m)\, x^*(n-m)\right) \right\} \\
 &= E\left\{ d(n)\, d^*(n) - \sum_{l=0}^{L-1} h(l)\, x(n-l)\, d^*(n) - \sum_{m=0}^{L-1} h^*(m)\, x^*(n-m)\, d(n) \right. \\
 &\qquad\left. {} + \sum_{l=0}^{L-1}\sum_{m=0}^{L-1} h(l)\, h^*(m)\, x(n-l)\, x^*(n-m) \right\}
\end{align*}

Minimization of the cost function J = E{|e(n)|^2} with respect to h*(i) yields
\begin{align*}
\frac{\partial J}{\partial h^*(i)} &= E\left\{ -x^*(n-i)\, d(n) + \sum_{l=0}^{L-1} h(l)\, x(n-l)\, x^*(n-i) \right\} \\
 &= E\left\{ -x^*(n)\, d(n+i) + \sum_{l=0}^{L-1} h(l)\, x^*(n-i)\, x(n-l) \right\} \\
 &= E\left\{ -x^*(n)\, d(n+i) + \sum_{l=0}^{L-1} h(l)\, x^*(n)\, x(n-l+i) \right\}.
\end{align*}
Setting the derivative to zero gives
\begin{align*}
E\{x^*(n)\, d(n+i)\} &= \sum_{l=0}^{L-1} h(l)\, E\{x^*(n)\, x(n+(i-l))\} \\
r_{xd}(i) &= \sum_{l=0}^{L-1} h(l)\, r_{xx}(i-l). \qquad (3)
\end{align*}

Equation (3), written for i = 0, 1, ..., N-1, can be expressed in vector form as
\[
\begin{bmatrix} r_{xd}(0) \\ r_{xd}(1) \\ \vdots \\ r_{xd}(N-1) \end{bmatrix}
=
\begin{bmatrix}
r_{xx}(0) & r_{xx}(-1) & \cdots & r_{xx}(-(N-1)) \\
r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(-N+2) \\
\vdots & \vdots & \ddots & \vdots \\
r_{xx}(N-1) & r_{xx}(N-2) & \cdots & r_{xx}(0)
\end{bmatrix}
\begin{bmatrix} h(0) \\ h(1) \\ \vdots \\ h(N-1) \end{bmatrix}. \qquad (4)
\]
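
As a quick sanity check on (3) and (4) (this special case is not worked out in the notes, but follows directly from them): if the input x(n) is zero-mean white with variance \sigma_x^2, so that r_{xx}(l) = \sigma_x^2\,\delta(l), the matrix in (4) reduces to \sigma_x^2 I and the filter taps follow immediately:
\[
h(i) = \frac{r_{xd}(i)}{\sigma_x^2}, \qquad i = 0, 1, \ldots, N-1.
\]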

Application-2:

[Figure 2: Channel estimation model. The input u(n) drives an FIR filter with coefficients w*(0), ..., w*(4); its output y(n) is subtracted from the desired signal d(n) to give the error e(n).]

\begin{align*}
y(n) &= \sum_{l=0}^{L-1} w^*(l)\, u(n-l) \\
e(n) &= d(n) - \sum_{l=0}^{L-1} w^*(l)\, u(n-l) \\
E\{|e(n)|^2\} &= E\left\{ \left(d(n) - \sum_{l=0}^{L-1} w^*(l)\, u(n-l)\right) \left(d^*(n) - \sum_{m=0}^{L-1} w(m)\, u^*(n-m)\right) \right\} \\
 &= E\left\{ d(n)\, d^*(n) - \sum_{l=0}^{L-1} w^*(l)\, u(n-l)\, d^*(n) - \sum_{m=0}^{L-1} w(m)\, u^*(n-m)\, d(n) \right. \\
 &\qquad\left. {} + \sum_{l=0}^{L-1}\sum_{m=0}^{L-1} w^*(l)\, u(n-l)\, u^*(n-m)\, w(m) \right\}
\end{align*}

The gradient of the cost function with respect to w*(i) yields
\begin{align*}
\nabla_i J &= E\left\{ -u(n-i)\, d^*(n) + \sum_{m=0}^{L-1} u(n-i)\, u^*(n-m)\, w(m) \right\} \\
 &= -E\left\{ u(n-i) \left( d^*(n) - \sum_{m=0}^{L-1} u^*(n-m)\, w(m) \right) \right\} \\
 &= -E\{ u(n-i)\, e^*(n) \}.
\end{align*}

The optimum values of the filter coefficients w(i) can be found by solving
\begin{align*}
E\left\{ u(n-i) \left( d^*(n) - \sum_{m=0}^{L-1} u^*(n-m)\, w_o(m) \right) \right\} &= 0 \\
E\{ u(n-i)\, e_o^*(n) \} &= 0, \qquad i = 0, 1, 2, \ldots \qquad (5)
\end{align*}

where e_o(n) denotes the value of the error at the optimum values of w(i). Expression (5) states
that the necessary and sufficient condition for the cost function J to attain its minimum value
is that the corresponding value of the estimation error e_o(n) is orthogonal to each input sample
that enters into the estimation of the desired response at time n. This is called the principle of
orthogonality.
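
The orthogonality principle can be verified numerically. The following MATLAB/Octave sketch (the signal model, filter length and sample size are illustrative assumptions, and sample averages replace the true expectations) computes the optimum coefficients by least squares and checks condition (5):

% Verify the principle of orthogonality (5) on sample data.
M = 1e5;                                     % number of samples (assumed)
L = 4;                                       % filter length (assumed)
u = (randn(M,1) + 1j*randn(M,1))/sqrt(2);    % input process u(n) (assumed)
d = filter([1; 0.6; -0.3; 0.1], 1, u);       % desired response d(n) (assumed model)
U = zeros(M, L);                             % column i+1 holds u(n-i), i = 0, ..., L-1
for i = 0:L-1
    U(:, i+1) = [zeros(i,1); u(1:M-i)];
end
v  = (U'*U) \ (U'*d);     % sample Wiener solution (v = conj(wo))
eo = d - U*v;             % estimation error at the optimum coefficients
disp(max(abs(U'*eo)/M))   % estimate of E{u*(n-i) eo(n)}, the conjugate of the
                          % quantity in (5): zero up to round-off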
At the optimum point,
\begin{align*}
E\left\{ u(n-i) \left( d^*(n) - \sum_{m=0}^{L-1} u^*(n-m)\, w_o(m) \right) \right\} &= 0 \\
E\{ u(n-i)\, d^*(n) \} - \sum_{m=0}^{L-1} E\{ u(n-i)\, u^*(n-m) \}\, w_o(m) &= 0 \\
E\{ d^*(n)\, u(n-i) \} &= \sum_{m=0}^{L-1} E\{ u^*(n-m)\, u(n-i) \}\, w_o(m) \\
r_{du}(i) &= \sum_{m=0}^{L-1} r_{uu}(m-i)\, w_o(m), \qquad (6)
\end{align*}
where r_{du}(i) denotes E\{d^*(n)\, u(n-i)\}.

This set of equations is called the Wiener-Hopf equations; they are used to find the filter coefficients
and, in vector form, can be written as
\[
\begin{bmatrix} r_{du}(0) \\ r_{du}(1) \\ \vdots \\ r_{du}(N-1) \end{bmatrix}
=
\begin{bmatrix}
r_{uu}(0) & r_{uu}(1) & \cdots & r_{uu}(N-1) \\
r_{uu}(-1) & r_{uu}(0) & \cdots & r_{uu}(N-2) \\
\vdots & \vdots & \ddots & \vdots \\
r_{uu}(-(N-1)) & r_{uu}(-(N-2)) & \cdots & r_{uu}(0)
\end{bmatrix}
\begin{bmatrix} w_o(0) \\ w_o(1) \\ \vdots \\ w_o(N-1) \end{bmatrix}. \qquad (7)
\]

Program to estimate the filter coefficient values:

% Estimate the channel taps h from the training input x and the channel
% output d by solving the normal equations (4) with sample correlations.
clear all; clc
N = 8; h = [5 3 1 2 10]';                  % true channel taps; N coefficients are estimated
x = 1/sqrt(2)*(sign(randn(N,1)) + j*sign(randn(N,1)));   % QPSK-like training input
d = conv(h,x);                             % channel output (desired signal)
Ld = length(d);
rxx = conj(xcorr(x,x));                    % autocorrelation sequence of x (conjugated)
rdx = xcorr(d,x); rdx0 = rdx(Ld:Ld+N-1);   % cross-correlation at lags 0, ..., N-1
R = toeplitz(rxx(N:2*N-1));                % N x N Toeplitz autocorrelation matrix, cf. (4)
Eh1 = inv(R)*rdx0;                         % estimated filter coefficients
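
In practice, solving the system with the backslash operator, Eh1 = R\rdx0, is numerically preferable to forming inv(R) explicitly, and a longer training sequence yields smoother correlation estimates and hence a more reliable estimate of the channel taps.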
