October 4, 2017
1 Deterministic Process
Deterministic (meaning the outcome is fully determined, with no room for chance) is the opposite of random. It tells us that some future event can be calculated exactly, without the involvement of randomness. For example, the conversion between Celsius and Kelvin is deterministic, because the formula involves no randomness.
The process of calculating the output (in this example, taking the Celsius value and adding 273.15) is called a deterministic process. Similarly, predicting the amount of money in a bank account is a deterministic process: if you know the initial deposit and the interest rate, you can determine the amount in the account after one year exactly.
More generally, a deterministic signal can be described exactly by a formula, for example the sinusoid

$$x(t) = A\cos(2\pi f_0 t + \phi_0),$$

where $A$ is the amplitude, $f_0$ is the frequency in cycles per second (or hertz), and $\phi_0$ is the phase in radians.
If something is deterministic, that means you have all of the data necessary to determine the outcome with 100% certainty.
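As a minimal sketch (the function names are illustrative, not from the notes), both examples above are deterministic: the same input always yields the same output.

```python
def celsius_to_kelvin(celsius):
    # Deterministic: the formula involves no randomness.
    return celsius + 273.15

def balance_after_one_year(deposit, rate):
    # Deterministic: the deposit and interest rate fix the outcome exactly.
    return deposit * (1.0 + rate)

print(celsius_to_kelvin(0.0))                # 273.15
print(balance_after_one_year(1000.0, 0.05))  # deposit grown by 5%
```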
2 Stochastic/Random Process
A stochastic process, also called a random process, cannot be described exactly by a deterministic mathematical equation; it is modeled in probabilistic terms. For example, rolling a die is a random process. Stochastic processes generate random variables that can be described by the following statistical properties.
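A quick sketch of a random process (the sample size and seed are illustrative): no individual die roll can be predicted, but the relative frequency of each face settles near the probability 1/6.

```python
import random

random.seed(0)  # fixed seed only so the experiment is repeatable

# Roll a fair six-sided die many times.
rolls = [random.randint(1, 6) for _ in range(60000)]

# Each face appears with relative frequency close to 1/6.
for face in range(1, 7):
    freq = rolls.count(face) / len(rolls)
    print(face, round(freq, 3))
```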
2.1 Linearity

The expectation operator is linear:

$$E\{a\,x[n] + b\,y[m]\} = a\,E\{x[n]\} + b\,E\{y[m]\}$$

In general, however,

$$E\{x[m]\,y[n]\} \neq E\{x[m]\}\,E\{y[n]\}$$

unless $\{x[m]\}$ and $\{y[n]\}$ are independent random processes. Linearity means we do not need to know the probability-density-function (pdf) of $\{y[n]\}$ to find the expected value of a linear combination such as $a\,x[n] + b\,y[m]$.
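Linearity of expectation can be checked empirically with sample means; this sketch (the distributions and variable names are illustrative) estimates both sides of the identity.

```python
import random

random.seed(1)
N = 100_000
a, b = 2.0, -3.0

x = [random.gauss(1.0, 1.0) for _ in range(N)]    # E{x} = 1
y = [random.uniform(0.0, 2.0) for _ in range(N)]  # E{y} = 1

def mean(s):
    return sum(s) / len(s)

lhs = mean([a * xi + b * yi for xi, yi in zip(x, y)])  # E{ax + by}
rhs = a * mean(x) + b * mean(y)                        # aE{x} + bE{y}

# The two sample estimates agree up to floating-point rounding,
# even though x and y have different distributions.
print(lhs, rhs)
```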
2.2 Convolution
Application-1:

[Figure: block diagram. The input x(n) drives both an unknown channel, whose output is the desired signal d(n), and an FIR filter with taps h(0), h(1), ..., h(L-1), whose output is y(n); a summing junction forms the error e(n) = d(n) - y(n).]
$$y(n) = \sum_{l=0}^{L-1} h(l)\,x(n-l)$$

$$e(n) = d(n) - \sum_{l=0}^{L-1} h(l)\,x(n-l)$$
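These two definitions translate directly to code. A minimal pure-Python sketch (function names are mine; $x(n-l)$ is taken as zero for $n-l < 0$):

```python
def fir_output(h, x, n):
    # y(n) = sum_{l=0}^{L-1} h(l) x(n-l), with x = 0 before time 0
    return sum(h[l] * x[n - l] for l in range(len(h)) if n - l >= 0)

def error(d, h, x, n):
    # e(n) = d(n) - y(n)
    return d[n] - fir_output(h, x, n)

h = [0.5, 0.25]        # filter taps h(0), h(1)
x = [1.0, 2.0, 3.0]    # input samples
d = [0.5, 1.25, 2.0]   # desired response, chosen equal to the filter output

print([error(d, h, x, n) for n in range(3)])  # [0.0, 0.0, 0.0]
```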
Expanding the mean-squared error:

$$E\{|e(n)|^2\} = E\left\{\left(d(n)-\sum_{l=0}^{L-1}h(l)x(n-l)\right)\left(d^*(n)-\sum_{m=0}^{L-1}h^*(m)x^*(n-m)\right)\right\}$$

$$= E\left\{d(n)d^*(n)-\sum_{l=0}^{L-1}h(l)x(n-l)d^*(n)-\sum_{m=0}^{L-1}h^*(m)x^*(n-m)d(n)+\sum_{l=0}^{L-1}\sum_{m=0}^{L-1}h(l)h^*(m)x(n-l)x^*(n-m)\right\}$$

Differentiating the cost function $J = E\{|e(n)|^2\}$ with respect to $h^*(i)$:

$$\frac{\partial J}{\partial h^*(i)} = E\left\{-x^*(n-i)d(n)+\sum_{l=0}^{L-1}h(l)x(n-l)x^*(n-i)\right\}$$

$$= E\left\{-x^*(n)d(n+i)+\sum_{l=0}^{L-1}h(l)x^*(n-i)x(n-l)\right\}$$

$$= E\left\{-x^*(n)d(n+i)+\sum_{l=0}^{L-1}h(l)x^*(n)x(n-l+i)\right\}$$

where the time shifts use the stationarity of the processes. Setting the derivative to zero gives

$$E\{x^*(n)d(n+i)\} = \sum_{l=0}^{L-1}h(l)\,E\{x(n+(i-l))\,x^*(n)\}$$

$$r_{xd}(i) = \sum_{l=0}^{L-1}h(l)\,r_{xx}(i-l) \qquad (3)$$
The equations (3), for $i = 0, 1, \ldots, N-1$, can be written in vector form as

$$\begin{bmatrix} r_{xd}(0) \\ r_{xd}(1) \\ \vdots \\ r_{xd}(N-1) \end{bmatrix} = \begin{bmatrix} r_{xx}(0) & r_{xx}(-1) & \cdots & r_{xx}(-(N-1)) \\ r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(-N+2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}(N-1) & r_{xx}(N-2) & \cdots & r_{xx}(0) \end{bmatrix} \begin{bmatrix} h(0) \\ h(1) \\ \vdots \\ h(N-1) \end{bmatrix} \qquad (4)$$
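Equation (4) can be solved numerically once the correlations are estimated from data. A sketch for $N = 2$ (the channel taps, sample size, and all names are illustrative; sample averages stand in for the expectations, and 2x2 elimination stands in for a general Toeplitz solver):

```python
import random

random.seed(2)
N = 2                  # number of filter taps to identify
h_true = [0.8, -0.3]   # the "unknown" channel (illustrative)

# White input through the unknown channel gives the desired signal d(n).
x = [random.gauss(0.0, 1.0) for _ in range(50_000)]
d = [sum(h_true[l] * x[n - l] for l in range(N) if n - l >= 0)
     for n in range(len(x))]

def r_xx(k):
    # sample autocorrelation; for real x, r_xx(-k) = r_xx(k)
    k = abs(k)
    return sum(x[n + k] * x[n] for n in range(len(x) - k)) / (len(x) - k)

def r_xd(i):
    # sample cross-correlation r_xd(i) = E{x(n) d(n+i)}
    return sum(x[n] * d[n + i] for n in range(len(x) - i)) / (len(x) - i)

# Build and solve the 2x2 system of equation (4) by elimination.
A = [[r_xx(0), r_xx(-1)],
     [r_xx(1), r_xx(0)]]
b = [r_xd(0), r_xd(1)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
h_est = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
         (b[1] * A[0][0] - b[0] * A[1][0]) / det]
print(h_est)  # close to h_true = [0.8, -0.3]
```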
Application-2:
$$y(n) = \sum_{l=0}^{L-1}w^*(l)\,u(n-l)$$

$$e(n) = d(n) - \sum_{l=0}^{L-1}w^*(l)\,u(n-l)$$

$$E\{|e(n)|^2\} = E\left\{\left(d(n)-\sum_{l=0}^{L-1}w^*(l)u(n-l)\right)\left(d^*(n)-\sum_{m=0}^{L-1}w(m)u^*(n-m)\right)\right\}$$

$$= E\left\{d(n)d^*(n)-\sum_{l=0}^{L-1}w^*(l)u(n-l)d^*(n)-\sum_{m=0}^{L-1}w(m)u^*(n-m)d(n)+\sum_{l=0}^{L-1}\sum_{m=0}^{L-1}w^*(l)u(n-l)u^*(n-m)w(m)\right\}$$

Differentiating with respect to $w^*(i)$:

$$\nabla_i J = E\left\{-u(n-i)d^*(n)+\sum_{m=0}^{L-1}u(n-i)u^*(n-m)w(m)\right\}$$

$$= -E\left\{u(n-i)\left(d^*(n)-\sum_{m=0}^{L-1}u^*(n-m)w(m)\right)\right\}$$

$$= -E\{u(n-i)\,e^*(n)\} \qquad (5)$$
where $e_o(n)$ denotes the value of the error at the optimum values of $w(i)$. Expression (5) states that the necessary and sufficient condition for the cost function $J$ to attain its minimum value is that the corresponding estimation error $e_o(n)$ be orthogonal to each input sample that enters into the estimation of the desired response at time $n$. This is called the principle of orthogonality.
At the optimum point,

$$E\left\{u(n-i)\left(d^*(n)-\sum_{m=0}^{L-1}u^*(n-m)w_o(m)\right)\right\} = 0$$

$$E\left\{u(n-i)d^*(n)-\sum_{m=0}^{L-1}u(n-i)u^*(n-m)w_o(m)\right\} = 0$$

$$E\{d^*(n)u(n-i)\} = \sum_{m=0}^{L-1}E\{u^*(n-m)u(n-i)\}\,w_o(m)$$

$$r_{du}(i) = \sum_{m=0}^{L-1}r_{uu}(m-i)\,w_o(m) \qquad (6)$$
This set of equations is called the Wiener-Hopf equations; they are used to find the optimum filter coefficients. In vector form, for $i = 0, 1, \ldots, N-1$, they can be written as
$$\begin{bmatrix} r_{du}(0) \\ r_{du}(1) \\ \vdots \\ r_{du}(N-1) \end{bmatrix} = \begin{bmatrix} r_{uu}(0) & r_{uu}(1) & \cdots & r_{uu}(N-1) \\ r_{uu}(-1) & r_{uu}(0) & \cdots & r_{uu}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{uu}(-(N-1)) & r_{uu}(-(N-2)) & \cdots & r_{uu}(0) \end{bmatrix} \begin{bmatrix} w_o(0) \\ w_o(1) \\ \vdots \\ w_o(N-1) \end{bmatrix} \qquad (7)$$
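A numerical sketch tying (5), (6), and (7) together (the noisy-channel setup, sample size, and all names are illustrative): solve the 2x2 Wiener-Hopf system from sample correlations, then check that the resulting error is approximately orthogonal to the input samples, as the principle of orthogonality requires.

```python
import random

random.seed(3)
N = 2
# Desired signal: a filtered version of u(n) plus observation noise.
u = [random.gauss(0.0, 1.0) for _ in range(50_000)]
d = [0.6 * u[n] - 0.2 * (u[n - 1] if n > 0 else 0.0) + random.gauss(0.0, 0.1)
     for n in range(len(u))]

def r_uu(k):
    # sample autocorrelation E{u(n) u(n-k)}; symmetric for real u
    k = abs(k)
    return sum(u[n] * u[n - k] for n in range(k, len(u))) / (len(u) - k)

def r_du(i):
    # sample cross-correlation r_du(i) = E{d(n) u(n-i)}
    return sum(d[n] * u[n - i] for n in range(i, len(u))) / (len(u) - i)

# Equation (7) for N = 2, solved by elimination (real signals, so w* = w).
A = [[r_uu(0), r_uu(1)],
     [r_uu(-1), r_uu(0)]]
b = [r_du(0), r_du(1)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (b[1] * A[0][0] - b[0] * A[1][0]) / det]
print(w)  # close to [0.6, -0.2]

# Principle of orthogonality: E{u(n-i) e_o(n)} is near 0 for each tap i.
e = [d[n] - sum(w[l] * (u[n - l] if n - l >= 0 else 0.0) for l in range(N))
     for n in range(len(u))]
for i in range(N):
    c = sum(u[n - i] * e[n] for n in range(i, len(u))) / (len(u) - i)
    print(i, round(c, 4))  # approximately 0.0
```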