Figure: Linear equalization. (a) The data symbols a_k pass through the channel H(z), noise n_k is added, and a linear equalizer C(z) followed by a quantizer recovers the symbols e_k. (b) Equivalent model with C(z) = 1/H(z).
The power spectrum of the error can be written as

S_e = S_a |H(z)C(z) - 1|^2 + S_n |C(z)|^2

where
S_a : power spectrum of the data symbols
S_n : power spectrum of the noise process

If C(z) = 1/H(z), the ISI contribution to the error vanishes.
If H(z) has a spectral null, i.e., H(z) = 0 at any frequency within the bandwidth of a_k, the noise power at the equalizer output is infinite.
Even without a spectral null, if some frequencies in H(z) are greatly attenuated, then the equalizer will greatly enhance the noise power.
The Decision Feedback Equalizer (DFE) is an effective means for equalizing channels that exhibit spectral nulls.
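As a rough numerical check of this noise-enhancement argument, the sketch below (values illustrative; the channel taps are the ones used later in the simulation section) evaluates |C(e^{jw})| = 1/|H(e^{jw})| and the resulting white-noise power gain of a zero-forcing linear equalizer:

```python
import numpy as np

# Channel from the simulation section: h = [0.304, 0.903, 0.304].
# A zero-forcing linear equalizer C(z) = 1/H(z) amplifies noise
# wherever |H(e^{jw})| is small.
h = np.array([0.304, 0.903, 0.304])

w = np.linspace(0, np.pi, 512)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(len(h)))) for wk in w])
C_mag = 1.0 / np.abs(H)           # |C(e^{jw})| = 1/|H(e^{jw})|

# For white noise, output noise power is proportional to the mean of |C|^2.
noise_gain = np.mean(C_mag ** 2)
print(f"max |C| = {C_mag.max():.2f}, white-noise power gain = {noise_gain:.2f}")
```

The equalizer gain peaks at the band edge, where this channel is weakest, so the noise power at the equalizer output grows accordingly.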
Figure: Decision feedback equalizer. The feedforward filter (FFF) acts on the channel output, a quantizer makes the decisions, and a feedback filter acts on past decisions; the filter outputs are y_f(n) and y_b(n).
The DFE employs a feedforward filter (FFF) to equalize the anticausal part of the channel impulse response.
The channel-FFF cascade forms a causal system with impulse response 1, h_1, h_2, .... The feedback filter (FBF), with w_1 = h_1, w_2 = h_2, ..., works on past decisions (assumed correct).
The residual ISI at the FFF output y_f(n) is cancelled by subtracting the FBF output y_b(n) from y_f(n).
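The cancellation idea can be sketched as follows; the post-cursor taps h1 = 0.5, h2 = 0.25 are illustrative values of mine, and past decisions are assumed correct as stated above:

```python
import numpy as np

# Toy post-cursor model: the channel-FFF cascade is assumed already
# equalized to the causal impulse response [1, h1, h2].
h_tail = np.array([0.5, 0.25])           # h1, h2 (illustrative)
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=200)    # BPSK symbols

# FFF output: current symbol plus post-cursor ISI from past symbols.
y_f = a.copy()
for k, hk in enumerate(h_tail, start=1):
    y_f[k:] += hk * a[:-k]

# FBF with w_k = h_k operates on past decisions and its output is
# subtracted, cancelling the residual ISI exactly (no noise here).
d = np.zeros_like(a)
for n in range(len(a)):
    y_b = sum(h_tail[k - 1] * d[n - k] for k in (1, 2) if n - k >= 0)
    d[n] = np.sign(y_f[n] - y_b)

print("symbol errors:", np.count_nonzero(d != a))  # prints: symbol errors: 0
```

By induction, once the first decisions are correct the FBF removes all post-cursor ISI, so every decision equals the transmitted symbol in this noiseless sketch.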
In most communication systems the variation of the channel characteristics over time is significant; the equalizer should be able to adapt itself to combat the ISI.
In such cases an Adaptive DFE (ADFE) is used.
The FFF and FBF coefficients are trained by the LMS algorithm.
LMS Algorithm
Self-learning: filter coefficients adapt in response to a training signal.
Figure: Adaptive filter W(z) with input x(n), output y(n), desired response d(n) and error e(n).
Filter update: Least Mean Squares (LMS) algorithm.
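A minimal LMS sketch (illustrative filter taps and step size, not from the source) showing the update w(n+1) = w(n) + mu e(n) x(n) identifying an unknown FIR response:

```python
import numpy as np

# LMS identification of an unknown FIR filter (illustrative values):
# e(n) = d(n) - w(n)^t x_vec(n);  w(n+1) = w(n) + mu * e(n) * x_vec(n).
rng = np.random.default_rng(1)
w_true = np.array([0.3, -0.5, 0.8])     # unknown system
L, mu, N = len(w_true), 0.05, 2000

x = rng.standard_normal(N)
w = np.zeros(L)
for n in range(L - 1, N):
    x_vec = x[n - L + 1:n + 1][::-1]    # [x(n), x(n-1), x(n-2)]
    d = w_true @ x_vec                  # desired (training) signal
    e = d - w @ x_vec                   # output error
    w = w + mu * e * x_vec              # LMS update

print("estimated weights:", np.round(w, 3))
```

With a white training input and no noise, the estimate converges to the unknown taps; the same loop structure carries over to the FFF/FBF updates of the ADFE.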
Figure: Basic ADFE. The FFF acts on x(n) and the FBF (through a delay z^{-1}) on the decisions; a switch selects training or decision-directed operation; signals d(n), y_f(n), y_b(n), y(n), v(n) and e(n).

FFF weight vector: w^f(n) = [w_0^f(n), ..., w_p^f(n)]^t
FBF weight vector: w^b(n) = [w_1^b(n), ..., w_q^b(n)]^t
A common problem faced by the ADFE is that increasing the data rate lengthens the channel impulse response, which increases the order of the FFF and FBF, which in turn increases the complexity and makes real-time operation difficult.
Filter coefficients are updated from block to block and held constant within a block.
The main operations (filtering, output error computation and weight updating) then offer substantial computational savings compared with algorithms that update the filter coefficients on a sample-by-sample basis.
Block Adaptive filter
y(n) = w^t(j) x(n) : filter output at the n-th index,
where x(n) = [x(n), x(n-1), ..., x(n-L+1)]^t,
n = jP + r, r = 0, 1, ..., P-1.

e(n) = d(n) - y(n) : output error at the n-th index,
where d(n) : desired response, given during training.

Filter coefficients are updated to minimize E[e^2(n)] progressively with n. Update relation (BLMS):

w(j+1) = w(j) + mu * sum_{r=0}^{P-1} x(jP+r) e(jP+r)
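The block update above can be sketched as follows (illustrative filter and sizes; the weights stay fixed across each block of P samples and one accumulated update is applied per block):

```python
import numpy as np

# Block LMS (BLMS) sketch: weights are held constant over each block of
# P samples and updated once per block with the accumulated gradient:
# w(j+1) = w(j) + mu * sum_{r=0}^{P-1} x(jP+r) e(jP+r).
rng = np.random.default_rng(2)
w_true = np.array([0.4, 0.2, -0.1])       # unknown system (illustrative)
L, P, mu, N = len(w_true), 25, 0.01, 4000

x = rng.standard_normal(N)
w = np.zeros(L)
for j in range(1, N // P):
    grad = np.zeros(L)
    for r in range(P):
        n = j * P + r
        x_vec = x[n - L + 1:n + 1][::-1]  # [x(n), x(n-1), x(n-2)]
        e = w_true @ x_vec - w @ x_vec    # e(n) = d(n) - y(n)
        grad += e * x_vec
    w = w + mu * grad                     # one update per block

print("BLMS estimate:", np.round(w, 3))
```

Note that mu here respects the bound 0 < mu < 2/(P tr[R]) stated below, since tr[R] is about L for unit-variance white input.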
A fast implementation via the FFT is possible to produce y(jP+r), r = 0, 1, ..., P-1, and w(j+1).
mu : step size; for convergence, 0 < mu < 2 / (P tr[R])
R : E[x(n) x^t(n)], i.e., the input correlation matrix.
Figure: Fast implementation of the proposed BLMS algorithm. S/P conversion into sub-blocks of size M = L+P-1; M-point FFTs of the input x(n) and of the zero-padded weight vector w(j+1) (with L-1 zeros added at the front); pointwise products; M-point IFFTs, keeping the last P terms for the output y(n) and setting the last P-1 elements to zero in the weight-update path; P/S conversion of the output, with e(n) formed from d(n).
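A small overlap-save sketch (illustrative sizes, assuming an M-point FFT with M = L+P-1) confirming that the FFT route reproduces direct time-domain filtering:

```python
import numpy as np

# Overlap-save: an M-point FFT (M = L + P - 1) yields P valid linear-
# convolution outputs per block; the first L-1 circular outputs are
# discarded. Sizes and signals here are illustrative.
L, P = 8, 25             # filter length, output block size
M = L + P - 1            # FFT size
rng = np.random.default_rng(3)
h = rng.standard_normal(L)
x = rng.standard_normal(10 * P)

H = np.fft.fft(h, M)
y = np.zeros(len(x))
buf = np.zeros(M)        # holds L-1 old samples + P new samples
for j in range(len(x) // P):
    buf = np.concatenate([buf[P:], x[j * P:(j + 1) * P]])
    Y = np.fft.ifft(np.fft.fft(buf) * H).real
    y[j * P:(j + 1) * P] = Y[L - 1:]     # last P terms are valid

y_ref = np.convolve(x, h)[:len(x)]       # direct time-domain reference
print("max abs error:", np.abs(y - y_ref).max())
```

The two results agree to machine precision, which is why the filtering and gradient-accumulation steps of the BLMS can be moved to the frequency domain.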
Block ADFE Equations:

y_Q(jQ+Q-1) = X_{Q,M}(j) w_M^f + D_{Q,L}(j) w_L^b,
d_Q(jQ+Q-1) = f{y_Q(jQ+Q-1)},  where f{.} denotes the decision operation,
e_Q(jQ+Q-1) = d_Q(jQ+Q-1) - y_Q(jQ+Q-1),
w_M^f(j+1) = w_M^f(j) + mu X_{Q,M}^H(j) e_Q(jQ+Q-1),
w_L^b(j+1) = w_L^b(j) + mu D_{Q,L}^H(j) e_Q(jQ+Q-1),

where
          [ x(jQ+Q-1)  .....  x(jQ+Q-M) ]
X_{Q,M} = [     .                 .     ]
          [ x(jQ)      .....  x(jQ-M+1) ]
Implementation of modified block LMS based ADFE
D_{Q,L+Q-1} = [D^1_{Q,Q-1}  D^2_{Q,L}], with

              [ d(jQ+Q-2)  .....  d(jQ)     ]
D^1_{Q,Q-1} = [     .                .      ]
              [ d(jQ-1)    .....  d(jQ-Q+1) ]

            [ d(jQ-1)  .....  d(jQ-L)     ]
D^2_{Q,L} = [     .               .       ]
            [ d(jQ-Q)  .....  d(jQ-L-Q+1) ]
Equalization and weight updating:
This consists of 3 main computations, namely,
(a) FFF output:
-- FFF output y_Q^f(jQ+Q-1) = X_{Q,M}(j) w_M^f(j)
-- Using the overlap-and-save method:
y_Q^f(jQ+Q-1) = [F_S^{-1} X_S^d W_S^f]_{last Q}, where
S = Q+M-1,
W_S^f = F_S [w_M^f(j)^t  0^t]^t, and
X_S^d = diag(F_S [x(jQ+Q-S) ... x(jQ+Q-1)]^t).
(b) FBF output:
Unlike the FFF output, the FBF output y_Q^b(jQ+Q-1) = D_{Q,L+Q-1}(j) w^b contains unknown decisions, given by d(k), k = jQ, ..., jQ+Q-2.
To avoid this causality problem, the computation of y_Q^b(jQ+Q-1) is systematically decomposed into two parts: one containing past and known decisions, and the other involving purely the current and thus unknown decisions.
y_Q^b(jQ+Q-1) = D^2_{Q,L}(j) w_L^b + W^b_{Q,2Q-1}(j) d_{2Q-1}(jQ+Q-1), where

                  [ 0  w_1^b(j) ... w_{Q-1}^b(j)  0            ...  0 ]
W^b_{Q,2Q-1}(j) = [ 0  0  w_1^b(j) ...            w_{Q-1}^b(j) ...  0 ]
                  [ .                     .                         . ]
                  [ 0  0  ...  0  w_1^b(j)        ...  w_{Q-1}^b(j)  ]

with w^b(j) = [w_1^b(j) w_2^b(j) ... w_L^b(j)].

Partitioning W^b_{Q,2Q-1}(j) = [W^1_{Q,Q}(j)  W^2_{Q,Q-1}(j)], the FBF output can be written as,
-- y_Q^b(jQ+Q-1) = D^2_{Q,L}(j) w_L^b + W^1_{Q,Q}(j) d_Q(jQ+Q-1) + W^2_{Q,Q-1}(j) d_{Q-1}(jQ), where
d_Q(jQ+Q-1) = [d(jQ+Q-1) ... d(jQ)]^t contains the unknown decisions, and
d_{Q-1}(jQ) = [d(jQ-1) ... d(jQ-Q+1)]^t contains the Q-1 known decisions from previous sub-blocks.
-- Let the FB2 output be y_2^b(jQ+Q-1) = D^2_{Q,L}(j) w_L^b,
y_{1,1}^b(jQ+Q-1) = W^1_{Q,Q}(j) d_Q(jQ+Q-1),
y_{1,2}^b(jQ+Q-1) = W^2_{Q,Q-1}(j) d_{Q-1}(jQ), and
-- the FB1 output y_1^b(jQ+Q-1) = y_{1,1}^b(jQ+Q-1) + y_{1,2}^b(jQ+Q-1).
Let y_Q^c(jQ+Q-1) = y_Q^f(jQ+Q-1) + y_2^b(jQ+Q-1) + y_{1,2}^b(jQ+Q-1).
Then y_Q(jQ+Q-1) = y_Q^c(jQ+Q-1) + y_{1,1}^b(jQ+Q-1).
y_{1,1}^b(jQ+Q-1) involves unknown decisions.
An iterative procedure is suggested by Berberidis, by which:
First compute y_{1,1}^b(jQ+Q-1) using an appropriately chosen initial value for d_Q(jQ+Q-1).
Then evaluate y_Q(jQ+Q-1), which is then used to compute d_Q(jQ+Q-1) using d_Q(jQ+Q-1) = f{y_Q(jQ+Q-1)}.
This is again used to compute y_{1,1}^b(jQ+Q-1) and then y_Q(jQ+Q-1), and the iteration is carried out further.
It is shown that this iteration converges to the correct vector y_Q(jQ+Q-1) in Q or fewer steps for any choice of the initial value.
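The convergence claim can be illustrated with a toy fixed-point iteration (numbers are illustrative assumptions of mine; sgn plays the role of the decision function f{.}, and the matrix acting on the unknown decisions is strictly triangular because each decision feeds only other outputs, never its own):

```python
import numpy as np

# Toy version of the iteration: y = y_c + W1 @ d, d = sgn(y), with W1
# strictly lower triangular, so the fixed point is reached in at most
# Q passes regardless of the initial decision vector (here d = 0).
Q = 4
rng = np.random.default_rng(4)
W1 = np.tril(0.2 * rng.standard_normal((Q, Q)), k=-1)  # strictly triangular
y_c = rng.choice([-1.0, 1.0], size=Q) * 1.5            # known part

d = np.zeros(Q)                      # zero initial decisions
for it in range(Q):
    y = y_c + W1 @ d
    d_new = np.sign(y)
    if np.array_equal(d_new, d):     # fixed point reached
        break
    d = d_new

print("decisions:", d, "after", it + 1, "iterations")
```

The first component never depends on the unknowns, so it is correct after one pass; the second after two; and so on, giving convergence in at most Q passes.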
d
1
,
zero vector (IS1).
In IS2 the initial value o ( 1) is chosen by setting
( 1) ( 1) and solving or ( 1)
using [ ( ) ] ( 1) ( 1).
Q
Q Q Q
c
Q Q Q Q Q
jQ Q
jQ Q jQ Q jQ Q
W j I jQ Q y jQ Q
y +
+ = + +
+ = +
d
d y d
d
The error vector is now computed as
e_Q(jQ+Q-1) = d_Q(jQ+Q-1) - y_Q(jQ+Q-1)

(c) Weight updating:
w_M^f(j+1) = w_M^f(j) + mu * sum_{r=0}^{Q-1} x_M(jQ+r) e(jQ+r)
w_L^b(j+1) = w_L^b(j) + mu * sum_{r=0}^{Q-1} d_L(jQ+r) e(jQ+r)
The proposed realizations are about four times faster than a sample-based realization for moderately large values of L, M and Q.
Simulation Studies
The channel is modeled as a second-order FIR filter with impulse response 0.304, 0.903, 0.304.
The channel noise is modeled as AWGN. The transmitted symbols are chosen from an alphabet of 8 equispaced, equiprobable discrete amplitude levels.
The transmitted signal power was taken to be 6 dB.
To these symbols additive white Gaussian noise of variance 0.1 is added. The lengths of the FFF and the FBF were chosen as p = 3 and q = 3.
Step size mu = 0.001.
The ADFE was first simulated by the proposed scheme, choosing the block length as 25.
The ADFE was operated in training mode for the first 100 iterations and then switched over to the decision-directed mode for the subsequent 500 iterations.
The FFF and FBF weights are updated separately using the weight updating equations.
The corresponding learning curve is obtained by plotting the MSE versus the number of iterations.
Next, the MSE curves were plotted for different input block lengths of N = 10, 25, 50 and 100.
Increasing the block length gives a larger spread in the magnitudes of the data samples in the block, and more pronounced quantization noise effects via block formatting; the steady-state MSE therefore increases with N.
The time-varying step size is given by
mu(n) = alpha / (eps + ||x(n)||^2), where
The tap input vector is given by
X(n) = [x(n), x(n-1), ..., x(n-L+1)]^t
The error signal is given by
e(n) = d(n) - w^t(n) X(n)
The filter weight vector is given by
w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^t
Here the adaptation constant alpha lies within the range 0 to 2 for convergence, and eps is an appropriate positive number introduced to avoid divide-by-zero-like situations, which may arise when the norm of the input signal becomes very small.
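A minimal NLMS sketch (illustrative values for alpha and eps, not from the source) using this normalized step size:

```python
import numpy as np

# Normalized LMS: mu(n) = alpha / (eps + ||x_vec(n)||^2); eps > 0
# guards against division by near-zero input norms.
rng = np.random.default_rng(5)
w_true = np.array([0.2, 0.7, -0.4])     # unknown system (illustrative)
L, alpha, eps, N = len(w_true), 0.5, 1e-6, 1500

x = rng.standard_normal(N)
w = np.zeros(L)
for n in range(L - 1, N):
    x_vec = x[n - L + 1:n + 1][::-1]
    e = w_true @ x_vec - w @ x_vec
    mu_n = alpha / (eps + x_vec @ x_vec)  # time-varying step size
    w = w + mu_n * e * x_vec

print("NLMS estimate:", np.round(w, 3))
```

Because the step is scaled by the instantaneous input power, the convergence speed is far less sensitive to the input level than with a fixed-step LMS.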
The weight updating equation for the ADFE using the LMS algorithm can be modified and written as
W(n+1) = W(n) + mu(n) phi(n) e(n)
where
phi(n) = [x(n), ..., x(n-p+1), v(n-1), ..., v(n-q)]^t
W^f(n) = [w_0^f(n), w_1^f(n), ..., w_{p-1}^f(n)]^t is the p-th order FFF coefficient vector,
W^b(n) = [w_1^b(n), w_2^b(n), ..., w_q^b(n)]^t is the q-th order FBF coefficient vector, and
W(n) = [W^{f t}(n)  W^{b t}(n)]^t.
The signal v(n) is given by the desired response d(n) during the initial training phase and by the quantized output y~(n) during the subsequent decision-directed phase.
The overall output y(n) is given by
y(n) = W^t(n) phi(n)
The output error is
e(n) = v(n) - y(n)
The feedforward filter output is
y^f(n) = w^{f t}(n) x(n)
The feedback filter output is
y^b(n) = w^{b t}(n) v(n-1)
Now the overall output y(n), which is the input to the decision device, is
y(n) = y^f(n) - y^b(n)
1) Initially, transmit the known sequence.
2) Assume initially that both the FFF and FBF weights are zero.
3) Find the output, which is the sum of the outputs of the FFF and FBF.
4) Estimate the tap weight vector at each time instant using the normalized modified block LMS algorithm.
5) Update the filter coefficients.
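Steps 1)-5) can be sketched end to end as below; the channel taps and filter orders follow the simulation section, while the BPSK symbols, the NLMS-style step size and the noise level are simplifying assumptions of mine:

```python
import numpy as np

# ADFE training sketch following steps 1)-5). Channel taps and orders
# (p = q = 3) are from the simulation section; BPSK symbols, alpha, eps
# and the noise level are illustrative assumptions.
rng = np.random.default_rng(6)
h = np.array([0.304, 0.903, 0.304])
p, q, alpha, eps = 3, 3, 0.5, 1e-6

a = rng.choice([-1.0, 1.0], size=3000)            # step 1: known sequence
r = np.convolve(a, h)[:len(a)] + 0.05 * rng.standard_normal(len(a))

wf, wb = np.zeros(p), np.zeros(q)                 # step 2: zero weights
err = []
for n in range(q + 2, len(a)):
    x_vec = r[n - p + 1:n + 1][::-1]              # FFF input [r(n)..r(n-2)]
    v_vec = a[n - q - 1:n - 1][::-1]              # FBF input: past symbols
    y = wf @ x_vec - wb @ v_vec                   # step 3: overall output
    e = a[n - 1] - y                              # one-symbol decision delay
    g = alpha / (eps + x_vec @ x_vec + v_vec @ v_vec)
    wf += g * e * x_vec                           # steps 4-5: update both
    wb -= g * e * v_vec
    err.append(e * e)

print("MSE over first/last 100 samples:",
      round(float(np.mean(err[:100])), 3), round(float(np.mean(err[-100:])), 3))
```

During training the FBF is fed the true past symbols, matching the "past decisions assumed correct" assumption; the squared error falls from near 1 to a small steady-state value.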
Computational Complexity
Number of computations required for step-size evaluation:
To evaluate the time-varying step size recursively, the proposed scheme requires
-- 2 MAC operations to compute ||x(n)||^2
-- 1 addition for eps + ||x(n)||^2
-- 1 division for alpha / (eps + ||x(n)||^2)
at each index n.
Number of computations required for updating the weight vector W(n) to W(n+1):
This requires (L+1) MAC operations. Of these, one MAC operation is needed to compute mu(n) e(n), and a total of L MAC operations are required to calculate W(n+1).
Number of computations required for evaluating the filter output:
To compute the overall output, a total of L MAC operations are required.
Parameter         MAC    Addition   Division
Step size          2        1          1
Weight updating   L+1      Nil        Nil
Filter output      L       Nil        Nil

Table: Number of operations required per iteration for evaluating the step size, weight updating and filter output using the NLMS algorithm.
Figure: Learning curves (MSE in dB versus number of iterations) for the LMS and Normalized LMS based ADFE.
Simulation Results
Consider mu = 0.001.
The learning curve of the proposed ADFE shows good convergence behaviour after 50 iterations, whereas it takes more than 100 iterations for the LMS based ADFE. The steady-state MSE is also within the acceptable range.
Realization of Signed Modified Block LMS based ADFE
There are three signed versions of the LMS algorithm, namely the
-- signed regressor LMS,
-- sign-sign LMS, and
-- sign LMS algorithms.
These algorithms provide less computational complexity compared to the basic LMS algorithm.
The proposed schemes are particularly suitable for implementation of the ADFE with less computational complexity.
The signed LMS algorithms, which make use of the signum (polarity) of either the error or the input signal, or both, have been derived from the LMS algorithm from the point of view of simplicity in implementation.
In all these algorithms there is a significant reduction in computing time, mainly pertaining to the time required for multiplications.
In the sign-sign algorithm, the signum of the input is used in addition to the signum of the error signal, thus requiring only a one-bit multiplication or a logical EX-OR function.
In the signed regressor LMS algorithm (SRLMS), the polarity of the input signal is used to adjust the tap weights.
The weight updating equations:
Signed-regressor LMS algorithm: w(n+1) = w(n) + mu sgn{x(n)} e(n)
Sign-sign LMS algorithm: w(n+1) = w(n) + mu sgn{x(n)} sgn{e(n)}
Sign LMS algorithm: w(n+1) = w(n) + mu x(n) sgn{e(n)}
where sgn{.} is the well-known signum function.
The error signal is given by e(n) = d(n) - y(n).
The sequence d(n) is called the desired response, available during the initial training period, and mu is an appropriate step size to be chosen as 0 < mu < 2/tr(R) for the convergence of the algorithm.
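The three updates can be compared on a common toy identification problem (illustrative filter, step size and run length, chosen by me):

```python
import numpy as np

# The three signed LMS updates on the same problem; the sign operations
# replace full multiplications in the weight update.
rng = np.random.default_rng(7)
w_true = np.array([0.5, -0.3, 0.1])     # unknown system (illustrative)
L, mu, N = len(w_true), 0.005, 5000
x = rng.standard_normal(N)

def run(update):
    w = np.zeros(L)
    for n in range(L - 1, N):
        xv = x[n - L + 1:n + 1][::-1]
        e = w_true @ xv - w @ xv        # e(n) = d(n) - y(n)
        w = w + update(xv, e)
    return w

srlms = run(lambda xv, e: mu * np.sign(xv) * e)           # signed-regressor
sslms = run(lambda xv, e: mu * np.sign(xv) * np.sign(e))  # sign-sign
slms  = run(lambda xv, e: mu * xv * np.sign(e))           # sign LMS
for name, w in [("SRLMS", srlms), ("SSLMS", sslms), ("SLMS", slms)]:
    print(name, np.round(w, 2))
```

All three converge to a neighbourhood of the true taps; the sign-sign and sign variants trade a small steady-state offset (on the order of the step size) for multiplier-free updates.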
Implementation Procedure
Initially, during the training mode, the known sequence d(n) is transmitted and both the FFF and FBF are trained by the appropriate sign-based algorithms.
Then the output y(n), which is the sum of both FFF and FBF outputs, is computed.
The error sequence e(n) is estimated and the filter coefficients are updated at each iteration.
Computational Complexity

S.No   Algorithm              Additions/Subtractions   Shifts   Multiplications
1      Sign LMS               L                        L        Nil
2      Signed-regressor LMS   L                        Nil      1
3      Sign-sign LMS          L                        Nil      Nil

Table 5.1: Number of additions/subtractions, shifts and multiplications required for weight updating using the sign, signed-regressor, and sign-sign LMS algorithms.
Figure: Learning curves for LMS and signed regressor LMS (SRLMS) based ADFE.
Figure 5.2: Learning curves for LMS and sign LMS (SLMS) based ADFE.
Figure 5.3: Learning curves for LMS and sign-sign LMS (SSLMS) based ADFE.
Figure: MSE plots for the signed regressor ADFE for block lengths N = 10, 25, 50, 100.
Figure: MSE plots for the sign ADFE for block lengths N = 10, 25, 50, 100.
Figure: MSE plots for the sign-sign ADFE for block lengths N = 10, 25, 50, 100.
The proposed schemes were simulated as before to study the effects of block formation of the equalizer coefficients on the performance of the sign-LMS based ADFE. For this, the same simulation model and environment as used earlier for the ADFE is considered.
The simulations were run for different block lengths (N = 10, 25, 50 and 100), allocating 8 bits to the weight vectors of the FFF and FBF and keeping the step size at 0.001. The simulation results for the LMS based ADFE and its three variants considered above are presented in the figures.
Realization of Normalized Signed Modified Block LMS based ADFE
Here the ADFE is implemented by combining the modified block LMS algorithm, the normalized LMS algorithm and the signed versions of the LMS algorithm.
The normalized signed regressor LMS algorithm (NSRLMS) is a counterpart of the NLMS algorithm, derived from the signed regressor LMS algorithm (SRLMS), where the normalizing factor for the SRLMS equals the sum of the absolute values of the input signal vector components.
The weight update equation of the NSRLMS algorithm can be obtained by modifying the weight update equation of the SRLMS algorithm and can be written as
W(n+1) = W(n) + (mu / ||x(n)||^2) sgn{X(n)} e(n)
Here the data vector X(n) is given by
X(n) = [x(n), x(n-1), ..., x(n-L+1)]^t
and sgn{X(n)} is given by
sgn{X(n)} = [sgn{x(n)}, sgn{x(n-1)}, ..., sgn{x(n-L+1)}]^t
The weight update equation of the normalized sign-sign LMS algorithm (NSSLMS) can be obtained by modifying the weight update equation of the SSLMS algorithm and can be written as
W(n+1) = W(n) + (mu / ||x(n)||^2) sgn{X(n)} sgn{e(n)}
Both the feedforward and feedback filter coefficients are trained by the weight update equations of the NSRLMS, NSSLMS and NSLMS algorithms. Initially the training is imparted by a pilot sequence (known transmitted sequence) during the initial training mode, and by the output decision during the subsequent decision-directed mode. The input to the FBF is d(n) during the initial training period and y~(n) during the subsequent decision-directed phase.
The feedforward filter output y^f(n) is
y^f(n) = w^{f t}(n) x(n)
where W^f(n) = [w_0^f(n), ..., w_p^f(n)]^t.
The feedback filter output y^b(n) is
y^b(n) = w^{b t}(n) v(n-1)
Now the overall output, which is the input to the decision device, is
y(n) = y^f(n) + y^b(n)
Computational Complexity:
For the L-th order FFF and FBF, to update the coefficients using the LMS algorithm, L multiplications and L additions are required. For the error e(n), one addition is required. For the product mu e(n), one multiplication is required. For the output y(n), L multiplications and L-1 additions are required. So, per output, a total of (2L+1) multiplications and 2L additions are required.
The NLMS algorithm needs one additional computed term, ||x(n)||^2. This extra computation involves only two squaring operations (two multiplications), one addition and one subtraction if implemented with a recursive structure.
In the case of the signed regressor LMS algorithm, only one multiplication is needed, for obtaining the product mu e(n).
In the case of the other two LMS algorithms [SSLMS, SLMS], no multiplications are required if mu is chosen as a power of two (mu = 2^{-l}), as this multiplication can then be efficiently implemented using shift registers.
S.No   Algorithm   Multiplications   Additions   Shifts
1      LMS         2L+1              2L          Nil
2      NLMS        2L+3              2L+2        Nil
3      NSRLMS      1                 2L+2        Nil
4      NSLMS       Nil               2L+2        2L+2
5      NSSLMS      Nil               2L+2        Nil

Table: Comparison of computational complexity for different LMS based algorithms.
It is observed that the sign-based algorithms are largely free from multiplication operations.
Results and Conclusions
The mean squared error curves are compared for ADFEs with the LMS, normalized sign LMS (NSLMS), normalized signed regressor LMS (NSRLMS) and normalized sign-sign LMS (NSSLMS) algorithms.
The ensemble averaging was performed over 100 independent trials of the experiment.
A step size mu = 0.001 is considered.
The number of iterations was taken as 400. For the first 100 samples the ADFE is in training mode, and it is in decision-directed mode for the next 300 samples.
Figure: Learning curves for LMS and normalized signed-regressor LMS based ADFE.
Figure: Learning curves for LMS and normalized sign LMS based ADFE.
Figure: Learning curves for LMS and normalized sign-sign LMS based ADFE.
Figure: Comparison of the bit error rate (BER) plot of the normalized signed regressor LMS (NSRLMS) based ADFE with the LMS, normalized LMS (NLMS) and sign LMS (SLMS) based ADFEs.
Figure: Comparison of the bit error rate (BER) plot of the normalized sign LMS (NSLMS) based ADFE with the LMS, normalized LMS (NLMS) and sign LMS (SLMS) based ADFEs.
Figure: Comparison of the bit error rate (BER) plot of the normalized sign-sign LMS (NSSLMS) based ADFE with the LMS, normalized LMS (NLMS) and sign LMS (SLMS) based ADFEs.
Partial Update Sign Normalized LMS based Adaptive Decision Feedback Equalizer
Here only a part of the filter coefficients is updated at each iteration, without reducing the order of the filter, in a manner which degrades algorithm performance as little as possible.
Two types of partial update LMS algorithms:
-- periodic LMS algorithm
-- sequential LMS algorithm
T. Aboulnasr et al. proposed the M-Max NLMS algorithm, where the filter coefficients are obtained from the minimization of a modified a posteriori error expression.
T. Schertler et al. proposed the selective block update NLMS algorithm, which updates the filter coefficients on a block basis.
K. Dogancay et al. proposed the selective partial update NLMS algorithm, where the selection criterion is obtained from the solution of a constrained optimization problem.
S. Werner et al. proposed the data-selective partial updating NLMS algorithm, which uses the set membership filtering method.
G. Mahesh et al. proposed the stochastic partial update LMS algorithm, where the filter coefficients are updated in a random manner.
Proposed Implementation
Let us assume that the feedforward and feedback filters are FIR of even length L.
For the instant n, the filter coefficients W(n) are separated into even- and odd-indexed terms as
W_e(n) = [w_2(n), w_4(n), w_6(n), ..., w_L(n)]^t
W_o(n) = [w_1(n), w_3(n), w_5(n), ..., w_{L-1}(n)]^t
W(n) = [W_e^t(n), W_o^t(n)]^t
Let the input sequence of the filter be
X(n) = [x(n), x(n-1), x(n-2), ..., x(n-L+1)]^t
separated into even and odd sequences as
X_e(n) = [x(n-1), x(n-3), ..., x(n-L+1)]^t
X_o(n) = [x(n), x(n-2), ..., x(n-L+2)]^t
The desired response d(n) is given by
d(n) = W_opt^t(n) X(n)
where the optimum filter coefficient vector is given by
W_opt(n) = [W_{1,opt}(n), W_{2,opt}(n), ..., W_{L,opt}(n)]^t
For odd n, the filter coefficients updated using the partial update LMS algorithm (PLMS) are given by
W_e(n+1) = W_e(n) + mu e(n) X_e(n)
W_o(n+1) = W_o(n)
For even n, the filter coefficients are
W_e(n+1) = W_e(n)
W_o(n+1) = W_o(n) + mu e(n) X_o(n)
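The alternating half-update can be sketched as follows (illustrative filter and step size of mine; the code uses 0-based indexing, so the "even" index set here corresponds to the odd-numbered taps of the text, but the alternation principle is the same):

```python
import numpy as np

# Partial update LMS: on odd n one half of the taps is updated, on even
# n the other half, halving the per-sample update cost at the price of
# slower convergence.
rng = np.random.default_rng(8)
w_true = np.array([0.4, -0.2, 0.3, 0.1])   # even length L = 4
L, mu, N = len(w_true), 0.02, 6000

x = rng.standard_normal(N)
w = np.zeros(L)
half_a = np.arange(0, L, 2)                # one half of the tap indices
half_b = np.arange(1, L, 2)                # the other half
for n in range(L - 1, N):
    xv = x[n - L + 1:n + 1][::-1]
    e = w_true @ xv - w @ xv               # e(n) = d(n) - y(n)
    idx = half_a if n % 2 == 1 else half_b # alternate halves
    w[idx] += mu * e * xv[idx]

print("PLMS estimate:", np.round(w, 3))
```

Each tap is still updated every other sample, so the estimate converges to the full-update solution, only roughly half as fast.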
The error sequence e(n) is given by
e(n) = d(n) - y(n)
The actual output of the filter is given by
y(n) = w^t(n) X(n)
The coefficient error vectors are defined as
V_e(n) = W_e(n) - W_{e,opt}
V_o(n) = W_o(n) - W_{o,opt}
V(n) = W(n) - W_opt
V(n) = [V_e^t(n), V_o^t(n)]^t
The necessary and sufficient condition for stability of the recursion is given by
0 < mu < 2 / lambda_max
where lambda_max is the maximum eigenvalue of the input signal correlation matrix.
The adaptive filter coefficients are updated by the partial update signed-regressor LMS algorithm (PSRLMS) as
W(n+1) = W(n) + mu sgn{phi(n)} e(n)
using the partial update sign-sign LMS algorithm (PSSLMS) as
W(n+1) = W(n) + mu sgn{phi(n)} sgn{e(n)}
and using the partial update sign LMS algorithm (PSLMS) as
W(n+1) = W(n) + mu phi(n) sgn{e(n)}
where sgn{.} is the well-known signum function and
sgn{phi(n)} = [sgn{phi(n)}, sgn{phi(n-1)}, ..., sgn{phi(n-L+1)}]^t
The weight updating equation using the partial update normalized signed-regressor LMS algorithm (NPSRLMS) is written as
w(n+1) = w(n) + mu(n) sgn{phi(n)} e(n)
where mu(n) is given by
mu(n) = alpha / (eps + ||x(n)||^2)
and
||x(n)||^2 = X^t(n) X(n)