
MEMS Based MicroResonator Design & Simulation Based On

Comb-Drive Structure
Mr. Prashant Gupta
prashant_iit@ieee.org
Ideal Institute of Technology, Ghaziabad

Abstract:- Resonators serve as essential components in Radio-Frequency (RF) electronics, forming the backbone of filters and tuned amplifiers. However, traditional solid-state or mechanical implementations of resonators and filters tend to be bulky and power hungry, limiting the versatility of communications, guidance, and avionics systems. MicroElectro-Mechanical Systems (MEMS) are promising replacements for traditional RF circuit components.
In this paper we discuss the MEMS resonator, which is one of the versatile components in RF circuits, based on one of the promising architectures known as the comb-drive structure.

Introduction:
A resonator is a device or system that exhibits resonance or resonant behavior, that is, it naturally oscillates at some frequencies, called its resonant frequencies, with greater amplitude than at others. The oscillations in a resonator can be either electromagnetic or mechanical (including acoustic). Resonators are used either to generate waves of specific frequencies or to select specific frequencies from a signal.

A physical system can have as many resonant frequencies as it has degrees of freedom; each degree of freedom can vibrate as a harmonic oscillator. Systems with one degree of freedom, such as a mass on a spring, pendulums, balance wheels, and LC tuned circuits, have one resonant frequency. Systems with two degrees of freedom, such as coupled pendulums and resonant transformers, can have two resonant frequencies. The vibrations in them begin to travel through the coupled harmonic oscillators in waves, from one oscillator to the next. Resonators can be viewed as being made of millions of coupled moving parts (such as atoms). Therefore they can have millions of resonant frequencies, although only a few may be used in practical resonators. The vibrations inside them travel as waves, at an approximately constant velocity, bouncing back and forth between the sides of the resonator. The oppositely moving waves interfere with each other to create a pattern of standing waves in the resonator. If the distance between the sides is L, the length of a round trip is 2L. In order to cause resonance, the phase of a sinusoidal wave after a round trip has to be equal to the initial phase, so the waves will reinforce. So the condition for resonance in a resonator is that the round trip distance, 2L, be equal to an integral number N of wavelengths \lambda of the wave:

2L = N\lambda, \qquad N = 1, 2, 3, \ldots

If the velocity of a wave is v, the frequency is f = v/\lambda, so the resonance frequencies are:

f = \frac{Nv}{2L}

So the resonant frequencies of resonators, called normal modes, are equally spaced multiples (harmonics) of a lowest frequency called the fundamental frequency. The above analysis assumes the medium inside the resonator is homogeneous, so the waves travel at a constant speed, and that the shape of the resonator is rectilinear. If the resonator is inhomogeneous or has a non-rectilinear shape, like a circular drumhead or a cylindrical microwave cavity, the resonant frequencies may not occur at equally spaced multiples of the fundamental frequency. They are then called overtones instead of harmonics. There may be several such series of resonant frequencies in a single resonator, corresponding to different modes of vibration.
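As a quick numerical illustration of the relation f = Nv/2L above, here is a minimal sketch (the cavity length and wave velocity are arbitrary example values, not parameters from this paper):

```python
# Harmonics of an idealized one-dimensional resonator: f_N = N * v / (2 * L)
def resonance_frequencies(length_m, wave_velocity_mps, num_modes=5):
    """Return the first num_modes resonant frequencies (Hz) of a 1-D resonator."""
    return [n * wave_velocity_mps / (2.0 * length_m) for n in range(1, num_modes + 1)]

if __name__ == "__main__":
    # Example: a 1 mm cavity and a wave velocity of 8000 m/s (illustrative values only)
    for n, f in enumerate(resonance_frequencies(1e-3, 8000.0), start=1):
        print(f"mode {n}: {f / 1e6:.2f} MHz")
```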
MEMS Resonators:-
Mechanical resonators are highly sensitive probes for physical or chemical parameters which alter their potential or kinetic energy [1,2]. Silicon resonant microsensors for measurement of pressure, acceleration, and vapor concentration have been demonstrated recently; polysilicon micro-mechanical structures have been resonated electrostatically parallel to the plane of the substrate by means of one or more interdigitated capacitors (electrostatic combs).

Some advantages of this approach are
(1) less damping on the structure, leading to higher quality factors,
(2) linearity of the electrostatic-comb drive and
(3) flexibility in the design of the suspension for the resonator.

For example, folded-beam suspensions can be fabricated without increased process complexity, which is attractive for releasing residual strain and for achieving large-amplitude vibrations.

There are different types of resonator. We only focus on vibrating resonators.
• Lateral movement
  – Parallel to substrate
  – Ex.: folded-beam comb structure
• Vertical movement
  – Perpendicular to substrate
  – Ex.: clamped-clamped beam (c-c beam)
  – "free-free beam" (f-f beam)

Example of simple resonators

Mass and spring. This resonator is used by many physicists as the elemental simple mechanical resonator, to explain the properties of more complex resonances and resonators.

The governing homogeneous differential equation is

m\ddot{y} + R\dot{y} + ky = 0

for a vertical displacement y from the equilibrium position, mass m and spring constant k = f/y; R is the damping coefficient.
The angular resonant frequency is given by

\omega_0 = \sqrt{k/m}

Folded-Flexure comb drive Microresonator:-

In the design of a resonator, the spring constant plays a vital role. Different types of spring designs have been applied in comb-drive actuators:
1- Clamped-clamped beams,
2- A crab-leg flexure and
3- A folded-beam flexure.
Among these spring designs, the folded-beam structure is the one widely used to design a microresonator. The folded-flexure electrostatic comb drive micromechanical resonator shown in Figure 1 was first introduced by Tang [4,5,6]. This device has been well researched and is commonly used for MEMS process characterization. The microresonator consists of a movable central shuttle mass which is suspended by folded-flexure springs on either side. The other ends of the folded-flexure springs are fixed to the lower layer. The microresonator can be thought of as a spring-mass-damper system, the damping being provided by the air below and above the movable part. By applying a voltage across the fixed and movable comb fingers, an electrostatic force is produced which sets the mass into motion in the x-direction. The microresonator has been used in building filters, oscillators and resonant positioning systems. Figure 1 shows the overhead view of a µresonator which utilizes interdigitated-comb finger transduction in a typical bias and excitation configuration. The resonator consists of a finger-supporting shuttle mass suspended above the substrate by folded flexures, which are anchored to the substrate at two central points. The shuttle mass is free to move in the direction
indicated, parallel to the plane of the silicon substrate. Folding the suspending beams as shown provides two main advantages: first, post-fabrication residual stress is relieved if all beams expand or contract by the same amount; and second, spring-stiffening nonlinearity in the suspension is reduced, since the folding truss is free to move in a direction perpendicular to the resonator motion. The black areas are the places where the polysilicon structure is anchored to the bottom layer.

Fig.1 Layout of the lateral folded-flexure comb drive microresonator

Modeling the Oscillation Modes of the Microresonator:-

The preferred direction of motion of the microresonator is the x-direction. However, the microresonator structure can vibrate in other modes. There are the three translation modes along x, y and z, three rotational modes about x, y and z, and oscillation modes due to the movement of the folded-flexure beams and the comb drive. Each oscillation mode is described by a lumped second-order equation of motion. For any generalized displacement ζ, we can write:

m_\zeta \ddot{\zeta} + B_\zeta \dot{\zeta} + k_\zeta \zeta = F_{e,\zeta}

where F_{e,\zeta} is the external force (in the x-mode this force is generated by the comb drives), m_\zeta is the effective mass, B_\zeta is the damping coefficient, and k_\zeta is the spring constant.
The fundamental frequency of the structure can be obtained from Rayleigh's Quotient.

The fundamental resonance frequency of this mechanical resonator is, again, determined largely by material properties and by geometry, and is given by the expression

f_r = \frac{1}{2\pi}\sqrt{\frac{2Eh(W/L)^3}{M_p + \tfrac{1}{4}M_t + \tfrac{12}{35}M_b}}

where E is the Young's modulus of the structural material, M_p is the shuttle mass, M_t is the mass of the folding trusses, M_b is the total mass of the suspending beams, W and h are the cross-sectional width and thickness, respectively, of the suspending beams, and L is indicated in Fig.1.
The expression for the damping coefficient involves the air viscosity and the layout geometry: µ is the viscosity of air, d is the fixed spacer gap between the ground plane and the bottom surface of the comb fingers, δ is the penetration depth of the airflow above the structure, g is the gap between comb fingers, and A_s, A_t, A_b, and A_c are the layout areas of the shuttle, truss beams, flexure beams, and comb finger sidewalls, respectively.
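A small numerical sketch of the fundamental-frequency expression above. The material and geometry values below are illustrative polysilicon-like placeholders, not the dimensions of the fabricated device:

```python
import math

def folded_flexure_resonance(E, h, W, L, Mp, Mt, Mb):
    """Fundamental resonance frequency (Hz) of a folded-flexure comb-drive
    resonator: f_r = (1 / 2*pi) * sqrt(k / m_eff), with system spring constant
    k = 2*E*h*(W/L)**3 and effective mass m_eff = Mp + Mt/4 + (12/35)*Mb."""
    k = 2.0 * E * h * (W / L) ** 3                # N/m
    m_eff = Mp + Mt / 4.0 + (12.0 / 35.0) * Mb     # kg
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

if __name__ == "__main__":
    # Illustrative values only (roughly a 2 um thick polysilicon film)
    E = 150e9        # Young's modulus, Pa
    h = 2e-6         # structural thickness, m
    W = 2e-6         # suspending beam width, m
    L = 100e-6       # suspending beam length, m
    Mp, Mt, Mb = 20e-12, 2e-12, 4e-12   # shuttle, truss and beam masses, kg
    print(f"f_r = {folded_flexure_resonance(E, h, W, L, Mp, Mt, Mb) / 1e3:.1f} kHz")
```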
Working Principle:-

To bias and excite the device, a dc-bias voltage VP is applied to the resonator and its underlying ground plane, while an ac excitation voltage is applied to one (or more) drive electrodes. A specific resonance mode may be emphasized by using multiple drive electrodes, placing them at the displacement maxima of the desired mode, and applying properly phased drive signals to the electrodes. To avoid unnecessary notational complexity, however, we focus on the case of fundamental-mode resonance in the present discussion. We also assume that the electrodes are concentrated at the center of the beam and that the beam length is much greater than the electrode lengths. This allows us to neglect beam displacement variations across the lengths of the electrodes due to the beam's mode shape (i.e., we may assume that x(y) ~ x for y near the center of the beam). A more rigorous analysis which accounts for all of these effects is certainly possible, but obscures the main points. When an ac excitation with frequency close to the fundamental resonance frequency of the µresonator is applied, the µresonator begins to oscillate, creating a time-varying capacitance between the µresonator and the electrodes. Since the dc-bias VPn = VP - Vn is effectively applied across the time-varying capacitance at port n, a motional output current arises at port n.

For this resonator design, the transducer capacitors consist of the overlap capacitance between the interdigitated shuttle and electrode fingers. As the shuttle moves, these capacitors vary linearly with displacement. Thus, \partial C_n/\partial x is a constant, given approximately by the expression

\frac{\partial C_n}{\partial x} \approx \frac{\alpha N_g \varepsilon_0 h}{d}

where N_g is the number of finger gaps, h is the film thickness, d is the gap between electrode and resonator fingers, and \varepsilon_0 is the permittivity of free space. α is a constant that models additional capacitance due to fringing electric fields. For comb geometries, α = 1.2. Note that, again, \partial C_n/\partial x is inversely proportional to the gap distance.

Linear equations for the spring constants are derived using energy methods. A force (or moment) is applied to the free end(s) of the spring in the direction of interest, and the displacement is calculated symbolically (as a function of the design variables and the applied force). In these calculations different boundary conditions are applied for the different modes of deformation of the spring.
When forces (moments) are applied at the end-points of the flexure, the total energy of deformation, U, is calculated as:

U = \sum_i \int_0^{L_i} \left[ \frac{M_i^2(\xi)}{2EI_i} + \frac{T_i^2(\xi)}{2GJ_i} \right] d\xi

where L_i is the length of the i'th beam in the flexure, M_i is the bending moment transmitted through beam i, E is the Young's modulus of the material of the beam (polysilicon, in our case), I_i is the moment of inertia of beam i about the relevant axis, T_i is the torsion transmitted through beam i, G is the shear modulus, J_i is the torsion constant of beam i, and ξ is the variable along the length of the beam. The bending moment and the torsion are linear functions of the forces and moments applied to the end-points of the flexure. The displacement of an end-point of the flexure in any direction ζ is given as:

\zeta = \frac{\partial U}{\partial F_\zeta}

where F_\zeta is the force applied in that direction at that end-point. Similarly, angular displacements can be related to applied moments.
Our aim here is to obtain the displacement in the direction of interest as a function of the applied force in that direction. Applying the boundary conditions, we obtain a set of linear
equations in terms of the applied forces and moments and the unknown displacement. Solving the set of equations yields a linear relationship between the displacement and the applied force in the direction of interest. The constant of proportionality gives the spring constant as a function of the physical dimensions of the flexure.

The effect of spring mass on the resonance frequency is incorporated in effective masses for each lateral mode. The effective mass for each mode of interest is calculated by normalizing the total maximum kinetic energy of the spring by the maximum shuttle velocity, V_max, where m_i and L_i are the mass and length of the i'th beam in the flexure. Analytic expressions for the velocities, v_i, along the flexure's beams are approximated from static deformation shapes and are found from the spring constant derivations.

Design Variables:-
Fifteen design variables are identified for the µresonator. The design variables are listed in Table 1 and shown in Fig.2. These include 13 geometrical parameters (shown in Fig. 2), the number of fingers in the comb drive, N, and the effective voltage, V, applied to the comb drive.

Fig.2 Dimensions of the microresonator elements: (a) shuttle mass, (b) folded flexure, (c) comb drive with N movable 'rotor' fingers, (d) close-up view of comb fingers.

The displacement as a function of the driving voltage was measured while applying a dc voltage between the rotor (movable set) and a stator (stationary set).

Table 1: Design and style variables for the microresonator. Upper and lower bounds are in units of µm except N and V.

Quality Factor (Q):-
It describes how underdamped an oscillator or resonator is. A higher Q indicates a lower rate of energy loss relative to the stored energy. For the x-mode,

Q = \frac{\sqrt{km}}{B}

where
x - x direction
m - mass
k - spring constant
B - damping coefficient.
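To make the comb-drive transduction and the quality-factor definition above concrete, here is a minimal sketch. The numerical values are placeholders (not the device parameters of Table 1), the fringing factor α = 1.2 is taken from the text, and the force expression F = ½(∂C/∂x)V² is the standard comb-drive force, used here as an assumption rather than a formula quoted from the paper:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def dC_dx(num_gaps, thickness, gap, alpha=1.2):
    """Approximate comb-drive capacitance gradient dC/dx (F/m)."""
    return alpha * num_gaps * EPS0 * thickness / gap

def drive_force(dCdx, voltage):
    """Standard electrostatic comb-drive force F = 0.5 * (dC/dx) * V**2 (N)."""
    return 0.5 * dCdx * voltage ** 2

def quality_factor(k, m, B):
    """Q = sqrt(k*m)/B for a second-order spring-mass-damper mode."""
    return math.sqrt(k * m) / B

if __name__ == "__main__":
    dcdx = dC_dx(num_gaps=24, thickness=2e-6, gap=2e-6)   # example geometry
    print(f"dC/dx   = {dcdx * 1e9:.3f} nF/m")
    print(f"F @ 10V = {drive_force(dcdx, 10.0) * 1e9:.2f} nN")
    print(f"Q       = {quality_factor(k=4.8, m=22e-12, B=2e-7):.0f}")
```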
Simulation Process:-
Steps for the IntelliSuite Simulator:

1- Design the appropriate mask or masks for your design in IntelliMask.
2- Fabricate the device using IntelliFab and visualize it.
3- Perform different types of analysis (static or frequency) with the help of TEM.
4- Get the results.

Fig.3: MEMS microresonator mask structure using IntelliMask
Fig.4: MEMS microresonator process flow using IntelliFab
Fig.5: MEMS microresonator TEM structure using TEM Analysis
*Capacitance Report
Number of conductors: 2
CAPACITANCE MATRIX, 1e-6 nanofarads * 1e-6
C11  9.334000
C12 -1.037000
C21 -1.037000
C22  2.767000

*Natural Frequency Report
*Unit: Hz
*Mode Number: 6
Mode 1 Frequency 23347.1 (natural frequency or resonant frequency)
Mode 2 Frequency 39248.8
Mode 3 Frequency 40138
Mode 4 Frequency 51.6151
Mode 5 Frequency 70.8529

Fig.6: MEMS microresonator pressure distribution

Resonator Simulation Results:-
With the help of the simulation process we obtain the resonant frequency for different parameters. We can also find out the displacement, pressure distribution, charge distribution, stress, linear motion, etc. The pressure distribution and charge distribution are shown in Fig.6 and Fig.7.

Table 2: Calculated and measured resonant frequencies of a set of comb-drive structures
S.No. | No. of Fingers | Finger Length (µm) | Finger Width (µm) | Gap (µm) | Calculated resonant frequency (kHz) | Measured resonant frequency (kHz)
1     | 12             | 20                 | 2                 | 2        | 23.4                                | 22.8
2     | 12             | 30                 | 2                 | 2        | 22.6                                | 22.1
3     | 12             | 40                 | 2                 | 2        | 21.9                                | 22
4     | 12             | 50                 | 2                 | 2        | 21.3                                | 21.2
5     | 12             | 40                 | 3                 | 2        | 20.4                                | 20.3
6     | 12             | 40                 | 4                 | 2        | 19.1                                | 19.1

Fig.7: MEMS microresonator charge distribution
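A short script to quantify the agreement between the calculated and measured resonant frequencies of Table 2 (the numbers are copied from the table; the percentage-error summary is my own illustration, not a metric reported in the paper):

```python
# (fingers, length_um, width_um, gap_um, f_calc_kHz, f_meas_kHz) taken from Table 2
rows = [
    (12, 20, 2, 2, 23.4, 22.8),
    (12, 30, 2, 2, 22.6, 22.1),
    (12, 40, 2, 2, 21.9, 22.0),
    (12, 50, 2, 2, 21.3, 21.2),
    (12, 40, 3, 2, 20.4, 20.3),
    (12, 40, 4, 2, 19.1, 19.1),
]

for i, (_, length, width, _, f_calc, f_meas) in enumerate(rows, start=1):
    err = 100.0 * (f_calc - f_meas) / f_meas
    print(f"row {i}: L={length} um, W={width} um, "
          f"calculated {f_calc} kHz vs measured {f_meas} kHz ({err:+.1f}%)")
```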
Conclusion and Future Work:-

In this project we design and simulate a microresonator based on the comb-drive structure which was introduced by Tang. We design it and calculate the resonance frequency for different geometry parameters.

There are two types of constraints in the comb-drive structure (1- geometric and 2- functional) which we have not discussed here; they are left for future work.

The project work can be extended in a number of directions. Manufacturing variations need to be incorporated for accurate synthesis results.

Fabrication of the MEMS resonator is also a big issue which we have not discussed in our work; it is left for future work.

The spring constant can also be designed in different styles, which is also left for future work. After designing and calculating the resonance frequency for different shapes, we go to the simulation process, simulate the structures and get the results, which are shown in the table.

From all this work, I would like to conclude the following points.

To achieve a high resonance frequency:
– the total spring constant should increase,
– or the dynamic mass should decrease
  (difficult, since a given number of fingers is needed for electrostatic actuation);
– k and m depend on material choice, layout and dimensions.
• k/m expresses the spring constant relative to mass:
– the frequency can be increased by using a material with a larger k/m ratio than Si.

Acknowledgements:
This research work was carried out at CARE, IIT Delhi under the supervision of Prof. Sudhir Chandra, CARE, IIT Delhi. I am also grateful to my college Director Dr. G. P. Govil and my Head of the Department Mr. N.P. Gupta for their kind-hearted support and motivation during the research work.

References:
1. S. M. Sze, Semiconductor Sensors, John Wiley & Sons Inc., New York, 1994.
2. Ljubisa Ristic, "Sensor Technology and Devices", Artech House, ISBN 0-89006-532-2, 1994.
3. G. K. Fedder and T. Mukherjee, "Automated Optimal Synthesis of Microresonators," Proc. 9th Intl. Conf. on Solid-State Sensors and Actuators (Transducers '97), Chicago, IL, June 16-19, 1997.
4. W. C. Tang, T.-C. H. Nguyen, M. W. Judy, and R. T. Howe, "Electrostatic Comb Drive of Lateral Polysilicon Resonators," Sensors and Actuators A, 21 (1990) 328-331.
5. X. Zhang and W. C. Tang, "Viscous Air Damping in Laterally Driven Microresonators," Sensors and Materials, vol. 7, no. 6, 1995, pp. 415-430.
6. W. C. Tang, T.-C. H. Nguyen and R. T. Howe, "Laterally Driven Polysilicon Resonant Microstructures," IEEE Micro Electro Mechanical Systems Workshop, Salt Lake City, UT, USA, Feb. 20-22, 1989, pp. 53-59.
7. C. T. C. Nguyen, MTT-S 1999 (http://www.eecs.umich.edu/~ctnguyen/mtt99.pdf).
8. Andrew Potter, "Fabrication and Modeling of Piezoelectric RF MEMS Resonators", Department of Physics and Division of Engineering, Brown University.
9. Roger T. Howe, "Applications of Silicon Micromachining to Resonator Fabrication", 1994 IEEE International Frequency Control Symposium.
10. Clark T. C. Nguyen, "Frequency-Selective MEMS for Miniaturized Communication Devices", 1998 IEEE Aerospace Conference, vol. 1, Snowmass, Colorado.
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

Different Look-Ahead Algorithm for Pipelined Implementation of Recursive Digital Filters

Krishna Raj, Vivekanand Yadav
Krishna Raj is with the Deptt. of Electronics Engg., HBTI, Kanpur-208002, India, Email: kraj_biet@yahoo.com. Vivekanand Yadav, M.Tech., is from the Deptt. of Electronics Engg., HBTI, Kanpur, Email: vivekanand.hbti@gmail.com.

Abstract—Look-ahead techniques can pipeline IIR digital filters to attain high sampling rates. The existing look-ahead schemes such as CLA and SLA are special cases of the proposed DLA scheme (a new LA scheme) for the pipelined implementation of recursive digital filters. It can also be used to provide an equivalent and stable pipelined implementation with reduced pipeline delay and hardware when compared with the existing look-ahead schemes; a comparison between the DLA and SLA schemes is given.

Index Terms—Clustered look-ahead (CLA), scattered look-ahead (SLA), Look-Ahead (LA) and Distributed Look-Ahead (DLA).

I. INTRODUCTION
Look-ahead techniques have been highly effective in attaining high sampling rate and computation speed for low-cost VLSI implementation of recursive digital filters [1, 4, 6, 7, and 9]. There are several LA approaches. One is referred to as the CLA algorithm or time-domain approach [4, 6, 7], which clusters the past output data to achieve pipelined IIR filters; CLA cannot be guaranteed to be stable. Another is the SLA algorithm or z-domain approach [1, 8], which uses equally separated past output data and yields stable pipelined IIR filters with a linear increase in hardware. Now, the distributed look-ahead (DLA) algorithm combines [2] the two schemes above to reach a stable design with reduced pipeline delay and hardware complexity.
An M-stage LA pipelined recursive filter can be obtained by multiplying the numerator and the denominator of the transfer function by an augmented polynomial, D(z). By choosing a proper order and coefficients of D(z), we obtain either the M-stage CLA pipelined filter or the M-stage SLA pipelined filter.

II. EXISTING LOOK-AHEAD ALGORITHMS
The transfer function of an Nth-order recursive filter is described by

H(z) = \frac{\sum_{k=0}^{L} b_k z^{-k}}{1 - \sum_{k=1}^{N} a_k z^{-k}}     (1)

The LA algorithm finds the augmented polynomial D(z), where

D(z) = 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots     (2)

Then the pipelined filter is attained by multiplying D(z) into both the denominator and the numerator of H(z) [10]:

H_p(z) = \frac{\big(\sum_{k=0}^{L} b_k z^{-k}\big) D(z)}{\big(1 - \sum_{k=1}^{N} a_k z^{-k}\big) D(z)}     (3)

For different LA algorithms, the pipelined IIR filter transfer functions H_p(z) are in different forms. Three existing LA algorithms are summarized here.
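As a concrete illustration of the basic look-ahead idea (multiplying numerator and denominator by an augmented polynomial so that the recursion only needs an output M samples old), the sketch below pipelines a first-order recursive filter y[n] = a·y[n-1] + x[n] by M stages. The filter coefficient and stage count are illustrative assumptions, not the specific filters considered later in the paper:

```python
import numpy as np

def direct_first_order(x, a):
    """Reference (non-pipelined) recursion y[n] = a*y[n-1] + x[n]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a * (y[n - 1] if n > 0 else 0.0) + x[n]
    return y

def lookahead_first_order(x, a, M):
    """M-stage look-ahead version: multiply numerator and denominator by
    D(z) = sum_{k=0}^{M-1} a**k * z**-k, which gives
    y[n] = a**M * y[n-M] + sum_{k=0}^{M-1} a**k * x[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        feedback = (a ** M) * y[n - M] if n >= M else 0.0
        feedforward = sum((a ** k) * x[n - k] for k in range(M) if n - k >= 0)
        y[n] = feedback + feedforward
    return y

if __name__ == "__main__":
    x = np.random.default_rng(0).standard_normal(64)
    a, M = 0.9, 4
    # The pipelined recursion is input-output equivalent to the original filter.
    print(np.allclose(direct_first_order(x, a), lookahead_first_order(x, a, M)))
```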

A. Clustered Look-Ahead Algorithm
For the M-stage CLA pipelined IIR filter, the denominator of the transfer function can be expressed in the form of

(4)

where M is the pipelining stage and the coefficients are those of the pipelined filter. The output data y(n) can be described by the cluster of N past data y(n-M), y(n-M-1), ..., and y(n-M-N+1) [2]. The augmented polynomial coefficients can be found by iterative calculation as follows:

(5)

Then the M-stage pipelining of the Nth-order recursive filter is obtained as in (4), (6), (7), and (9):

H(z) =     (6)

The total multiplication complexity is (2N+M) and the latch complexity is linear in M. The extra delay in producing the output is M [11].

B. SCATTERED LOOK-AHEAD ALGORITHM
For the M-stage SLA pipelined IIR filter, the denominator of the transfer function is obtained as

(7)
The denominator of the resulting transfer function contains N scattered terms z^{-M}, ..., z^{-NM} [3]. The coefficients can be obtained by solving N(M-1) simultaneous equations,

for i = 2, ..., M-1, M+1, ..., tM-1, tM+1, ..., NM-1.

Then an equivalent M-stage pipelined version of the same-order recursive filter can be obtained as [1, 8]:

H(z) =     (8)

The total multiplication complexity is (NM+N+1) and the latch complexity is square in M. The extra delay in producing the output is (NM-N) [11]. If M is a power of 2, then using the decomposition technique the total multiplication and latch complexity can be further reduced [1]. The architecture is shown in Fig. 1(b).

C. Distributed Look-Ahead Pipelining
Pipelining of the following filter transfer function

H(z) =

Since it must equal the original H(z), it can also be obtained by multiplying the original filter by an augmentation polynomial D(z) both in the numerator and the denominator, i.e.,

where D(z) = 1 + ............ +

Initialize: ... = -
Iterate: for i = 2 to (M-1)

According to the Distributed Look-Ahead (DLA) transformation, the M-stage pipelined filter transfer function would have the following general form:

(9)

The coefficients of the non-recursive portion of the pipelined filter are unequally distributed, and it can be implemented with (...) multiplications, and the recursive portion with (L+1) multiplications; hence the total number of multiplications is (...) and the latch complexity is linear in M. The CLA and SLA schemes are special classes of the DLA scheme. An M-stage CLA pipelined version of an Nth-order recursive filter is obtained by substituting in (1) [2, 4, 6, and 7]. Similarly, an M-stage SLA pipelined version of the same-order recursive filter can be produced by substituting in (1) [1, 2, 8]. It is used for high-speed modular implementation of stable 2-D denominator-separable IIR filters.

Fig. 1: LA pipelined IIR filters: (a) CLA realization, (b) SLA realization, (c) DLA realization.

III. COMPARATIVE ANALYSIS

Table 1
Pipelining Method | Multiplication Complexity | Delay in First Output | Extra Delay in Output
CLA               | L+M+N-1                   | M                     | M
SLA               | NM+L                      | NM                    | NM-N
DLA               | M+...                     | M                     | ...

Table 2
                    | M=3        | M=4        | M=6        | M=8
Method              | SLA | DLA  | SLA | DLA  | SLA | DLA  | SLA | DLA
No. of MUL/adder    | 6   | 5    | 6   | 5    | 8   | 6    | 8   | 7
No. of Latches      | 10  | 8    | 14  | 10   | 22  | 14   | 30  | 18
Delay in 1st output | 6   | 5    | 8   | 6    | 12  | 8    | 16  | 10

IV. CONCLUSIONS
The denominator order using DLA, (M + ...), is less than the order with SLA (NM), and the DLA-transformed filter is stable, so the proposed scheme offers considerable hardware savings over SLA. The multiplication and latch complexity are lower than with SLA.
Pipeline delay and hardware complexity are reduced compared to SLA. From the pole-zero plots below we see that as the number of stages increases, the filter becomes more stable. From the plotted results we conclude that the number of multipliers/adders is lower for DLA than for SLA, because SLA attains a greater value at each stage than DLA. The number of latches is also lower for DLA than for SLA, because the values attained by SLA are very large compared to DLA; similarly, the delay in producing the first output is also lower for DLA than for SLA.
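As a short numerical check of the stability behaviour discussed above, the sketch below uses a generic first-order filter as an assumption for illustration (the paper's own example transfer function is not reproduced): for H(z) = 1/(1 - a·z⁻¹), the M-stage scattered look-ahead denominator is 1 - aᴹ·z⁻ᴹ, whose poles all have magnitude |a| and therefore stay inside the unit circle whenever the original filter is stable.

```python
import numpy as np

def sla_pole_magnitudes(a, M):
    """Pole magnitudes of the M-stage scattered look-ahead denominator
    1 - a**M * z**-M for the first-order filter H(z) = 1 / (1 - a*z**-1)."""
    denom = np.zeros(M + 1)
    denom[0] = 1.0          # coefficient of z**M
    denom[M] = -(a ** M)    # constant term, so the polynomial is z**M - a**M
    return np.abs(np.roots(denom))

if __name__ == "__main__":
    a = 0.95                      # original (stable) pole
    for M in (3, 4, 6, 8):        # stage counts used in Table 2
        mags = sla_pole_magnitudes(a, M)
        print(f"M={M}: max |pole| = {mags.max():.3f}")   # equals |a| = 0.95 for every M
```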

Examples

H(z) =

Fig. 2: Pole-zero plots for CLA: (a) M=3, (b) M=4, (c) M=5, (d) M=6 [only (d) is stable].
Fig. 3: Pole-zero plots for SLA: (a) M=3, (b) M=4 [both stable].
Fig. 4: Pole-zero plots for DLA: (a) M=3, (b) M=4 [both stable].
Fig. 5: Comparison of (a) DLA and (b) SLA (using Table 1 and Table 2).

V. REFERENCES
[1] K. K. Parhi and D. G. Messerschmitt, "Pipeline interleaving and parallelism in recursive digital filters - Part I: Pipelining using scattered look-ahead and decomposition," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 37, no. 7, pp. 1099-1117, July 1989.
[2] A. K. Shaw and M. Imtiaz, "A general Look-Ahead algorithm for pipelining IIR filters," in Proc. IEEE ISCAS, 1996, pp. 237-240.
[3] Y. C. Lim, "A new approach for deriving scattered coefficients of pipelined IIR filters," IEEE Trans. Signal Processing, vol. 43, pp. 2405-2406, 1995.
[4] H. H. Loomis and B. Sinha, "High-speed Recursive Digital Filter Realization", Circuits, Systems and Signal Processing, vol. 3, pp. 267-294, Sept. 1984.
[5] A. P. Chand, "Low Power CMOS Digital Design," IEEE J. of Solid-State Circuits, vol. 27, pp. 473-484, Apr. 1992.
[6] P. M. Kogge, The Architecture of Pipelined Computers, New York, Hemisphere Publishing Corporation, 1981.
[7] Y. C. Lim and B. Liu, "Pipelined Recursive Filter with Minimum Order Augmentation", IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1643-1651, July 1992.
[8] M. A. Soderstrand, K. Chopper and B. Sinha, "Comparison of three new techniques for pipelining IIR digital filters," 23rd ASILOMAR Conference on Signals, Systems & Computers, Pacific Grove, CA, pp. 439-443, Nov. 1984.
[9] H. B. Voelcker and E. E. Hartquist, "Digital Filtering via Block Recursion", IEEE Trans. Audio Electroacoust., vol. AU-18, pp. 169-176, June 1970.
[10] Yen-Liang Chen, Chun-Yu Chen, Kai-Yuan Jheng and An-Yeu (Andy) Wu, "A Universal Look-Ahead Algorithm For Pipelining IIR Filters", IEEE Trans., 2008.
[11] A. K. Shaw and M. Imtiaz, "New Look-Ahead Algorithm for Pipelined Implementation of Recursive Digital Filters," in Proc. IEEE ISCAS, 1996, pp. 3229-323.
Study of MC-EZBC and H.264 Video Codec

Agha Asim Husain¹ and Agha Imran Husain²
¹Deptt of Electronics & Comm. Engg, ITS Engg College, 201301, India
²Deptt of Computer Science & Engg, MRCE, 121004, India
Email: aghahusain@gmail.com, aghaimrank2k@yahoo.com

Abstract: This paper proposes a new aspect of comparing two video codecs on a rate-distortion basis. Scalable coding provides a straightforward solution for video coding that can serve a broad range of applications without the need for transcoding. Even though the latest international video-coding standards do not provide fully scalable methods, H.264 provides the best rate-distortion performance. Its rate-distortion performance is compared with that of the Motion Compensated Embedded Zero Block Context (MC-EZBC) coder, which is fully scalable.

Keywords— MC-EZBC, ME/MC sub-pixel accuracy, temporal level subband coding, YSNR.

I. INTRODUCTION
Modern video compression coding technology has improved significantly over the last few years and has enabled broadcasting of digital video signals over various networks [1]. Also, motion-compensated wavelet-based video coding has emerged as an important research topic to explore because of its ability to provide better quality. MC-EZBC [2][3] is one of the codecs that encodes the motion information in a non-scalable manner, which results in reduced coding efficiency at low bit rates. However, H.264 [4] is a non-scalable coding technique that provides good-quality video at substantially lower bit rates than previous standards like MPEG-2, H.263, or MPEG-4 Part 2, without increasing the complexity of design and cost.
In this paper we perform an analysis of the joint region of applicability between the MC-EZBC and H.264 video codecs. In MC-EZBC, by using a third and fourth level of temporal decomposition of the input video sequence, thereby obtaining GOP structures of 8 and 16 frames, and by studying the effect of sub-pixel-accurate motion estimation and compensation, a good comparison with H.264 is achieved in terms of coding efficiency [5].
The outline of the paper is as follows. After introducing the examined compression schemes in Section II, an overview of the applied methodology is provided in Section III. The obtained results are described in Section IV, while the conclusions are drawn in Section V.

II. VIDEO CODEC OVERVIEW
The two video codecs that were used in the tests are summed up in this section. Due to space constraints, the reader is referred to the references for further information on these codecs. The first one is a scalable wavelet-based video codec developed by J. Woods et al. (motion-compensated embedded zero block coding, MC-EZBC) [6][7]. The second video codec is the Ad Hoc Model 2.0 (AHM 2.0) implementation of the H.264 standard [4][8], which extends the JM 6.1 implementation [9] with a rate control algorithm [10].

III. MATERIALS AND METHODS

A. Encoding Process
This section describes how the two codecs were configured and used in order to obtain the bit streams necessary for performing the various measurements.

TABLE I: Sequences Used In Our Experiment
Name    | No. of frames | Abbreviation
Akiyo   | 300           | AK
Foreman | 300           | FO
Hall    | 300           | HA

As input, three progressive video sequences were used in raw Y Cb Cr 4:2:0 format. These were downloaded from the Hannover FTP server.

An overview of the sequences is given in Table I. The resolution used is the Common Intermediate Format (CIF, 352 x 288), thus resulting in 3 input video sequences. These sequences were encoded by making use of constant bit rate coding (CBR). Ten different target bit rates were used, covering both very low and very high bit rates: the bit rates taken are 100, 200, 300, ... 1000 kbps. At each bit rate, encoding was performed at 30 frames per second. The detailed settings for the different encoding parameters can be found in Table II and Table III.

The code of MC-EZBC was downloaded from the MPEG CVS server. Each input video sequence was encoded once and then pulled several times in order to get decodable bit streams for all target bit rates. The H.264 bitstreams conform to the Baseline and Main Profile. The GOP structure is IBBBP and the GOP length is 16.
TABLE II: Parameter Settings for the MC-EZBC Compressor
Parameter       | Value (CIF)                 | Comment
-inname         | akiyo.yuv                   | Name of input file containing a sequence of 4:2:0 frames
-statname       | akiyo_tpyrlev3_cif_mv0.stat | Name of output file containing some statistical information generated during encoding
-start          | 0                           | Index number of the first frame (0 means first frame in file)
-last           | 299                         | Index number of the last frame
-size           | 352 288 176 144             | Size of each input frame: 1. pixel width of the luminance component, 2. pixel height of the luminance component, 3. pixel width of the chrominance component, 4. pixel height of the chrominance component
-frame rate     | 30                          | Number of input frames per second
-tPyrLev        | 3                           | Levels of temporal subband decomposition
-searchrange    | 16                          | Maximum search range (in pixels) in the first temporal decomposition level. The search range is doubled with each decomposition
-maxsearchrange | 64                          | Upper limit for the search range

TABLE III: Parameter Settings for the H.264 AHM 2.0 Encoder
Parameter               | Value (CIF)
Input File              | "../Akiyo300_cif.yuv"
Frames To Be Encoded    | 300
Source Width            | 352
Source Height           | 288
Trace File              | "trace_enc.txt"
Recon File              | "trace_rec.yuv"
Output File             | "test.264"
Search Range            | 16
Number Reference Frames | 1
Restrict Search Range   | 2
RD Optimization         | 1
Context Init Method     | 1
Rate Control Enable     | 1
Rate Control Type       | 0
Bit rate                | 100 Kbps

B. Quality Measurement
The PSNR-Y is calculated as defined in [11]. In order to get a PSNR value for an entire sequence, the average of the PSNR-Y values of the individual frames is calculated. This is not the only way to get a value for an entire sequence; another method could be, for instance, to take the minimum of the individual PSNR-Y values (because a video sequence may be evaluated based on the worst part). PSNR is based on a distance between two images [derived from the metric mean square error (MSE)] and does not take into account any property of the human visual system (HVS).

IV. EXPERIMENTAL RESULTS
In the experiment, the performance of the codecs is checked on a rate-distortion basis. It is clear that due to the size of the experiments and space constraints, not all results can be presented. A subset of the results is given in Table IV and Table V.

The coding efficiency of MC-EZBC is compared with H.264 for different sequences at different bit rates. MC-EZBC is a fully scalable coding architecture which utilizes MCTF and wavelet filtering. The software available for download at the website of CIPR, RPI [7] is used for testing the video material. On the other hand, H.264 has a non-scalable coding structure, and all the tests were done on a LINUX-based personal computer (AMD Turion 64x2 processor at 1.9 GHz with 1 GB RAM) with Ubuntu 9.04 installed and no other software running in the background.

The measurement results of both codecs can provide an assessment of the coding efficiency of current wavelet-based codecs compared to state-of-the-art single-layered codecs. A first general remark is the fact that, for certain bit rates, there are no measurement points for MC-EZBC: MC-EZBC is not able to encode that particular video sequence at such low target bit rates. In the case of low bit rates, a codec may also decide to skip some frames.

TABLE IV: Average coding gain of MC-EZBC and H.264 between 500-1000 Kbps
Video Codec | Foreman (YSNR) dB
MC-EZBC     | 37.90
H.264       | 38.06
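As a reference for the quality metric used in these measurements, here is a minimal sketch of the per-frame PSNR-Y computation and the sequence-level averaging described in the quality-measurement subsection above (it assumes 8-bit luminance frames held as NumPy arrays; this is an illustration, not the exact code used for the reported results):

```python
import numpy as np

def psnr_y(reference, decoded, max_value=255.0):
    """PSNR of one luminance frame: 10 * log10(max^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

def sequence_psnr_y(ref_frames, dec_frames):
    """Average (and minimum) PSNR-Y over all frames of a sequence."""
    values = [psnr_y(r, d) for r, d in zip(ref_frames, dec_frames)]
    return sum(values) / len(values), min(values)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = [rng.integers(0, 256, (288, 352), dtype=np.uint8) for _ in range(10)]
    dec = [np.clip(f + rng.normal(0, 3, f.shape), 0, 255).astype(np.uint8) for f in ref]
    avg, worst = sequence_psnr_y(ref, dec)
    print(f"average PSNR-Y = {avg:.2f} dB, worst frame = {worst:.2f} dB")
```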
For video sequences with a higher amount of movement (FO), the results indicate that on average H.264 JM 6.0 performs significantly better than MC-EZBC in terms of PSNR-Y at almost all bit rates. It is also observed that H.264 outperforms MC-EZBC throughout the bit-rate range for high-complexity content.

TABLE V: Subset of Quality Measurements for Video CIF Sequences
Bit Rate (Kbps) | MC-EZBC (Foreman Sequence) | H.264 (Foreman Sequence)
100             | 27.86                      | 30.33
400             | 34.88                      | 35.73
1000            | 39.12                      | 39.30

V. CONCLUSION
In this paper, an overview was given of the rate-distortion performance of two state-of-the-art video codec technologies in terms of YSNR. From the above results it is clear that the tools incorporated in the H.264 standard outperform MC-EZBC, although at around 1000 Kbps the performance of MC-EZBC is comparable with that of H.264 for high-complexity sequences.

REFERENCES
[1] M. Ghanbari, Standard Codecs: Image Compression to Advanced Video Coding, IEE Telecommunications Series, 2003.
[2] S. S. Tsai, Motion Information Scalability for Interframe Wavelet Video Coding, MS Thesis, National Chiao Tung University, Hsinchu, Taiwan, R.O.C., Jun. 2003.
[3] S. S. Tsai, Motion Information Scalability for Interframe Wavelet Video Coding, MS Thesis, National Chiao Tung University, Hsinchu, Taiwan, R.O.C., Jun. 2003.
[4] J. W. Woods and P. S. Chen, "Improved MC-EZBC with Quarter-pixel Motion Vectors", ISO/IEC/JTC1 SC29/WG11 doc. no. m8366, Fairfax, May 2002.
[5] T. Wiegand, G. Sullivan and A. Luthra, "Overview of the H.264/AVC Video Coding Standard", IEEE Trans. on CSVT, vol. 13, pp. 560-576, July 2003.
[6] I. E. G. Richardson, H.264 and MPEG-4 Video Compression, Hoboken, NJ: Wiley, 2003.
[7] S. T. Hsiang and J. W. Woods, "Embedded Video Coding using Invertible Motion Compensated 3-D Subband/Wavelet Filter Bank," Signal Process.: Image Communication, vol. 16, pp. 705-724, May 2001.
[8] T. Wiegand, H. Schwarz, A. Joch, F. Kossentini, G. J. Sullivan, "Rate-Constrained Coder Control and Comparison of Video Coding Standards", IEEE Trans. Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 688-703, July 2003.
[9] H.264/AVC Reference Software [Online]. Available: http://iphome.hhi.de/suehring/tml/download/
[10] Proposed Draft Description of Rate Control on JVT Standard, ISO/IEC JTC1/SC29/WG11 and ITU-T SG16/Q.6, JVT document JVT-F086, Dec. 2002.
[11] P. Chen, Fully Scalable Subband/Wavelet Coding, PhD Thesis, Rensselaer Polytechnic Institute, Troy, New York, May 2003.
[8] MC-EZBC Software: www.cipr.rpi.edu/~golwea/mc_ezbc.htm
FPGA BASED IMPLEMENTATION OF IIR FILTERS

¹Anup Saha, ²Saikat Karak, ³Surajit Kangsabanik, and ⁴Joyita RoyChowdhury
Department of Electronics and Communication Engineering
4th Year, MCKV Institute of Engineering
¹anupsaha.0733@gmail.com, ²karaksaikat@yahoo.com, ³surajitkangsabanik@gmail.com, ⁴joyitaroychowdhory@yahoo.in

Abstract: Digital filtering techniques are commonly implemented using general-purpose digital signal processing chips for audio applications, while special-purpose digital filtering algorithms are designed on ASICs for higher bit rates. This paper describes the implementation of IIR filter algorithms based on field programmable gate arrays (FPGAs). The IIR filter design shows a significant reduction in the computational complexity required to achieve a given frequency response as compared to an FIR filter for the same response. FPGA-based implementation allows higher sampling rates than are available in traditional DSP chips. It offers low cost along with flexibility in design in comparison to an ASIC. It follows a pipeline architecture that gives us the advantages of parallel processing. We have observed and compared the filtering characteristics of an IIR filter in direct form-2 realization using MATLAB by altering the bit length and also the order. We have implemented the digital filter on a Xilinx Spartan 3E kit using VHDL. Since FPGA architectures are in-system programmable, the configuration of the device may be changed to implement different functionality as per requirement. Our work illustrates that the FPGA approach is both flexible and superior to traditional approaches.
Keywords: ASIC, FPGA, IIR, FIR, VHDL, Pipeline Architecture, Xilinx Spartan 3E

I. INTRODUCTION
A filter is used to modify an input signal in order to facilitate further processing. A digital filter works on a digital input (a sequence of numbers, resulting from sampling and quantizing an analog signal) and produces a digital output. According to Dr. U. Meyer-Baese [1], "the most common digital filter is the Linear Time-Invariant (LTI) filter". Designing an LTI filter involves arriving at the filter coefficients which, in turn, represent the impulse response of the filter design. These coefficients, in linear convolution with the input sequence, will result in the desired output; the linear convolution process can be represented as in [2]. The most common approaches to the implementation of digital filtering algorithms are digital signal processing chips for audio applications and application-specific integrated circuits (ASICs) for higher rates.
This paper describes the implementation of IIR digital filtering algorithms on field programmable gate arrays (FPGAs). Recent advancements in FPGA technology have enabled these devices to be applied to a variety of applications traditionally reserved for ASICs. FPGAs are well suited for data-path designs, such as those encountered in digital filtering applications. The advantages of the FPGA approach to digital filter implementation include higher sampling rates than those available from traditional DSP chips [2], lower costs than an ASIC for moderate-volume applications, and more software flexibility than the alternate approaches. In particular, multiple multiply-accumulate (MAC) units may be implemented on a single FPGA, which provides comparable performance to general-purpose architectures which have a single MAC unit.
In comparison to an FIR filter [3], an IIR filter uses fewer MAC units to achieve the same frequency response, resulting in a lower memory requirement and less computational complexity. The configuration of the FPGA device may be changed to implement alternate filtering operations, such as lattice filters and gradient-based adaptive filters, or something entirely different, only by altering the software. In our project we have implemented a digital IIR filter using an FPGA. IIR systems have an impulse response function that is non-zero over an infinite length of time. This is in contrast to finite impulse response (FIR) filters [4], which have fixed-duration impulse responses. To obtain a similar response, an IIR filter requires a lower order compared to an FIR filter. The IIR filter is one of the digital filters that is used mostly in audio signal processing. One good application of IIR filter technology is the generation and recovery of dual-tone multi-frequency (DTMF) signals used by Touch-Tone telephones.

The rest of the paper is organized as follows: Section II describes related works and Section III deals with the proposed architecture. Our scheme is evaluated by results obtained from extensive simulation in Section IV. Finally, we conclude in Section V.
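To illustrate the order advantage of IIR over FIR mentioned above, the following sketch compares the filter order needed to meet one and the same low-pass specification (the specification values are arbitrary illustrations, and the sketch assumes scipy is available):

```python
from scipy.signal import ellipord, kaiserord

# One common specification (frequencies normalized to the Nyquist frequency):
# pass-band edge 0.2, stop-band edge 0.25, 0.5 dB ripple, 40 dB attenuation.
wp, ws = 0.2, 0.25
rp, rs = 0.5, 40.0

# Minimum IIR (elliptic) order meeting the specification
iir_order, _ = ellipord(wp, ws, rp, rs)

# FIR (Kaiser-window) length estimate for the same transition width and attenuation
fir_taps, _ = kaiserord(rs, ws - wp)

print(f"elliptic IIR order: {iir_order}")   # a single-digit order
print(f"Kaiser FIR taps:    {fir_taps}")    # on the order of a hundred taps
```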
II. RELATED WORKS
Customized VLSI chips influenced most of the earlier research implementing digital filters. The architecture of these filters is largely determined by the target application. Typical DSP chips like Texas Instruments' TMS320, Freescale's MSC81xx, Motorola's 56000 and Analog Devices' ADSP-2100 family efficiently perform filtering operations in the audio range. For the higher frequency domain, CMOS and Bi-CMOS technology is used. There are some drawbacks in the customized chips. The biggest shortcoming is low flexibility, as they are application specific. Also, the lack of adaptability in these chips is severe: typical custom approaches do not allow the function of a device to be modified during evaluation, for example for fault correction. The FPGA approach is therefore necessary to provide design freedom. Many of the popular FPGAs are in-system programmable, which allows modification of the operation using simple programming. But for filtering purposes FIR [3] filters have been commonly used. In this particular work, IIR filters are implemented as they require fewer calculations and have a lower memory requirement. IIR filters also outperform FIRs [5] for narrow transition bands. They can also provide a better approximation of traditionally analog systems in digital applications than competing filter types. IIR filters are mainly used in audio applications such as speakers and sound processing functions. In this work, the XILINX SPARTAN 3E series is used for implementing various digital filtering algorithms. XILINX SPARTAN 3E consists of reconfigurable combinational logic blocks with multiple inputs and outputs, a router or switching matrix for connections, and buffers.

III. PROPOSED ARCHITECTURE
IIR filter implementations on an FPGA board illustrate that the FPGA approach is both flexible and provides performance superior to traditional approaches. Because of the programmability of this technology, the examples in this paper can be extended to provide a variety of other high-performance IIR filter realizations. Using powerful computer-based software tools to perform the repetitive calculations in the filter design process enables a designer to achieve the best design within the shortest time. While implementing a filter in hardware, the biggest challenge is to achieve the specified system performance at minimum hardware cost. In this paper we achieve this goal by designing the digital filter, which also gives a better noise margin and less ageing of components in comparison to an analog filter. One of the hurdles is to understand, estimate and overcome, where possible, the effects of using a finite word length to represent the infinite word length coefficients. Selecting a non-optimized word length [6] can result in the filter transfer function being different from what is expected. The effects of using finite word length representation can be minimized by analytical or qualitative methods, or simply by choosing to implement higher-order filters in cascaded or parallel form.
Digital filters [7] are often described and implemented in terms of the difference equation that defines how the output signal is related to the input signal. We have modeled the equation as

y[n] = \frac{1}{a_0}\big(b_0 x[n] + b_1 x[n-1] + \cdots + b_P x[n-P] - a_1 y[n-1] - a_2 y[n-2] - \cdots - a_Q y[n-Q]\big)     (1)

Where:
• P is the feed-forward filter order
• b_i are the feed-forward filter coefficients
• Q is the feedback filter order
• a_i are the feedback filter coefficients
• x[n] is the input signal
• y[n] is the output signal.

Now, from the above equation, we modeled the transfer function of the IIR filter as

\frac{Y(z)}{X(z)} = H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}     (2)

For the hardware representation of the digital filter we have modeled the transfer function using an adder, a multiplier and a delay unit.

Figure 1: Direct Form-2 Structure of Digital Filter

A basic IIR filter consists of 3 main blocks: (i) adder, (ii) multiplier, (iii) delay unit.

A. Implementation of Adder
We have implemented this system using a serial adder. A serial adder is a binary adder that adds the two numbers
bit-pair-wise. Each bit pair is added in a single clock pulse. The carry of each pair is propagated to the next pair.

B. Implementation of Multiplier
The multiplier has been configured to perform multiplication of signed numbers in two's complement notation. We have used signed multiplication, where an n-bit by n-bit multiplication takes place and results in a 2n-bit value.

C. Implementation of Delay Unit
We have used a shift register for the purpose of delay. A shift register is a group of flip-flops set up in a linear fashion with their inputs and outputs connected together in such a way that the data is shifted from one device to another when the circuit is active. (i) A shift register provides the data movement function. (ii) A shift register "shifts" its output once every clock cycle.

IV. SIMULATION RESULT
To check the response of the proposed filter we have used the Filter Design and Analysis Tool (FDA Tool), which is a graphical user interface (GUI) available in the Signal Processing Toolbox of MATLAB for designing and analyzing filters. It takes the filter specifications as inputs. Table 1 shows the specifications of an IIR low-pass elliptical filter of order 6.

Table 1: IIR filter specifications
Filter performance parameter | Value
Pass band ripple             | 0.5 dB
Pass band frequency          | 11000 Hz
Stop band frequency          | 12000 Hz
Stop band attenuation        | 35 dB
Sampling frequency           | 48000 Hz

A. Software Simulation
The sampling frequency is chosen as 4 times the stop band frequency and the filter has a steep transition band with a width of 1000 Hz. These specifications are fed as inputs to the FDA tool in MATLAB R2009a. The tool performs the filter design calculations using double-precision floating-point numeric representation and displays the response (pass band and stop band) of an IIR elliptical low-pass filter of order 6. Figure 2 shows the filter design window of the FDA tool after completion of the design process.

Figure 2: Filter design using MATLAB FDA tool
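A software cross-check of this design flow in Python (assuming scipy is available; the specification numbers are those of Table 1, but this is an illustrative sketch, not the MATLAB or VHDL code used in the paper). It designs the order-6 elliptic low-pass filter and also shows a literal direct form-2 recursion of the kind described by equations (1)-(2) and Figure 1:

```python
import numpy as np
from scipy import signal

FS = 48000.0  # sampling frequency from Table 1

# Order-6 elliptic low-pass: 0.5 dB ripple, 35 dB attenuation, 11 kHz pass-band edge
b, a = signal.ellip(6, 0.5, 35, 11000, btype="low", fs=FS)

def direct_form_2(b, a, x):
    """Direct form-2 realization of one IIR section (same recursion as Fig. 1):
    w[n] = x[n] - a1*w[n-1] - ... ;  y[n] = b0*w[n] + b1*w[n-1] + ..."""
    b = np.asarray(b) / a[0]
    a = np.asarray(a) / a[0]
    w = np.zeros(max(len(a), len(b)) - 1)        # delay line: w[n-1], w[n-2], ...
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        wn = xn - np.dot(a[1:], w)               # feedback part
        y[n] = b[0] * wn + np.dot(b[1:], w)      # feed-forward part
        w = np.concatenate(([wn], w[:-1]))       # shift the delay line
    return y

if __name__ == "__main__":
    x = np.random.default_rng(0).standard_normal(2048)
    # The hand-written direct form-2 matches scipy's reference filtering routine.
    print(np.allclose(direct_form_2(b, a, x), signal.lfilter(b, a, x)))
```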
We have designed the IIR filter in direct form-2. Using VHDL we have simulated it and downloaded it to the Xilinx Spartan 3E kit. The response obtained by simulating the VHDL code is shown in Figure 3.

Figure 3: The simulation output of the IIR filter in Xilinx ISE 7.01

The coding scheme that we are using is VHDL (Very High Speed Integrated Circuit Hardware Description Language). Since we have designed the filter in the digital domain, to accommodate it in a currently existing analog system we have to add an A/D converter before the system and a D/A converter after the system.

B. Hardware Implementation
We have implemented the digital IIR filter using the FPGA-based Xilinx Spartan 3E kit, which consists of an interior array of 64-bit CLBs surrounded by a ring of 64 input-output interface blocks. The FPGA architecture is shown in Figure 4.

Figure 4: Internal Block Diagram of FPGA Architecture

V. CONCLUSION
We have implemented the IIR filter in an FPGA and our results show an improvement over existing filter design architectures. In future we will implement our scheme for real-time applications.

REFERENCES
[1] U. Meyer-Baese, Digital Signal Processing with Field Programmable Gate Arrays, Second Edition, Springer, p. 109.
[2] U. Meyer-Baese, Digital Signal Processing with Field Programmable Gate Arrays, Second Edition, Springer, p. 110.
[3] Dusan M. Kodek, "Design of Optimal Finite Word Length FIR Digital Filters Using Integer Programming Techniques", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-28, No. 3, June 1980.
[4] Wonyong Sung and Ki-Il Kum, "Simulation-Based Word-Length Optimization Method for Fixed-Point Digital Signal Processing Systems", IEEE Transactions on Signal Processing, Vol. 43, No. 12, December 1995.
[5] X. Hu, L. S. DeBrunner, and V. DeBrunner, "An efficient design for FIR filters with variable precision", Proc. 2002 IEEE Int. Symp. on Circuits and Systems, pp. 365-368, vol. 4, May 2002.
[6] Y. C. Lim, R. Yang, D. Li, and J. Song, "Signed-power-of-two term allocation scheme for the design of digital filters," IEEE Transactions on Circuits and Systems II, vol. 46, pp. 577-584, May 1999.
[7] S. C. Chan, W. Liu, and K. L. Ho, "Multiplier-less perfect reconstruction modulated filter banks with sum-of-powers-of-two coefficients," IEEE Signal Processing Letters, vol. 8, no. 6, pp. 163-166, June 2001.
Enhanced Clocking Rule for A5/1 Encryption Algorithm
Rosepreet Kaur Bhogal, ECE Dept., rosepreetbhogal155-2006@lpu.in ,
Nikesh Bajaj, Asst. Prof., ECE Dept., nikesh.14730@lpu.co.in, Lovely Professional University -India

Abstract—GSM (Global System for Mobile communications) uses various encryption algorithms such as A5/1/2/3. These are used to encrypt the information transmitted from the mobile station to the base station during communication. A5/1 is regarded as a strong algorithm, but it exhibits some weaknesses, as shown by the attacks that have been carried out on it. A5/1 has been attacked through its linear complexity, its clocking taps, etc. So, in this paper a concept is proposed to improve the A5/1 encryption algorithm to some extent by improving the clocking mechanism of the registers present in A5/1; the modified version of A5/1 is fast and easy to implement, which makes it suitable for the future.

Index Terms—GSM, encryption, A5/1 stream cipher, clock controlling unit, correlation

I. INTRODUCTION
In wireless communication technology, wireless communication is effective and convenient for sharing information [7]. GSM is a very good example of such wireless communication. But this information should be secure, meaning nobody, such as an eavesdropper, should be able to interfere. So, to protect our information, cryptography plays a vital role. However, when sending information from the mobile station to the base station over the air interface, there is a serious security threat between the communicating parties [10]. Then the question arises of how to protect the communication. For this, encryption algorithms of the A5/x series are used in GSM. These algorithms are used to encrypt voice and data over the GSM link. Among the various implementations, A5/0 has no encryption, A5/1 is the strong version, A5/2 is a weaker version targeting markets outside Europe, and finally A5/3, based on block ciphering, is a strong version created as part of the 3rd Generation Partnership Project (3GPP) [5].

In this paper we explore A5/1, which is the strong version but still exhibits weaknesses, as demonstrated by the attacks carried out on it. A5/1 is based on stream ciphering [1], which is very fast, performing a bit-by-bit XOR to produce the result. A simple encryption can be performed by XORing a plaintext bit with a key bit that is kept secret; whatever results is called the cipher text, and the reverse process is called decryption.

A5/1 is built using linear feedback shift registers (LFSRs). The initial value of an LFSR is called the seed; because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current or previous state. However, for an LFSR a well-chosen feedback function can produce a sequence of bits which appears random and which has a long cycle [2].

In cryptography, correlation attacks are a class of known-plaintext attacks for breaking stream ciphers whose key stream is generated by combining the output of several linear feedback shift registers using a Boolean function. Correlation attacks [6] exploit a statistical weakness that arises from a poor choice of the Boolean function – it is possible to select a function which avoids correlation attacks, so this type of cipher is not inherently insecure. It is simply essential to consider susceptibility to correlation attacks when designing stream ciphers of this type.

In this paper a new clocking mechanism is proposed to avoid the correlation attack, in place of the m-rule, i.e. the majority rule, used by the A5/1 stream cipher. The paper is organized as follows. In Section 2 a description of the A5/1 stream cipher is given. In Section 3 the correlation attack is analyzed. In Section 4 the proposed modified structure of the A5/1 key stream generator is presented. At last the conclusion is given.

II. DESCRIPTION OF A5/1
A5/1 is a stream cipher [11] that provides a key stream, and is therefore called a key stream generator. It is made up of three linear feedback shift registers of lengths 19, 22 and 23 used to generate a sequence of binary bits. GSM conversations are in the form of frames of length 228 bits, i.e. 114 bits for each direction, for encrypting/decrypting the data [4]. A5/1 is initialized with a 64-bit key together with a publicly known 22-bit frame number. It uses linear feedback shift registers R1, R2 and R3 with feedback taps (13, 16, 17, 18) for R1, (20, 21) for R2 and (7, 20, 21, 22) for R3, respectively. Each register is clocked using a rule called the majority rule. The clocking taps, considered as A, B and C for registers R1, R2 and R3, are R1(8), R2(10) and R3(10). Before a register is clocked, the feedback is calculated using a linear operator, i.e. XOR. The register is shifted by one bit to the right (discarding the rightmost bit) and the bit produced by the feedback is stored in the leftmost location of the linear feedback shift register. This cycle is repeated up to 64 times. The clocking is done on the basis of the clocking rule: the registers are clocked irregularly according to the majority rule. The majority rule uses the three clocking bits A, B, C of the LFSRs. Among the clocking bits, if two or more are 0, then m = 0, and every register whose clocking bit matches m is clocked. Similarly, if two or more
Similarly, if two or more of the clocking bits are 1, then m = 1, and the registers whose clocking bits match m are clocked. At each clocking step the LFSRs each generate one bit, and these bits are combined by a linear function. In A5/1 the probability of an individual LFSR being clocked is 3/4. The clocking bit m is defined by the Boolean expression m = A·B ⊕ B·C ⊕ A·C. The structure of the A5/1 stream cipher is shown in Figure 1 and the possible clocking cases are listed in Table 1.

Figure 1: Structure of the A5/1 stream cipher

Table 1: Possible cases of A5/1 registers to be clocked

  Clocking bits (A,B,C)   Clocking bit m (m-rule)   Register(s) clocked
  (0,0,0)                 0                         R1, R2, R3
  (0,0,1)                 0                         R1, R2
  (0,1,0)                 0                         R1, R3
  (0,1,1)                 1                         R2, R3
  (1,0,0)                 0                         R2, R3
  (1,0,1)                 1                         R1, R3
  (1,1,0)                 1                         R1, R2
  (1,1,1)                 1                         R1, R2, R3

As Table 1 shows, under the m-rule each register is clocked in every cycle with probability 3/4 [8], i.e. each output bit yields some information about the state of the LFSRs [3]. Because of this, the whole scheme falls to a correlation attack from which state bits can be determined.
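The majority-rule clocking of Table 1 can be summarised in a few lines of code. The following Python sketch is ours and purely illustrative: register lengths, feedback taps and clocking taps follow the description above, the right-shift convention of the text is used, the initialisation with the key and frame number is omitted, and the way the output bits are combined is simplified.

```python
# Illustrative sketch of A5/1-style majority-rule clocking (Table 1).
# Example initial states are arbitrary; key/frame loading is not shown.

def majority(a, b, c):
    # m = A.B xor B.C xor A.C  (equals the majority value of the three bits)
    return (a & b) ^ (b & c) ^ (a & c)

class LFSR:
    def __init__(self, length, taps, clock_tap, state):
        self.taps, self.clock_tap = taps, clock_tap
        self.reg = [(state >> i) & 1 for i in range(length)]

    def clock_bit(self):
        return self.reg[self.clock_tap]

    def step(self):
        feedback = 0
        for t in self.taps:          # XOR of the feedback taps
            feedback ^= self.reg[t]
        out = self.reg[-1]           # rightmost bit is shifted out
        self.reg = [feedback] + self.reg[:-1]   # shift right, feedback enters leftmost cell
        return out

R1 = LFSR(19, (13, 16, 17, 18), 8, 0x5A5A5)
R2 = LFSR(22, (20, 21), 10, 0x3C3C3C)
R3 = LFSR(23, (7, 20, 21, 22), 10, 0x77777)

def keystream_bit():
    bits = (R1.clock_bit(), R2.clock_bit(), R3.clock_bit())
    m = majority(*bits)
    out = 0
    for r, b in zip((R1, R2, R3), bits):
        if b == m:                   # only registers agreeing with the majority are clocked
            out ^= r.step()
        else:
            out ^= r.reg[-1]         # a stalled register still contributes its current output bit
    return out

stream = [keystream_bit() for _ in range(114)]   # one GSM burst direction
```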

III. ANALYSIS OF THE CORRELATION ATTACK

Analyzing a stream cipher is easier than analyzing a block cipher. Two main factors are considered when designing any stream cipher: correlation and linear complexity. Linear complexity is important because the Berlekamp–Massey algorithm can examine the state of the LFSRs, meaning that some of the LFSR bits are related to the generated output sequence. The linear complexity should be large for more security, but a large linear complexity alone does not indicate a secure cipher; correlation immunity is also needed, and a higher linear complexity is obtained by combining the output sequences in a more non-linear manner. Insecurity arises when the output of the combining function is correlated with the output of an individual LFSR, because a correlation attack then exists: by observing the output sequence an attacker obtains information about part of the internal state and, using it, can determine the other internal states, so the entire stream cipher generator is broken. Coming to the main point, the A5/1 stream cipher also uses three LFSRs and its clocking taps look strong, but it has been shown to be cryptographically weak by the attacks mounted on it. The output of the generator equals the output of LFSR2 about 75% of the time; if the feedback is known, we can guess the initial bits of LFSR2, generate its output sequence, and count how many times the LFSR2 output agrees with the output of the generator. If the two sequences agree only about 50% of the time the guess was wrong, and if they agree about 75% of the time the guess was right. Similarly, the output sequence agrees about 75% of the time with LFSR3, and this correlation means the cipher can be cracked easily by a known-plaintext attack.

It is clear that the basic idea behind A5/1 is good, and it passes statistical tests such as the NIST test suite [12], but it still has the weakness that the LFSR lengths are short enough to make cryptanalysis feasible. A5/1 should be made as long as possible for more security.
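The guess-and-count test described above reduces to comparing agreement rates. A minimal sketch (ours, for illustration only; it shows the acceptance test, not the full attack or the generation of candidate LFSR2 streams):

```python
# Agreement test used in a basic correlation attack: a guessed LFSR initial
# state is accepted when its output agrees with the generator output far
# more often than the 50% expected for a wrong guess.

def agreement(candidate_bits, generator_bits):
    matches = sum(c == g for c, g in zip(candidate_bits, generator_bits))
    return matches / len(generator_bits)

def plausible_guess(candidate_bits, generator_bits, threshold=0.7):
    # ~0.75 agreement is expected for the right guess, ~0.5 for a wrong one
    return agreement(candidate_bits, generator_bits) >= threshold
```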
Figure 2: Modified stream cipher

2
A. Clock controlling unit

In the new clock control mechanism each register has one clocking tap: bit 8 for R1, bit 10 for R2 and bit 10 for R3. The clocking bit is generated by the Boolean expression written below; an AND gate is used in it, due to which the linear complexity also increases. In the following, ¬ denotes NOT and ⊕ denotes XOR:

y = ¬A·(B ⊕ C) + A·¬(B ⊕ C)    (1)

The expression above is built from different gates and takes the clocking bits A, B and C of the respective registers as inputs. In each cycle, every register whose clocking tap agrees with y from equation (1) is clocked and shifted. With A, B and C being the clocking taps of R1, R2 and R3 respectively, Table 2 shows all possible clocking combinations.

Table 2: Possible cases of the modified stream cipher clocking the registers

  Clocking bits (A,B,C)   Clocking bit generated (y)   Register(s) clocked
  (0,0,0)                 0                            R1, R2, R3
  (0,0,1)                 1                            R3
  (0,1,0)                 1                            R2
  (0,1,1)                 0                            R1
  (1,0,0)                 1                            R1
  (1,0,1)                 0                            R2
  (1,1,0)                 0                            R3
  (1,1,1)                 1                            R1, R2, R3

As the table shows, in each cycle at least one register is clocked, otherwise the generator would stall in the position where nothing is clocked; the mechanism above was designed with this problem in mind. Consider the cases. In case 1 (A=0, B=0, C=0), equation (1) gives y=0, and every register whose clocking bit agrees with this value is clocked; R1, R2 and R3 all agree, so all three registers are clocked and shifted to the right (discarding the rightmost bit). In case 2 (A=0, B=0, C=1), y=1, so R3 is clocked and shifted. In case 3 (A=0, B=1, C=0), y=1, so R2 is clocked and shifted. In case 4 (A=0, B=1, C=1), y=0, so R1 is clocked and shifted. In case 5 (A=1, B=0, C=0), y=1, so R1 is clocked and shifted. In case 6 (A=1, B=0, C=1), y=0, so R2 is clocked and shifted. In case 7 (A=1, B=1, C=0), y=0, so R3 is clocked and shifted. Finally, in case 8 (A=1, B=1, C=1), y=1, so all three registers are clocked and shifted.

Note the comparison of the possible clocking outcomes in Tables 1 and 2. In Table 1 at least two registers are shifted in each cycle, and each register is clocked with probability 3/4; in Table 2 at least one register is shifted in each cycle and the clocking probability is reduced to 1/2. The output bits obtained are then unrelated to the state of the LFSRs for 6 clock cycles.
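A compact way to check Table 2 and the clocking probabilities is to enumerate all eight clocking-bit combinations. The sketch below is ours, for illustration: it evaluates y from equation (1), applies the "clock the registers that agree" rule, and counts how often each register is clocked under the majority rule and under the modified rule.

```python
from itertools import product

def m_rule(a, b, c):
    # original A5/1 majority rule: m = A.B xor B.C xor A.C
    return (a & b) ^ (b & c) ^ (a & c)

def y_rule(a, b, c):
    # modified rule of equation (1): y = notA.(B xor C) + A.not(B xor C)
    return ((1 - a) & (b ^ c)) | (a & (1 - (b ^ c)))

def clocked(rule, a, b, c):
    v = rule(a, b, c)
    # a register is clocked when its clocking bit agrees with the rule output
    return [bit == v for bit in (a, b, c)]

for rule, name in ((m_rule, "majority rule"), (y_rule, "modified rule")):
    counts = [0, 0, 0]
    for a, b, c in product((0, 1), repeat=3):
        for i, is_clocked in enumerate(clocked(rule, a, b, c)):
            counts[i] += is_clocked
    print(name, [n / 8 for n in counts])   # prints 0.75 per register vs. 0.5 per register
```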
V. CONCLUSION

The A5/1 key stream generator is easy to implement and is an efficient encryption algorithm used in GSM communication. Nevertheless, it exhibits weaknesses: the LFSR lengths are short and it is vulnerable to the basic correlation attack discussed in Section 3. After analysing these issues, the possibility of a correlation attack has been decreased. A modified A5/1 structure, which is easy to implement and fast, has been given in Section 4, and its clocking mechanism has been compared with the one based on the majority rule; it has been shown that the encryption algorithm based on the m-rule is insecure. The enhancement proposed in the new clock mechanism increases the level of security and decreases the possibility of the correlation attack: the probability that a linear feedback shift register is clocked is reduced from 3/4 to 1/2, which hinders identification of the state from the output sequence, i.e. it gives output bits that are unrelated to the internal state for up to 6 cycles. All of this has been shown by the modified structure of the A5/1 stream cipher in Section 4.

ACKNOWLEDGMENT

This work is part of the completion of a master's dissertation. Many people contributed in assorted ways to the work and to the making of this paper and deserve special mention; it is a pleasure to convey my gratitude to them all in this humble acknowledgment. Thanks to my guide, Mr. Nikesh Bajaj, for his supervision, advice and guidance at every stage of this paper, as well as for giving me extraordinary experiences throughout the work. Above all, and most needed, he provided unflinching encouragement and support in various ways; his intuition makes him a constant oasis of ideas and passion in electronics, which exceptionally inspired and enriched my growth as a student. Last but not least, I would like to thank my fellow students for the stimulating discussions and the successful realization of this work.

REFERENCES

[1] E. Barkan, E. Biham and N. Keller, "Instant ciphertext-only cryptanalysis of GSM encrypted communication," Advances in Cryptology – CRYPTO 2003.
[2] P. Ekdahl, "On LFSR based stream ciphers: analysis and design."
[3] M. Sharaf, H. A. K. Mansour, H. H. Zayed and M. L. Shore, "A complex linear feedback shift register design for the A5 keystream generator."
[4] D. Margrave, "GSM Security and Encryption," George Mason University.
[5] O. Dunkelman, N. Keller and A. Shamir, "A Practical-Time Attack on the A5/3 Cryptosystem Used in Third Generation GSM Telephony."
[6] G. Rose, "A précis of the new attacks on GSM encryption," QUALCOMM Australia.
[7] P. Bouška and M. Drahanský, "Communication security in GSM networks," Faculty of Information Technology, Brno University of Technology.
[8] M. Ahmad and Izharuddin, "Enhanced A5/1 cipher with improved linear complexity."
[9] M. Peuhkuri, "Mobile networks security," TKL, 2008-04-22.
[10] N. Komninos, B. Honary and M. Darnell, "Security enhancements for A5/1 without losing hardware efficiency in future mobile systems."
[11] C.-C. Lo and Y.-J. Chen, "Stream Ciphers for GSM Networks," Institute of Information Management, National Chiao-Tung University.
[12] http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html

Rosepreet Kaur Bhogal is pursuing the master's degree in signal processing at Lovely Professional University, Punjab, India. She is currently doing her dissertation under the supervision of Mr. Nikesh Bajaj, Assistant Professor in the Electronics Department. Her research interests include different aspects of cryptography, such as cryptographic assumptions and the encryption algorithms used in GSM.

Nikesh Bajaj received his bachelor's degree in Electronics & Telecommunication from the Institute of Electronics and Telecommunication Engineers, and his master's degree in Communication & Information Systems from Aligarh Muslim University, India. He is currently working as an Assistant Professor in the Department of ECE at LPU. His research interests include cryptography, cryptanalysis, and signal & image processing.
An Application of Kalman Filter in State Estimation of a Dynamic System

Vishal Awasthi (1), Krishna Raj (2)

(1) Member IETE, Lecturer, Deptt. of Electronics & Comm. Engg., UIET, CSJM University, Kanpur-24, U.P. (email: awasthiv@rediffmail.com)
(2) Fellow IETE, Associate Professor, Deptt. of Electronics Engineering, H.B.T.I., Kanpur-24, U.P. (email: kraj_biet@yahoo.com)

Abstract— Most wireless communication systems for indoor positioning and tracking may suffer from different error sources, including process errors and measurement errors. State estimation deals with recovering desired state variables of a dynamic system from available noisy measurements. A correct and accurate state estimation of a linear or non-linear system can be improved by selecting the proper estimation technique. Kalman filter algorithms are an often used technique that provides linear, unbiased and minimum-variance estimates of an unknown state vector. In this paper we try to bridge the gap between the Kalman Filter and its variant, the Extended Kalman Filter (EKF), by comparing their algorithms and their performance in the state estimation of a car moving under a constant force.

Index Terms— Stochastic filtering, Bayesian filtering, Adaptive filter, Unscented transform, Digital filters.

1. INTRODUCTION

In the area of telecommunications, signals are mixtures of different frequencies. The least squares method proposed by Carl Friedrich Gauss in 1795 was the first method for forming an optimal estimate from noisy data, and it provides an important connection between the experimental and theoretical sciences. Before Kalman, in the 1940s, Norbert Wiener proposed his famous filter, the Wiener filter, which was restricted to stationary scalar signals and noises; the solution obtained by this filter is not recursive and needs storage of the entire past observed data. In the early 1960s Kalman filtering theory, a novel recursive filtering algorithm, was developed by Kalman and Bucy, and it did not require the stationarity assumption [1], [2]. The Kalman filter is a generalization of the Wiener filter; its significance lies in its ability to accommodate vector signals and noises which may be non-stationary. The solution is recursive in that each updated estimate of the state is computed from the previous estimate and the new input data, so, contrary to the Wiener filter, only the previous estimate requires storage and the need to store the entire past observed data is eliminated. Most of the existing approaches need an a priori kinematic model of the target for the prediction; although such a predictor can successfully filter out the noisy measurement, its parameters might have to be changed for different dynamic targets.

Information is usually obtained in the form of measurements, and the measurements are related to the position of the object, which can be formulated by Bayesian filtering theory. Kalman filter theory is only applicable to linear systems, while in practice almost all dynamic systems (the relation between the state and the measurements) are nonlinear. The most celebrated and widely used nonlinear filtering algorithm is the extended Kalman filter (EKF), which is essentially a suboptimal nonlinear filter. The key idea of the EKF is to use the linearized dynamic model to calculate the covariance and gain matrices of the filter. The Kalman filter (KF) and the EKF are widely used in many engineering areas, such as aerospace, chemical and mechanical engineering. However, it is well known that both the KF and the EKF are not robust against modelling uncertainties and disturbances.

Kalman filtering is an optimal algorithm, widely applied in forecasting system dynamics and estimating an unknown state. Measurement devices are constructed in such a manner that the output data signals are proportional to certain variables of interest. Knowledge of the probability density function of the state conditioned on all available measurement data provides the most complete possible description of the state, but except in the linear Gaussian case it is extremely difficult to determine this density function [6]. To address this, several algorithms were proposed using parametric and non-parametric techniques, such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) respectively.

The unscented transformation (UT) is an elegant way to compute the mean and covariance accurately up to the second order (third for a Gaussian prior) of the Taylor series expansion. The low-order statistics of a random variable undergoing a non-linear transformation y = g(x) are obtained by generating and propagating sigma points through the nonlinear transformation:

Yi = g(Xi),   i = 0, …, 2zx    (1)

where zx is the dimension of x. Scaling parameters are used to control the distance between the sigma points and the mean.
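The sigma-point propagation of equation (1) can be illustrated with a short sketch. The code below is ours and follows the commonly used scaled sigma-point construction; the scaling constants alpha, beta and kappa, the function g and the example numbers are placeholder assumptions, not values from this paper.

```python
import numpy as np

def unscented_transform(mean, cov, g, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate the mean/covariance of x through y = g(x) via 2n+1 sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # sigma points X_0 .. X_2n around the mean; Y_i = g(X_i) as in equation (1)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    Y = np.array([g(x) for x in sigma])

    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    y_mean = wm @ Y                       # weighted mean of the propagated points
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff # weighted covariance of the propagated points
    return y_mean, y_cov

# example with a mildly nonlinear g (placeholder)
m, C = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
ym, yC = unscented_transform(m, C, lambda x: np.array([np.sin(x[0]), x[0] * x[1]]))
```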
In the presence of random disturbances (white noise), or when some system parameters change, the use of an adaptive and optimal controller becomes necessary [3], [4]. In this paper we choose to use the Kalman filter as a controller. This technique is based on the theory of Kalman filtering [5]; it transforms the Kalman filter into a Kalman controller. Simulation results show that the state estimation performance provided by the robust Kalman filter is higher than that provided by the EKF.

Recently, results on some new types of linear uncertain discrete-time systems have also been given. Yang, Wang and Hung presented a design approach for a robust Kalman filter for linear discrete time-varying systems with multiplicative noises [7]. Since the covariance matrices of the noises cannot be known precisely, Dong and You derived a finite-horizon robust Kalman filter for linear time-varying systems with norm-bounded uncertainties in the state matrix, the output matrix and the covariance matrices of the noises [8]. Based on these techniques, Zhu, Soh and Xie gave a robust Kalman filter design approach for linear discrete-time systems with measurement delay and norm-bounded uncertainty in the state matrix [9]. Hounkpevi and Yaz proposed a robust Kalman filter for linear discrete-time systems with sensor failures and norm-bounded uncertainty in the state matrix [10].

Currently many systems successfully use Kalman filter algorithms in diverse areas such as the processing of signals in mobile robots, GPS positioning based on neural networks [11], aerospace tracking [12], [13], underwater sonar and statistical quality control.

In this paper the state of a car moving with a constant force has been estimated through the Kalman filter and the Extended Kalman filter. The dynamic model of the system is highly nonlinear, hence we first linearized the nonlinear system equations using the EKF algorithm and then performed the time-domain analysis of the dynamic model using a sampling time of 10 ms.

2. TECHNOLOGICAL DEVELOPMENT OF THE KALMAN FILTER

A stochastic process is a family of random variables indexed by a parameter and defined on a common probability space. Bayesian models are a general probabilistic approach for estimating an unknown probability density function recursively over time using incoming measurements and a mathematical process model [14].

The Kalman filter is an optimal observer in the sense that it produces unbiased and minimum-variance estimates of the states of the system, i.e. the expected value of the error between the filter's estimate and the true state of the system is zero and the expected value of the squared error between the real and estimated states is minimum.

2.1 WIENER FILTER

Wiener was a pioneer in the study of stochastic and noise processes [15] who proposed a class of optimum discrete-time filters during the 1940s, published in 1949. Its purpose is to reduce the amount of noise present in a signal by comparison with an estimate of the desired noiseless signal. The Wiener process (often called Brownian motion) is one of the best known continuous-time stochastic processes, with stationary, statistically independent increments. The Wiener filter uses the mean squared error as a cost function and a steepest-descent algorithm for recursively updating the weights. The main problem with this algorithm is the requirement of a known input correlation matrix and a known cross-correlation vector between the input and the desired response; unfortunately, both are unknown.

2.2 DISCRETE KALMAN FILTER

A state estimate is represented by a probability density function (pdf), and the description of the full pdf is required for the optimal (Bayesian) solution; but the form of the pdf is not restricted and hence it cannot, in general, be represented using a finite number of parameters [14], [16]. To solve this problem R. E. Kalman designed an optimal state estimator for linear estimation of dynamic systems using the state-space concept [17], one that has the ability to adapt itself to non-stationary environments. It supports estimation of past, present and even future states, and it can do so even when the precise nature of the modeled system is unknown. A set of mathematical equations provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean of the squared error.

The filter is very powerful in several respects:

- The Kalman filter is an efficient recursive filter algorithm that estimates the state of a dynamic system from a series of noisy measurements, and hence the filter can be viewed as a sequential minimum mean square error (MSE) estimator with additive noise.
- It works like an adaptive low-pass infinite impulse response (IIR) digital filter, with a cut-off frequency depending on the ratio between the process and measurement (or observation) noise, as well as on the estimate covariance.
- The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean of the squared error.

2.2.1 DYNAMIC SYSTEM MODEL OF THE KALMAN CONTROLLER

The Kalman filter is used for estimating or predicting the next stage of a system based on a moving average of measurements driven by white noise, which is completely unpredictable. It needs a model of the relationship between inputs and outputs to provide feedback signals, but it can follow changes in the noise statistics quite well. The Kalman filter is an optimum estimator that estimates the state of a linear system developing dynamically through time.

Kalman filter theory is based on a state-space approach in which a state equation models the dynamics of the signal generation process and an observation equation models the noisy and distorted observation signal. For a signal x(k) and a noisy observation z(k), the equations describing the state process model and the observation model are defined as:

x(k) = M x(k−1) + E u(k) + J(k)    … (2)
z(k) = h x(k) + v(k)    … (3)

where x(k) is the P-dimensional signal vector, or state parameter, at time k; M is a P × P dimensional state transition matrix that relates the states of the process at times k−1 and k; E is the control-input model which is applied to the control vector u(k); and J(k) (process noise) is the P-dimensional uncorrelated input excitation vector of the state equation. J(k) is assumed to be a normal (Gaussian) process, p(J(k)) ~ N(0, Q), Q being the P × P covariance matrix of J(k), i.e. the process noise covariance. z(k) is the M-dimensional noisy observation vector and h is an M × P dimensional matrix which relates the observation to the state vector. v(k) is the M-dimensional noise vector, also known as measurement noise, assumed to have a normal distribution p(v(k)) ~ N(0, R), where R is the M × M covariance matrix of v(k) (measurement noise covariance).

2.2.2 KALMAN FILTER ALGORITHM

Initially the process state is estimated at some time, and feedback is then obtained in the form of (noisy) measurements. The equations for the Kalman filter fall into two groups:

Time Update (Predictor) Equations: responsible for projecting forward (in time) the current state and error covariance estimates to obtain the a priori estimates for the next time step.

Measurement Update (Corrector) Equations: responsible for the feedback, i.e. for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate.

Sawaragi et al. [18] examined some design methods of Kalman filters with uncertainties and observed that under poor observability and numerical instability Kalman filters do not work properly.

2.2.3 FLOW CHART OF THE TIME & MEASUREMENT UPDATE ALGORITHM

Time Update:
- Initialize the error covariance
- Compute the Kalman gain
- Update the error covariance
- Advance to the new sample time

Measurement Update:
- Initialize the state estimate
- Take the initial measurement sample at instant k
- Update the state estimate with the new measurement
- Calculate the state estimate for the next sample time
- Advance to the new sample time

Figure 1. Recursive updating procedure for the discrete Kalman filter
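The time/measurement-update cycle sketched in the flow chart corresponds to the standard discrete Kalman filter recursion. The following Python sketch is ours; it uses the notation of equations (2)–(3), with M, E, h, Q and R as defined there, and the concrete matrices in the example are placeholder assumptions rather than the car model of Section 3.

```python
import numpy as np

def kalman_step(x_est, P, z, M, E, u, h, Q, R):
    """One predict/update cycle of the discrete Kalman filter (equations (2)-(3))."""
    # Time update (predictor): project the state and error covariance ahead
    x_pred = M @ x_est + E @ u
    P_pred = M @ P @ M.T + Q

    # Measurement update (corrector): Kalman gain, state and covariance update
    S = h @ P_pred @ h.T + R                      # innovation covariance
    K = P_pred @ h.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - h @ x_pred)         # incorporate the measurement z(k)
    P_new = (np.eye(P.shape[0]) - K @ h) @ P_pred
    return x_new, P_new

# Placeholder example: 1-D position/velocity model with a 10 ms sampling time
dt = 0.01
M = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
E = np.array([[0.5 * dt**2], [dt]])               # control input (constant acceleration)
h = np.array([[1.0, 0.0]])                        # the position is measured
Q, R = 1e-5 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.02]), M=M, E=E, u=np.array([1.0]), h=h, Q=Q, R=R)
```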
2.3 EXTENDED KALMAN FILTER (EKF)

The extended Kalman filter (EKF) is the nonlinear version of the Kalman filter; it linearizes the non-linear state update and measurement functions at the posterior mean of the previous time step and at the prior mean of the current time step, respectively.

2.3.1 EXTENDED KALMAN FILTER ALGORITHM

Time Update:

(1) Project the state ahead:
x̂⁻(k) = f(x̂(k−1), u(k))    … (4)

(2) Project the error covariance ahead:
P⁻(k) = A(k) P(k−1) A(k)ᵀ + Q    … (5)

where A(k) is the Jacobian of f with respect to the state, evaluated at the current estimate. The time update equations project the state and covariance estimates from the previous time step k−1 to the current time step k.

Measurement Update:

(1) Compute the Kalman gain:
K(k) = P⁻(k) H(k)ᵀ [H(k) P⁻(k) H(k)ᵀ + R]⁻¹    … (6)

(2) Update the estimate with the measurement z(k):
x̂(k) = x̂⁻(k) + K(k) [z(k) − h(x̂⁻(k))]    … (7)

(3) Update the error covariance:
P(k) = [I − K(k) H(k)] P⁻(k)    … (8)

where H(k) is the Jacobian of the measurement function h evaluated at the predicted state. The measurement update equations correct the state and covariance estimates with the measurement z(k). An important feature of the EKF is that it propagates, or "magnifies", only the relevant component of the measurement information.
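For a nonlinear system, the same predict/correct cycle is applied to the linearized model, as in equations (4)–(8). A minimal sketch (ours; the functions f and h, their Jacobians and all numerical values are illustrative placeholders, not the car model used later in this paper):

```python
import numpy as np

def ekf_step(x_est, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle per equations (4)-(8); F_jac/H_jac return the Jacobians of f/h."""
    # Time update: propagate the estimate through the nonlinear model and
    # propagate the covariance through its linearization
    x_pred = f(x_est)
    A = F_jac(x_est)
    P_pred = A @ P @ A.T + Q

    # Measurement update around the predicted state
    H = H_jac(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new

# Placeholder example: simple nonlinear measurement of a 2-state system
f = lambda x: np.array([x[0] + 0.01 * x[1], 0.98 * x[1]])
F_jac = lambda x: np.array([[1.0, 0.01], [0.0, 0.98]])
h = lambda x: np.array([np.sqrt(1.0 + x[0] ** 2)])
H_jac = lambda x: np.array([[x[0] / np.sqrt(1.0 + x[0] ** 2), 0.0]])
x, P = ekf_step(np.array([0.1, 1.0]), np.eye(2), np.array([1.02]),
                f, F_jac, h, H_jac, 1e-4 * np.eye(2), np.array([[1e-2]]))
```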

2.3.2 LIMITATIONS OF THE EKF ALGORITHM

Although the EKF is a computationally efficient recursive update form of the Kalman filter, it still suffers from a number of serious limitations [14]:

(1) Linearized transformations are only reliable if the error propagation is well approximated by a linear function. If this condition does not hold, the linearized approximation can be extremely poor and cause the estimates to diverge altogether.

(2) The EKF does not guarantee unbiased estimates, and it calculates error covariance matrices that do not necessarily represent the true error covariance.

3. PROBLEM DESCRIPTION

We consider a dynamic system, namely a car driven by a constant force, moving with a constant acceleration and following a linear/non-linear motion. To estimate the state, i.e. the position, the continuous-time state-space model is discretised with a 10 ms sampling time.

3.1 MATHEMATICAL MODELING OF THE SYSTEM

In a dynamic system the values of the output signals depend both on the past behavior of the system and on the instantaneous values of its input signals. The output value at a given time t can be computed using the measured values of the output at the previous two time instants and the input value at a previous time instant.

Figure 2. Free body diagram of the car model

Horizontal and vertical motion are governed by equations (9)–(14).
For steady-state analysis, the angle is considered to have a very small value, between −10 and 10 radians, which gives equations (15) and (16).

Figure 2 illustrates the modeled characteristics of the car. The front and rear suspensions are modeled as spring/damper systems. The model includes damper nonlinearities such as velocity-dependent damping. The vehicle body has pitch and bounce degrees of freedom; they are represented in the model by four states: vertical displacement, vertical velocity, pitch angular displacement, and pitch angular velocity. The front suspension influences the bounce (i.e. the vertical degree of freedom).

The dynamic model of the system is highly nonlinear, and hence we first linearized the nonlinear system through the EKF algorithm.

4. SIMULATION RESULTS

The mean and covariance of the posterior distribution were recorded at each time step and compared to the true estimate. For comparison, the data was also processed with the EKF. The figures show the mean error of the different filters; it can be seen that the EKF works quite well and is optimal for linear measurements regardless of the density function of the error. The mean errors did not vary much between the different filters; however, the EKF performed quite well even with large blunder probabilities. Comparative charts are given below to demonstrate the error in estimating the state through the KF and the EKF.

TABLE I. Comparative chart of state (position) values with the Kalman Filter

  Time (sec)   True state (m)   Measured state (m)   Estimated state (m)   Error, true − measured (m)   Error, true − estimated (m)
  1            0.0125           0.0223               0.0011                −0.0098                      0.0114
  30           0.0221           0.0213               0.024                 0.0008                       −0.0019
  60           0.0746           0.0712               0.0743                0.0034                       0.0003
  90           0.1567           0.1751               0.1712                −0.0184                      −0.0145
  100          0.1988           0.1824               0.2113                0.0164                       −0.0125

Figure 3. Comparison of true, measured and estimated position with the KF (car position vs. time, 0–100 s)

Figure 4. Comparison of the error between the true, measured and estimated position values with the KF (error vs. time, 0–100 s)
TABLE II. Comparative chart of state (position) values with the Extended Kalman Filter

  Time (sec)   True state (m)   Measured state (m)   Estimated state (m)   Error, true − measured (m)   Error, true − estimated (m)
  1            0.0012           0.0181               0.0010                −0.0169                      0.0002
  30           0.0186           0.0251               0.022                 −0.0065                      −0.0034
  60           0.746            0.744                0.731                 0.0020                       0.015
  90           0.147            0.189                0.148                 −0.042                       −0.0010
  100          0.1791           0.181                0.183                 −0.0019                      −0.0039

Figure 5. Comparison of true, measured and estimated position with the EKF (car position vs. time, 0–100 s)

Figure 6. Comparison of the error between the true, measured and estimated position values with the EKF (error vs. time, 0–100 s)

5. CONCLUSION

In this paper, a detailed overview of the Kalman filter and the Extended Kalman Filter, used to cope with inadequate statistical models and nonlinearities in the measurements, is presented. Simulation results show that the performance of the Extended Kalman filter is higher than that of the Kalman filter, and we conclude that the Kalman filter-based scheme is capable of effectively estimating the position errors of a moving target, making future state and measurement predictions more accurate and therefore improving the accuracy of target positioning and tracking. Further efforts on the Kalman filter will lead to improved estimation of signal arrival time and more accurate target positioning and tracking.

This work can be used as a theoretical base for further studies in a number of different directions, such as tracking systems and achieving high computational speed for multi-dimensional state estimation.

REFERENCES

[1] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, Transactions of the ASME, Series D, vol. 82, no. 1, pp. 35-45, 1960.
[2] R. E. Kalman and R. S. Bucy, "New results in linear filtering and prediction problems," Journal of Basic Engineering, Transactions of the ASME, Series D, vol. 83, no. 3, pp. 95-108, 1961.
[3] R. K. Mudi and N. R. Pal, "A robust self-tuning scheme for PI and PD type fuzzy controllers," IEEE Transactions on Fuzzy Systems, vol. 7, no. 1, pp. 2-16, February 1999.
[4] B. Zdzislaw, Modern Control Theory, Springer-Verlag, Berlin.
[5] R. L. Eubank, A Kalman Filter Primer, Taylor & Francis Group, 2006.
[6] D. L. Alspach and H. W. Sorenson, "Nonlinear Bayesian estimation using Gaussian sum approximations," IEEE Trans. Automatic Control, vol. 17, no. 4, pp. 439-448, Aug. 1972.
[7] F. Yang, Z. Wang and Y. S. Hung, "Robust Kalman filtering for discrete time-varying uncertain systems with multiplicative noises," IEEE Transactions on Automatic Control, vol. 47, no. 7, pp. 1179-1183, 2002.
[8] Z. Dong and Z. You, "Finite-horizon robust Kalman filtering for discrete time-varying systems with uncertain-covariance white noises," IEEE Signal Processing Letters, vol. 13, no. 8, pp. 493-496, 2006.
[9] X. Zhu, Y. C. Soh and L. Xie, "Design and analysis of discrete-time robust Kalman filters," Automatica, vol. 38, pp. 1069-1077, 2002.
[10] F. O. Hounkpevi and E. E. Yaz, "Robust minimum variance linear state estimators for multiple sensors with different failure rates," Automatica, vol. 43, pp. 1274-1280, 2007.
[11] Wei Wu and Wei Min, "The mobile robot GPS position based on neural network adaptive Kalman filter," International Conference on Computational Intelligence and Natural Computing, IEEE, pp. 26-29, 2009.
[12] Y. Bar-Shalom and X. R. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, 1993.
[13] Y. Bar-Shalom, X.-R. Li and T. Kirubarajan, Estimation With Applications to Tracking and Navigation, New York: Wiley, 2001.
[14] Y. C. Ho and R. C. K. Lee, "A Bayesian approach to problems in stochastic estimation and control," IEEE Trans. Automatic Control, vol. AC-9, pp. 333-339, Oct. 1964.
[15] P. Maybeck, Stochastic Models, Estimation and Control, vol. I, New York: Academic Press, 1979.
[16] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Inc., 1996.
[17] H. J. Kushner, "Approximations to optimal nonlinear filters," IEEE Trans. Automatic Control, AC-12(5), pp. 546-556, Oct. 1967.
[18] Y. Sawaragi and T. Katayama, "Performance loss and design method of Kalman filters for discrete-time linear systems with uncertainties," International Journal of Control, 12:1, pp. 163-172, 1970.
Wideband Direction of Arrival Estimation by using Minimum Variance and Robust Maximum Likelihood Steered Beamformers: A Review

SANDEEP SANTOSH (1), O. P. SAHU (2), MONIKA AGGARWAL (3)

(1) Asst. Prof., Department of Electronics and Communication Engineering, National Institute of Technology, Kurukshetra
(2) Associate Prof., Department of Electronics and Communication Engineering, National Institute of Technology, Kurukshetra
(3) Associate Prof., Centre For Applied Research in Electronics (CARE), Indian Institute of Technology, New Delhi
INDIA
profsandeepkkr@gmail.com    http://www.nitkkr.ac.in

Abstract

Beamforming of sensor arrays is a fundamental operation in sonar, radar and telecommunications. The Minimum Variance Steered Beamformer and the Robust Maximum Likelihood Steered Beamformer are two important methods for wideband Direction of Arrival estimation. This paper presents a comparative study between the Minimum Variance Steered Beamformer and the Robust Maximum Likelihood Steered Beamformer. MV beamformers can place nulls in the array response in the direction of unwanted sources, even those located within a beamwidth of the source of interest, provided that the interfering signals are uncorrelated with the desired one. A steered wideband adaptive beamformer optimized by a novel concentrated maximum likelihood (ML) criterion in the frequency domain can be considered, and this ML beamforming can reduce the typical cancellation problems encountered by adaptive beamforming and preserve the intelligibility of a wideband and coloured source signal under interference, reverberation and propagation mismatches. The Minimum Variance Steered Beamformer (MV-STBF) and the use of the Steered Covariance Matrix are illustrated, and the robustness of the Maximum Likelihood Steered Beamformer (ML-STBF) obtained by using a Modified Newton Algorithm is explained.

Key-Words: Wideband Direction of Arrival (DOA) Estimation, Minimum Variance, Robust Maximum Likelihood, Steered Beamforming, Covariance Matrix.

1. Introduction

Beamforming of sensor arrays is a fundamental operation in sonar, radar and telecommunications. The development of minimum variance (MV) adaptive beamforming has taken place over the last three decades. The harmful effect of multipath on MV beamforming is the cancellation of the desired signal, even if the coherent
component is very weak at the output of the generalized sidelobe canceller. The classical cure for this phenomenon lies in the definition of a set of linear or quadratic constraints on the adaptive part of the beamformer, based on proper modeling of the array perturbations. Wideband arrays are less sensitive to signal cancellation because reflections exhibit a delay of several sampling periods with respect to the direct (useful) path.

Prefiltering of the array outputs and proper constraints on the weight vector help in counteracting cancellation. It is not clear whether the MV criterion is optimal in ensuring the best possible reconstruction of a wideband signal of interest, e.g. intelligibility in the case of speech. Therefore there is a need for a frequency-domain wideband beamformer, built on the concepts of focusing matrices and steered beamforming, that aligns the component of the direct-path signal along the same steering vector as in a narrowband array.

This beamformer uses a single set of weights for the entire bandwidth, but the adaptation is made on the basis of a concentrated maximum likelihood (ML) cost function derived using a stochastic Gaussian assumption on the frequency samples of the beamformer outputs.

It is found that the ML solution does not depend on any prefiltering applied to the array outputs, provided that none of the subband components are nulled out. Nonconvexity of the derived ML cost function makes it unsuitable for classical Newton optimization. Hence, a second-order algorithm is developed, starting from a procedure originally introduced for fast neural network training, which recasts the ML problem as an iterative least squares minimization. The robust ML wideband beamformer also incorporates a norm constraint to reduce the risk of signal cancellation under propagation uncertainties. [1], [2], [3], [4]

2. Narrowband and Wideband MV Beamforming

An array with M sensors receives the signal of interest s(t), radiated by a point source whose position is characterized by a generic coordinate vector p. The propagating medium and the sensors are assumed linear, even if they may introduce temporal dispersion on s(t). The direct, or shortest, path of the wave propagation is characterized by the (M × 1) vector of impulse responses hd(t,p), starting from t = td. Multiple delayed and filtered copies of s(t) generated by multipath, reverberation and scattering are also received by the array and can be globally modeled by the (M × 1) vector of impulse responses hr(t,p), starting from t = tr > td. Interference and noise are statistically independent of s(t) and are conveniently collected in the (M × 1) vector v(t). Therefore, the (M × 1) array output vector, or snapshot, x(t) obeys the continuous-time equation

x(t) = ∫td∞ hd(τ,p) s(t − τ) dτ + ∫tr∞ hr(τ,p) s(t − τ) dτ + v(t).    (1)

This model represents a large number of real-world environments encountered in
telecommunications, remote sensing, underwater acoustics, seismics and closed-room applications. The objective of beamforming is to recover s(t) in the presence of multipath, noise and interference terms, given knowledge of the direct-path response only. In fact hd(t,p) is accurately described by analytical and numerical methods or measured under controlled conditions, but hr(t,p) depends on a great number of unpredictable and time-varying factors. An alternative view is to consider a reference model containing only the terms related to the direct path and to develop robust algorithms able to bound, in a statistical sense, the effects of sufficiently small perturbations on the final estimate.

2.1 Discrete Time Signal Model

Array outputs x(t) are properly converted to baseband, sampled and digitized. Under general assumptions equation (1) is written in discrete time as the vector FIR convolution

x(n) = Σk=Nd1..Nd2 hd(k,p) s(n−k) + Σk=Nr1..Nr2 hr(k,p) s(n−k) + v(n)    (2)

The relationships between the discrete-time transfer functions of (2) and their analog counterparts in (1) are quite involved, depending on the receiver architecture. In many cases the delays of reflections with respect to the direct path exceed the Nyquist sampling period of the baseband signal (i.e. Nr1 > Nd2), so that hd(n,p) and hr(n,p) do not overlap.

2.1.1 Narrowband MV Beamforming

Narrowband arrays obey (2) with a known hd(n,p) = hd(p)u0(n−Nd), an unknown hr(n,p) = hr(p)u0(n−Nd), and Nd1 = Nd2 = Nr1 = Nr2. Hence we have

x(n) = [hd(p) + hr(p)] s(n−Nd) + v(n).    (3)

The sources are assumed white within the sensor bandwidth. Reflection delays must be less than the sampling period so that the spectrum at the sensor outputs remains white. The sequence s(n) is conveniently scaled so that |hd(p)| = 1. An (M × 1) complex-valued weight vector w is applied to the baseband snapshot x(n) to recover a spatially filtered signal y(n,w), where

y(n,w) = wH x(n)    (4)

according to some optimality criterion. For example, in the classical MV-DR beamformer, ŵ solves the LS minimization problem

ŵ = argminw [Σn=1..N |y(n,w)|²]    (5)

subject to hd(p)H w = 1, using N independent snapshots. The output is finally computed as

y(n, ŵ1) = y0(n) + ŵ1H y1(n)    (6)
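A compact way to see equations (4)–(5) at work is the sample-based MVDR solution. The sketch below is ours and purely illustrative: the steering vector, noise level and snapshots are synthetic, and the direct, non-GSC form of the constrained minimizer is used.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200                                           # sensors, snapshots
d = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))     # example direct-path steering vector h_d(p)
d /= np.linalg.norm(d)                                  # scaled so that |h_d(p)| = 1

s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(d, s) + v                                  # snapshots x(n), eq. (3) without multipath

R = X @ X.conj().T / N                                  # sample covariance of the snapshots
w = np.linalg.solve(R, d)
w /= d.conj() @ w                                       # enforce h_d(p)^H w = 1 (distortionless)

y = w.conj() @ X                                        # y(n, w) = w^H x(n), eq. (4)
```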
2.1.2 Wideband MV Beamforming

Extensions of MV beamforming to wideband arrays have been proposed several times in the past, following either time-domain or frequency-domain approaches. The main drawback of wideband MV beamforming is the high number of free parameters to adapt, which produces slower convergence, high sensitivity to mismodeling and strong misadjustment for short observation times. The introduction of a large number of linear or quadratic constraints may aggravate these issues at the expense of a reduced capability of suppressing interference. The very complex and largely unpredictable structure of reverberant fields can make it impossible to specify an effective set of constraints.

2.1.3 Steered Adaptive Beamformer

An interesting tradeoff between complexity and efficacy is obtained by the Wideband Steered adaptive beamformer (STBF), which was introduced on the basis of the coherent focusing technique, Coherent Signal Subspace Processing (CSSM), developed by Wang and Kaveh. In the frequency-domain formulation of the STBF, the sequence x(n) (n = 1, 2, …, N) is partitioned into L nonoverlapping blocks of length J that are separately processed by a windowed Fast Fourier Transform (FFT). Finally the frequency-domain output is computed as

y(ωj, l, w1) = y0(ωj, l) + w1H y1(ωj, l)    (7)

where y0(ωj, l) and y1(ωj, l) can be computed. [1], [2], [3], [10]

3. Limitations of MV-STBF

It is known that most wideband signals of interest are strongly correlated in time. The effects of the temporal correlation of s(n) on the MV-STBF are twofold:

a) The impulse responses of the direct path and of the reflections are often well separated in time in wideband environments. However, the signal replicas may still cancel the desired signal s(n−N0) if the multipath delay does not exceed the correlation time of s(n).

b) It is not obvious anymore that the MV criterion lends itself to a good solution in wideband scenarios. When the beamformer is steered off-source, it mostly captures background noise, which is often considered temporally white; in this case the MV criterion realizes a particular ML estimator. When the beamformer points towards a correlated source, the quality of the output is influenced by the spectra of the source itself and of the interference.

These two problems are strictly related. An optimal cost function should impose a tradeoff on performance at different frequencies when a single weight vector w1 is used for the entire bandwidth. A wideband beamformer aiming to preserve signal intelligibility should minimize the noise-plus-interference power in those subbands where the useful source spectrum has valleys. Interference nulling becomes less important near the spectral peaks of the useful signal, whose strength may be adequate to mask the unwanted components. [1], [2], [3], [10]

4. Maximum Likelihood STBF
In order to overcome the drawbacks of MV beamforming, a general stochastic model can be formulated and exploited to derive the proper ML estimator of w, subject to the given constraint CHw = d. Using the reduced-space formulation, this constrained ML problem can still be converted into the unconstrained maximization of the likelihood function of the beamformer output containing the useful signal plus noise and interference residuals. Although the crucial assumption for the validity of this model is that the multipath terms are uncorrelated with the direct path, it can be shown that the resulting ML estimator is also effective in decorrelating the multipath terms having a delay higher than one sampling period, independently of the source spectrum.

In particular, by the central limit theorem, y(ωj,l,w1) can be considered an independent, zero-mean, circular Gaussian random variable regardless of the original distribution of the signal and interference, but characterized by a different variance ζj² in each subband. In reverberant fields and in the presence of coloured sources, such as speech and sonar targets, these conditions can be further approached by proper prewhitening of the highly correlated components present in both y0(n) and y1(n). The scaled global negative log-likelihood of the STBF output can be written as

L(w1) = Σj=j1..j2 [ log(ζj²) + (1/(L ζj²)) Σl=1..L |y(ωj,l,w1)|² ]    (8)

After neglecting irrelevant additive constants, the optimal weights w1 are finally found as

Lc(w1) = Σj=j1..j2 log[ (1/L) Σl=1..L |y(ωj,l,w1)|² ]    (9)

ŵ1 = argminw1 Lc(w1)    (10)

The wideband ML-STBF using a single w1 must instead optimize equation (9) by coherently combining information from all frequencies, which results in a highly nonlinear problem. [1], [10]
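The concentrated cost of equation (9) is simple to evaluate once the frequency-domain beamformer outputs are available. The sketch below is ours; it assumes y0 and y1 have been precomputed as arrays of shape (J, L) and (J, L, Mb), with names and shapes chosen by us for illustration.

```python
import numpy as np

def concentrated_ml_cost(w1, y0, y1):
    """Lc(w1) = sum_j log( (1/L) * sum_l |y0(w_j,l) + w1^H y1(w_j,l)|^2 ), eq. (9)."""
    # y0: (J, L) fixed-beam outputs; y1: (J, L, Mb) blocking-branch outputs
    y = y0 + np.einsum('m,jlm->jl', w1.conj(), y1)       # y(w_j, l, w1) as in eq. (7)
    sigma2 = np.mean(np.abs(y) ** 2, axis=1)             # per-subband error variance estimate
    return np.sum(np.log(sigma2))

# synthetic example, purely for illustration
J, L, Mb = 16, 32, 7
rng = np.random.default_rng(1)
y0 = rng.standard_normal((J, L)) + 1j * rng.standard_normal((J, L))
y1 = rng.standard_normal((J, L, Mb)) + 1j * rng.standard_normal((J, L, Mb))
print(concentrated_ml_cost(np.zeros(Mb, dtype=complex), y0, y1))
```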
5. Properties of the Cost Function

The function Lc(w1) is clearly nonconvex, due to the presence of logarithms, and it is not even guaranteed to be lower bounded or to have a unique minimizer. Nonconvexity hampers the use of classical Newton optimization algorithms when initialized far from the global minimum. Moreover, if a ζ̂j²(w1) becomes zero during adaptation, indicating perfect signal cancellation in the j-th subband, then Lc(w1) → −∞ and the minimization cannot proceed further. If multiple bins have ζ̂j²(w1) = 0, then many local minima may occur and a descent algorithm may get stuck before reaching a global optimum. The two theorems associated with the cost function show the effect of the lack of information occurring when L ≤ Mb. Despite these limitations, two other properties of equation (9) appear extremely interesting from both theoretical and practical viewpoints, namely scaling invariance in the frequency domain and the link with cepstral analysis. A decisive advantage of the ML-STBF over cepstral processing lies in the intrinsic linearity of beamforming, which is highly desirable when dealing with music, speech or digital transmission of data. [1]

6. Robustness of the ML-STBF

The cost function given by equation (9) grows logarithmically, i.e. very slowly, with respect to each subband error variance ζ̂j²(w1). This behaviour is typical of statistically robust estimators that are able to cope with outliers in the data or with significant deviations from the assumed probabilistic model. As a result, the performance of traditional frequency-domain MV beamforming might be quite suboptimal in the presence of coloured sources and interferences. The following quadratic constraint is deduced:

|w1|² ≤ (1 − δ/εmax)² ≈ γ²    (11)

Equation (11) theoretically justifies the common practice of limiting the norm of w1 in MV beamforming and furnishes a guideline for properly choosing the parameter γ². [1], [3]

7. Iterative LS Minimization of Lc(w1)

Signal nonstationarity and moving sources require short observation times and fast numerical convergence to the optimal solution; therefore the function Lc(w1) should be minimized by a second-order, Newton-like algorithm in order to be competitive with the MV approach in real-time applications. Therefore we have

w1[q] = argminw1 [ Σj=j1..j2 (Σl=1..L |y(ωj,l,w1)|²) / (L ζ̂j²(w1[q−1])) ]    (12)

for q = 1, 2, …, subject to |w1[q]|² ≤ γ², until convergence is achieved. Equation (12) is a standard quadratic ridge regression problem. [1]
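Equation (12) turns the minimization of Lc(w1) into a sequence of weighted, ridge-regularized least-squares problems: each iteration reweights every subband by the inverse of the error variance obtained with the previous weights. The following sketch is ours and only illustrative: it uses the same assumed y0/y1 arrays as above, an assumed ridge parameter, and a simple norm clipping in place of the exact constrained solution of (11)–(12).

```python
import numpy as np

def iterative_ls(y0, y1, gamma, n_iter=10, ridge=1e-6):
    """Iteratively reweighted LS for eq. (12); y0: (J, L), y1: (J, L, Mb)."""
    J, L, Mb = y1.shape
    w = np.zeros(Mb, dtype=complex)                          # w1[0]
    for _ in range(n_iter):
        y = y0 + np.einsum('m,jlm->jl', w.conj(), y1)        # beamformer output, eq. (7)
        var = np.mean(np.abs(y) ** 2, axis=1) + 1e-12        # sigma_j^2 at w1[q-1]
        q = 1.0 / (L * var)                                  # per-subband weights of eq. (12)
        A = (np.sqrt(q)[:, None, None] * y1).reshape(-1, Mb)
        b = (np.sqrt(q)[:, None] * y0).reshape(-1)
        # weighted ridge LS in c = conj(w): minimize ||A c + b||^2 + ridge ||c||^2
        c = -np.linalg.solve(A.conj().T @ A + ridge * np.eye(Mb), A.conj().T @ b)
        w = c.conj()
        norm = np.linalg.norm(w)
        if norm > gamma:                                     # crude enforcement of |w1|^2 <= gamma^2
            w *= gamma / norm
    return w
```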
8. Modified Newton Algorithm

The ML-STBF can be interpreted as a two-layer perceptron with constrained weights; therefore the algorithms developed for fast neural network training should be highly effective. The descent in the neuron space is adopted in this work. In this case the minimization of Lc(w1) is still converted into an iterative LS procedure, but a single system matrix is used for all steps and only matrix sums and products are performed at each iteration. The modified Newton algorithm consists of three stages: 1) Data Preconditioning, 2) System Setup and 3) Main Loop [1]. A summary of the algorithm is given below.

8.1 Algorithm Summary

Step 1) Collect N = LJ snapshots x(n) for n = 1, …, N.

Step 2) Compute the frequency-domain snapshots x(ωj, l) for l = 1, 2, …, L and j = 1, 2, …, J using a windowed FFT of length J applied to L sequential blocks of x(n).

Step 3) For j = j1, …, j2, synthesize the focusing matrices Tj and compute the focused snapshots xf(ωj, l) = Tj x(ωj, l).

Step 4) For j = j1, …, j2, build y0(ωj, l) = w0 xf(ωj, l) and y1(ωj, l) = C┴H xf(ωj, l).

Step 5) For each j, build the regularized matrices Rj,µ.

Step 6) Compute the system matrix F and the vector g.

Step 7) Initialize w1[0] with all zeros and small complex random values.

Step 8) For q = 1, 2, …, iterate until convergence to ŵ1, solving the LS system.

Step 9) Compute the optimal weight vector ŵ1 and/or compute the output sequence y(ωj, l, ŵ1). [1]

9. Steered Minimum Variance Beamforming (MV-STBF)

The Steered Minimum Variance (STMV) beamformer is defined by finding the weight vector w which minimizes the beam power given by equation (13), subject to the constraint that the processor gain is unity for a broad-band plane wave in direction θ. The problem can alternatively be viewed as one of estimating the dc component of the STCM steered in direction θ by means of the minimum variance (MV) approach. In either case, this technique has the effect of choosing w to minimize the power contribution from the sources and noise not propagating from direction θ. The solution has been derived by several authors, and the resulting STCM-based spatial spectral estimate, denoted the STMV method, is given by

Zstmv(θ) = [1H R(θ)−1 1]−1    (14)

where 1 is an M × 1 vector of ones. A finite-time estimate Ẑstmv(θ) of Zstmv(θ) is obtained by substituting the estimate R̂(θ) in place of R(θ) in equation (14). The comparison of the STMV method with the CSDM-based minimum variance distortionless response (MVDR) method is made possible by expressing R(θ) as a sum of cross-spectral density matrices. Substituting the value of R(θ) into equation (14) gives

Zstmv(θ) = [1H (Σk=l..h Tk(θ) K(ωk) Tk(θ)H)−1 1]−1    (15)

Observe that in the case h = l, equation (15) can be rewritten as

Zstmv(θ) = [1H Tl(θ) K(ωl)−1 Tl(θ)H 1]−1    (16)

where the identity Tk(θ)−1 = Tk(θ)H is used. Note that Tl(θ)H 1 = Dl(θ), the direction vector of an arrival at frequency ωl and direction θ. Hence, equation (16) becomes

Zstmv(θ) = [Dl(θ)H K(ωl)−1 Dl(θ)]−1    (17)

Equation (17) is precisely the MVDR, or maximum likelihood, spatial spectral estimate. Thus, in the narrowband case the STMV reduces to the conventional MVDR method. For broad-band sources, the MVDR beampower is obtained by summing narrow-band beampowers over the band of interest, i.e.

Zmvdr(θ) = Σk=l..h [Dk(θ)H K(ωk)−1 Dk(θ)]−1    (18)

With a finite-time observation, an estimate Ẑmvdr(θ) can be computed by substituting K̂(ωk) for its true value K(ωk). The comparison of equations (15) and (18) reveals the essential difference between the STMV and MVDR methods for broad-band signals: specifically, in
equation (15) the cross-spectral density matrices are coherently averaged prior to matrix inversion, while in equation (18) the matrix inversion is applied to the individual narrow-band CSDMs prior to averaging. While asymptotically the STMV method is strictly suboptimal, when only a limited number of data snapshots are available the coherent averaging in equation (15) provides a more statistically stable matrix to invert, thus facilitating more accurate spatial spectral estimation. We can estimate the steered covariance matrix by calculating R(θ), K(ωk), K̂(ωk) and R̂(θ). The steered covariance matrix is estimated as

R(θ) = Σk=l..h Tk(θ) K(ωk) Tk(θ)H    (19)

where K(ωk) = E{Y(k)Y(k)H} is the conventional unsteered CSDM at frequency ωk. The above equation expresses the STCM in the same form as the coherently focused covariance matrix proposed by Wang and Kaveh for the case where all the sources in a field are in a single group, unresolved by the conventional Direct-Spread (DS) beamformer. In the coherent subspace method, equation (19) is appropriate only in the single-group case, since just one focused covariance matrix is formed in which each source has a rank-one characterization. In the STCM methods, R(θ) is calculated for each steering direction θ of interest. The need to compute R(θ) for each θ makes STCM-based methods more computationally intensive than coherent subspace methods; however, it avoids the problem of source location bias resulting from errors made in forming the focusing matrices. The relationship between K(ωk), k = l, …, h, and R(θ) given in equation (19) suggests a natural way of estimating R(θ) using finite-time CSDM estimates K̂(ωk). A common method of forming K̂(ωk) from discrete-time sensor outputs is to divide the T-second observation into N nonoverlapping segments of ΔT seconds each and then apply the Discrete Fourier Transform (DFT) to obtain uncorrelated frequency-domain vectors Yn(k) for each segment n = 1, …, N. The cross-spectral density matrix at frequency ωk is then estimated by taking

K̂(ωk) = (1/N) Σn=1..N Yn(k) Yn(k)H    (20)

Substituting K̂(ωk) in place of its true value K(ωk) in equation (19) gives an estimate of the steered covariance matrix R̂(θ) such that

R̂(θ) = Σk=l..h Tk(θ) K̂(ωk) Tk(θ)H    (21)

Note that the efficient computation of R̂(θ) from equation (21) can be achieved by using the Fast Fourier Transform (FFT) to obtain the Yn(k) from the discrete-time sensor outputs. [10]

The various steps used to perform the STMV method are as follows:

1) Form the estimated cross-spectral density matrices K̂(ωk) over the frequency band of interest, as given in (20).
2) Compute the estimated steered covariance matrices R̂(θ) for each steering direction θ, as given in (21).
3) Compute R̂(θ)−1 and form Ẑstmv(θ) = [1H R̂(θ)−1 1]−1 for each steering direction θ to obtain a broad-band spatial power spectral estimate as shown by equation (14). Note that the estimation of the source location is achieved by determining the peak positions of the spatial power spectral estimate Ẑstmv(θ). [10]
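The STMV procedure above maps directly onto a few matrix operations per steering direction. The sketch below is ours and rests on stated assumptions: the narrowband CSDM estimates K̂(ωk) are taken as given, a uniform line array is assumed, and the focusing matrix for a plane wave is taken as the diagonal phase matrix Tk(θ) = diag(Dk(θ))* so that Tk(θ)H·1 = Dk(θ); function and parameter names are ours.

```python
import numpy as np

def stmv_spectrum(K_hat, freqs, element_pos, thetas, c=1500.0):
    """Broad-band STMV spatial spectrum via equations (19)-(21) and (14).

    K_hat: (nfreq, M, M) estimated CSDMs K^(w_k); freqs in Hz;
    element_pos: (M,) sensor positions in metres along a line array.
    """
    nfreq, M, _ = K_hat.shape
    ones = np.ones(M, dtype=complex)
    z = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        delays = element_pos * np.sin(th) / c
        R = np.zeros((M, M), dtype=complex)
        for k, f in enumerate(freqs):
            D = np.exp(-2j * np.pi * f * delays)           # direction vector D_k(theta)
            T = np.diag(D.conj())                          # focusing matrix with T^H 1 = D_k(theta)
            R += T @ K_hat[k] @ T.conj().T                 # steered covariance, eq. (21)
        z[i] = 1.0 / np.real(ones @ np.linalg.solve(R, ones))   # Z_stmv(theta), eq. (14)
    return z
```

Peaks of the returned spectrum over the scanned directions then give the broad-band DOA estimates, as described in step 3 above.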
10. Conclusion

The ML-STBF and MV-STBF were tested for 1) far-field point sources, 2) Mediterranean vertical array data and 3) a reverberant room. All of these demonstrated the higher performance and robustness of the novel ML-STBF over the MV-STBF [1], [4], [5], [6].

The ML-STBF is based on a concentrated ML cost function in the frequency domain and is trained by a fast modified Newton algorithm. The ML cost function performs a direction-dependent spectral whitening of the beamformer output. The computational costs of the ML-STBF and MV-STBF are comparable in most cases and are dominated by the common preprocessing of the wideband array data [7], [8], [9], [10].

References

[1] E. D. Di Claudio and R. Parisi, "Robust ML wideband beamforming in reverberant fields," IEEE Transactions on Signal Processing, vol. 51, no. 2, pp. 338-349, Feb. 2003.
[2] E. D. Di Claudio and R. Parisi, "WAVES: Weighted average of signal subspaces for robust wideband direction finding," IEEE Transactions on Signal Processing, vol. 49, pp. 2179-2191, Oct. 2001.
[3] D. H. Johnson and D. E. Dudgeon, Array Signal Processing, Englewood Cliffs, NJ: Prentice Hall, 1993.
[4] J. L. Krolik, "The performance of matched-field beamformers with Mediterranean vertical array data," IEEE Transactions on Signal Processing, vol. 44, pp. 2605-2611, Oct. 1996.
[5] G. Xu, H. P. Lin, S. S. Jeng and W. J. Vogel, "Experimental studies of spatial signature variation at 900 MHz for smart antenna systems," IEEE Transactions on Antennas and Propagation, vol. 46, pp. 953-962, July 1998.
[6] M. Agrawal and S. Prasad, "Robust adaptive beamforming for wideband moving and coherent jammers via uniform linear arrays," IEEE Transactions on Signal Processing, vol. 47, pp. 1267-1275, Aug. 1999.
[7] Q. G. Liu, B. Champagne and P. Kabal, "A microphone array processing technique for speech enhancement in a reverberant space," Speech Communication, vol. 18, pp. 317-334, 1996.
[8] B. Champagne, S. Bedard and A. Stephenne, "Performance of time-delay estimation in the presence of room reverberation," IEEE Transactions on Speech and Audio Processing, vol. 4, pp. 148-152, Mar. 1996.
[9] D. N. Swingler, "A low-complexity MVDR beamformer for use with short observation times," IEEE Transactions on Signal Processing, vol. 47, pp. 1154-1160, Apr. 1999.
[10] J. Krolik and D. N. Swingler, "Multiple wideband source location using steered covariance matrices," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, pp. 1481-1494, Oct. 1989.
Electricity Generation by People Walk through Piezoelectric Shoe: An Analysis

1. Dr. Monika Jain, 2. Ms. Usha Tiwari, 3. Mohit Gupta, 4. Magandeep Singh Bedi

1 Member IEEE, IETE, Professor, Dept. of Electronics & Instrumentation Engg., Galgotias College of Engineering & Technology, Greater Noida, UP, INDIA
2 Assistant Professor, Dept. of Electronics & Instrumentation, Galgotias College of Engineering & Technology, Greater Noida, UP, INDIA
3, 4 B.Tech 4th year students, Dept. of Electronics & Instrumentation, Galgotias College of Engineering & Technology, Greater Noida, UP, INDIA
1 monika_jain24@rediffmail.com  2 usha.pant@rediffmail.com  3 mohit_luvkush@yahoo.co.in  4 magandeep.bedi@gmail.com

Abstract— In today's electrical power crisis, there has been an increasing demand for low-power, portable energy sources due to the development and mass consumption of portable electronic devices. Furthermore, portable energy sources must be associated with a competitive market price, environmental concerns and other imposed regulations. These demands support a great deal of research in the area of portable energy generation methods. In this scope, piezoelectric materials have always been an attractive choice for energy generation and storage. In this paper, different techniques to generate electricity by the use of piezoelectric crystals are explored and analysed. An in-depth study and analysis of the use of piezoelectric polymers to harvest and optimise the energy from people walking, and of the fabrication of a smart shoe capable of generating and accumulating this energy, is presented.

Keywords— Energy harvesting, PZT, uninterrupted power supplies.

I. INTRODUCTION

Piezoelectric generators are based on the piezoelectric effect, i.e. the ability of certain materials to create an electrical potential in response to mechanical changes. In real-time applications, when compressed, expanded or otherwise changed in shape, a piezoelectric material will output a certain voltage. This effect is also possible in reverse, in the sense that putting a charge through the material will result in it changing shape or undergoing some mechanical stress. These materials are useful in a variety of ways. Certain piezoelectric materials can handle high voltage extremely well and are useful in transformers and other electrical components. Piezoelectric crystals are a boon to the sensor technology field, as they can be used to make motors, reduce vibrations in sensitive environments, act as energy collectors and serve in many more applications. In today's power-crisis world, one of the most interesting areas is energy collection and generation. In this paper, a cheap and smart yet reliable mechanism to generate energy capable of charging a phone or MP3 player has been explored and analysed. An interesting methodology of power


generation through the walking steps of human beings is reviewed and presented here. The sole of a shoe could be constructed of piezoelectric materials so that every step a person takes generates electricity. The electricity generated through the shoe sole could then be stored in a battery or used immediately in personal electronic devices.

II. LITERATURE REVIEW

The most common methodologies for shoe power generators include dielectric elastomers [1] and piezoelectric ceramics [2,3]. The elastomer demonstrated significant power output, but it required a large bias (2 kV), and the heavy construction is likely to negatively affect the user experience. The power harvesting shoe reported in [2] and [3] uses piezoelectric ceramic bi-morphs for power harvesting. As piezoelectric materials were employed, no bias voltage was needed. However, a complex PZT/metal bi-morph was required and the power output after dc/dc conversion and regulation was low (<1 mW) [2]. The schematic of the microstructured piezoelectric polymer film used for the power generation is shown in Figure 1.

[Figure 1: Microstructured piezoelectric polymer film]

To increase the transducer power output, the film is rolled into a 1-cm thick stack of approximately 120 layers. The generated charge per step is Q = e33·F·h/(Y·t), where e33 is the piezoelectric coefficient for compression, F = mg is the force exerted by the foot, determined by the mass of the user m and the gravity constant g = 9.81 m/s², Y is the Young's modulus of the film, h is the total transducer height, t is the film thickness, and N is the number of film layers in the transducer [6]. The piezoelectric polymer power generator and conversion circuit provide over 2 mW of regulated power at 4.5 V. The transducer is low cost, ecological, and soft enough to provide suitable shock absorption inside the heel. The design of electromagnetic generators that can be integrated within shoe soles has also been described. In this way, parasitic energy expended by a person while walking can be tapped and used to power portable electronic equipment. These designs are based on discrete permanent magnets and copper wire coils, and it is intended to improve performance by applying micro-fabrication technologies. The proposed approach is good in the sense that the voltage levels are comparable with a piezoelectric generator; however, its complex circuitry is a constraint. Vibration-based generators using three types of electromechanical transducers, electromagnetic [8], electrostatic [9] and piezoelectric [10-11], have also been presented.

In all of these methods, vibrations consist of a traveling wave in or on a solid material, and it is often not possible to find a relative movement within the reach of a small generator. Therefore, one has to couple the vibration movement to the generator by means of the inertia of a seismic mass.

Energy storage density comparison:
Type            | Practical maximum | Aggressive maximum
Piezoelectric   | 35.4              | 335
Electrostatic   | 4                 | 44
Electromagnetic | 24.8              | 400
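To make the charge-per-step relation quoted above concrete, the short sketch below evaluates Q = e33·F·h/(Y·t) for an assumed user mass and assumed film constants; all numeric values are illustrative placeholders chosen for this example, not figures taken from the paper or from [6].

```python
# Hypothetical back-of-the-envelope estimate of the charge generated per step,
# Q = e33 * F * h / (Y * t), using assumed material and geometry values
# (none of these numbers are taken from the paper).
g = 9.81          # gravitational constant, m/s^2
m = 70.0          # assumed mass of the user, kg
F = m * g         # force exerted by the foot, N

e33 = 0.06        # assumed piezoelectric coefficient for compression, C/m^2
Y = 2.5e9         # assumed Young's modulus of the polymer film, Pa
h = 1.0e-2        # total transducer height (the 1-cm stack), m
t = h / 120       # film thickness, assuming ~120 layers, m

Q = e33 * F * h / (Y * t)   # charge per step, C
print(f"estimated charge per step: {Q * 1e6:.2f} microcoulombs")
```

With these assumed values the sketch yields a charge on the order of a few microcoulombs per step, which is only meant to illustrate how the quantities in the formula interact.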


There are two types of piezoelectric signals that can be used for technological applications: the direct piezoelectric effect, which describes the ability of a given material to transform mechanical strain into electrical signals, and the converse effect, which is the ability to convert an applied electrical solicitation into mechanical energy. The direct piezoelectric effect is more suitable for sensor applications, whereas the converse piezoelectric effect is most often required for actuator applications [12]. High-performance films prepared by researchers [14-15] are also explored. In these, the electromechanical properties of the film were improved by a treatment that consists of pressing, stretching, and poling at a high temperature [14].

III. CONCLUSION

In this paper, an analysis of electricity generation for low-power devices is presented. Different methodologies for the generation of electricity are reviewed. We observed that some of the methodologies are not feasible for real-time portable charging because they require too much circuitry, while others are feasible but are still at the analysis stage. We have found that piezoelectric generators implanted in a shoe can be a great achievement if a collaborative effort is made to bring to market a commercial battery charger for low-power devices, powered just by the walking steps of a person.

REFERENCES

[1] Roy Kornbluh, "Power from plastic: how electroactive polymer artificial muscles will improve portable power generation in the 21st century military," Presentation [Online]. Available: http://www.dtic.mil/ndia/2003triservice/korn.ppt

[2] John Kymisis, et al., "Parasitic power harvesting in shoes," in Proc. of the 2nd IEEE Int. Conf. on Wearable Computing, Pittsburgh, PA, pp. 132-139, 19-20 Oct. 1998.

[3] S. Shenck and J. Paradiso, "Energy scavenging with shoe-mounted piezoelectrics," IEEE Micro, vol. 21, pp. 30-42, May-June 2001.

[4] P. Miao, et al., "Micro-machined variable capacitors for power generation," in Proc. Electrostatics, Edinburgh, UK, 23-27 Mar. 2003.

[5] P.D. Mitcheson, T.C. Green, E.M. Yeatman and A.S. Holmes, "Architectures for vibration-driven micropower generators," Journal of Microelectromechanical Systems, vol. 13, no. 3, pp. 429-440, June 2004.

[6] Ville Kaajakari, "Practical MEMS," Small Gear Publishing, 2009.

[7] M. Duffy and D. Carroll, "Electromagnetic generators for power harvesting," 35th IEEE Power Electronics Specialists Conference, Aachen, Germany, 2004, pp. 2075-2081.

[8] M. El-hami, P. Glynne-Jones, M. White, M. Hill, S. Beeby, E. James, D. Brown, and N. Ross, "Design and fabrication of a new vibration-based electromechanical power generator," Sens. Actuators A, Phys., vol. 92, no. 1-3, pp. 335-342, Aug. 2001.


[9] M. Miyazaki, H. Tanaka, G. Ono, T. Nagano, N. Ohkubo, T. Kawahara, and K. Yano, "Electric-energy generation using variable-capacitive resonator for power-free LSI," in Proc. ISLPED, 2003, pp. 193-198.

[10] C. Keawboonchuay and T. G. Engel, "Maximum power generation in a piezoelectric pulse generator," IEEE Trans. Plasma Sci., vol. 31, no. 1, pp. 123-128, Feb. 2003.

[11] J. Yang, Z. Chen, and Y. Hu, "An exact analysis of a rectangular plate piezoelectric generator," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, vol. 54, no. 1, pp. 190-195, Jan. 2007.

[12] T. Sterken, P. Fiorini, K. Baert, R. Puers, and G. Borghs, "An electret-based electrostatic micro-generator," in Proc. Transducers, 2003, pp. 1291-1294.

[14] V. Sencadas, R. Gregorio Filho, and S. Lanceros-Mendez, "Processing and characterization of a novel nonporous poly(vinylidene fluoride) films in the β phase," J. Non-Cryst. Solids, vol. 352, no. 21/22, pp. 2226-2229, Jul. 2006.

[15] S. Lanceros-Mendez, V. Sencadas, and R. Gregorio Filho, "A new electroactive beta PVDF and method for preparing it," Patent PT103 318, Jul. 19, 2006.


Spectral and Cepstral Analysis Using Modified Bartlett-Hanning Window

Rohit Pandey, Rohit Kumar Agrawal, Sneha Shree
Department of Electronics & Communication Engineering, Jaypee University of Engineering & Technology, Guna, MP, India
rohitpandey2108@gmail.com, rohitagrawal07162@gmail.com

Abstract— This paper describes a non-parametric approach for spectral analysis using three different window functions with three power spectrum estimation techniques. The window functions used are the Hamming, Blackman and Modified Bartlett-Hanning windows, applied to power spectral estimation with the Periodogram, Welch and autocorrelation estimation methods. The role of these different window functions has been analyzed in terms of spectral leakage and scalloping loss, and the objective of using three different techniques for power spectral density estimation is to find the bandwidth (BW) of the signal. This work has been further extended to the spectral analysis of voice signals to detect the fundamental frequency of the speaker. Frequency-domain cepstrum analysis of voiced speech segments is also used; this is the conventional method of picking the fundamental peak, i.e. the fundamental frequency or pitch. Voice segments of different speakers with a minimum 30 dB SNR as a threshold have been taken and the cepstrum has been analyzed using the different window functions.

Index Terms— Autocorrelation, Cepstrum, MBH, periodogram, PSD, pitch, spectrum, Welch.

I. INTRODUCTION

Power Spectral Estimation is the method of determining the power spectral density (PSD) of a random process, which provides information about the structure of the spectrum. The purpose of estimating the spectral density is to detect any periodicities in the data by observing peaks at the frequencies corresponding to these periodicities. For spectrum estimation the two approaches are linear and non-linear methods. In the linear approach, the task is to estimate the parameters of the model that describes the stochastic process, while non-linear estimation is based on the assumption that the observed samples are wide-sense stationary with zero mean [3]. So the spectral analysis of a noise-like random signal is usually carried out by non-linear methods like the Periodogram, Welch, etc. To analyze the non-linear methods we first have to see the role of the different window functions. In spectral analysis, the discontinuity resulting from the periodic extension of the signal gives rise to leakage at the end points, and the high side-lobe levels result in false frequency detection, which is reduced by the use of window functions. Bin crossover results in a signal detection loss (scallop loss) due to the reduced signal level at frequency points between the bin centers; the window function modifies the frequency response, which is used to reduce the bin-crossover losses.

Pitch is the fundamental parameter of speech [11]. Pitch detection is one of the important tasks of speech signal processing [5],[7],[9]. Pitch, i.e. the fundamental frequency of voice signals, varies from 40 Hz to 600 Hz. Accurate representation of the voiced/unvoiced character of speech plays an important role in voice activity detection (VAD), coding, synthesis, speech training, speech and speaker recognition systems and vocoders [6],[8]. To accurately detect and estimate the fundamental frequency of a speaker we use cepstrum analysis [5], which is also called the spectrum of the spectrum. It is used to separate the excitation signal (pitch) and the transfer function (voice quality). One of the algorithms that shows good performance for quasi-periodic signals is the cepstrum (CEP) algorithm. However, its ability to separate the source signal (which conveys the pitch information) from the vocal tract response fails wherever the speech frame cannot be contemplated as just the result of a linear convolution between both components, as occurs in transitions or non-stationary speech segments, or


when the recorded speech signal includes additive noise [5],[7].

II. WINDOW FUNCTIONS

These are the window functions used for the spectrum and cepstrum analysis; the MBH window [1] is used alongside the classical windows in the estimation techniques. The Modified Bartlett-Hanning (MBH) window is extended to the form [1]

w(t,α) = α − (4α − 2)|t| + (1 − α)cos(2πt),  |t| ≤ 0.5,  0.5 ≤ α < 1.88   (1)

where α is the index parameter.

Blackman window:
W(n) = 0.42 − 0.50·cos(2πn/(M−1)) + 0.08·cos(4πn/(M−1))   (2)

Hamming window:
W(n) = 0.54 − 0.46·cos(2πn/(M−1))   (3)

where M is the window length and n is the sample index, n = 0, 1, ..., M−1.

III. SPECTRAL ESTIMATION TECHNIQUES

In the periodogram method [3], the sequence x(a) is made finite by using a window function. The windowed sequence x(n) is then autocorrelated and the periodogram is calculated as

I_N(e^jω) = (1/N) |Σ_{n=0}^{N−1} x[n]·e^(−jωn)|²   (4)

where N is the length of the finite sequence.

In the Welch method [2], the data is first sectioned according to the sequence length. For each section of length m we calculate a modified periodogram [3] by

I_N^(r)(e^jω) = (1/N) |Σ_{n=0}^{N−1} x_r[n]·e^(−jωn)|²   (5)

for each section, and then average the results.

The autocorrelation method [3] extracts the similarity between the signals, which is given by

φ_xx[m] = Σ_{n=0}^{N−1−m} x[n+m]·x[n]   (6)

The sequence x(a) is windowed and autocorrelated, and the PSD is calculated by

Φ_xx(ω) = Σ_{m=0}^{N−1} φ_xx[m]·e^(−jωm)   (7)

IV. CEPSTRUM ANALYSIS

Cepstrum analysis is a frequency-domain analysis of voiced speech segments. The real cepstrum is the inverse Fourier transform of the log magnitude of the Fourier transform of the signal; the algorithm is: signal → FT → abs() → log → IFT. It is similar to spectral analysis of the signal, but in the cepstrum we take the logarithm of the spectrum [10]. This is due to the fact that speech signals are quasi-periodic in nature, and spectrum analysis alone is not very useful for extracting the characteristic features of voice signals. While calculating the cepstrum we have taken speech samples of 25 ms at a sampling frequency fs of 8000 Hz.

V. APPROACH FOR CEPSTRUM ANALYSIS

[Block diagram: Voice signal → Sectioning → LPF → Windowing → FFT → abs → Log → IFFT → Windowing → Normalization → Smooth cepstrum]
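As an illustration of the window definitions in Section II, the following sketch generates the Hamming, Blackman and MBH windows for a given length. The discrete form of the MBH window is obtained here by sampling w(t,α) of equation (1) uniformly on t ∈ [−0.5, 0.5], and the value α = 0.62 is an assumption chosen from the stated range for illustration only.

```python
import numpy as np

def mbh_window(M, alpha=0.62):
    # Sampled Modified Bartlett-Hanning window from eq. (1):
    # w(t, a) = a - (4a - 2)|t| + (1 - a)cos(2*pi*t), |t| <= 0.5
    # (uniform sampling of t over [-0.5, 0.5] is an assumed discretization)
    t = np.linspace(-0.5, 0.5, M)
    return alpha - (4 * alpha - 2) * np.abs(t) + (1 - alpha) * np.cos(2 * np.pi * t)

def blackman_window(M):
    # Blackman window, eq. (2)
    n = np.arange(M)
    return 0.42 - 0.50 * np.cos(2 * np.pi * n / (M - 1)) + 0.08 * np.cos(4 * np.pi * n / (M - 1))

def hamming_window(M):
    # Hamming window, eq. (3)
    n = np.arange(M)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))

if __name__ == "__main__":
    M = 64
    for name, w in [("MBH", mbh_window(M)),
                    ("Blackman", blackman_window(M)),
                    ("Hamming", hamming_window(M))]:
        print(f"{name:8s} peak={w.max():.3f}  mean={w.mean():.3f}")
```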

[TABLE I: Bandwidth of the periodogram method [4]]

This procedure of smoothing the composite log spectrum to obtain the log spectral envelope is referred to as cepstral smoothing.
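A compact sketch of the cepstral pitch pipeline described above (FT → abs → log → IFT, followed by peak picking) is given below. The frame length, sampling rate and pitch search range follow the values quoted in the paper, while the window applied to the frame and the synthetic test frame are assumptions made only for this illustration.

```python
import numpy as np

def cepstral_pitch(frame, fs=8000, fmin=40.0, fmax=600.0):
    """Estimate the pitch of one voiced frame via the real cepstrum:
    FFT -> magnitude -> log -> inverse FFT, then peak picking in the
    quefrency range corresponding to 40-600 Hz."""
    n = np.arange(len(frame))
    # taper the frame (Hamming here; the paper compares several windows)
    frame = frame * (0.54 - 0.46 * np.cos(2 * np.pi * n / (len(frame) - 1)))

    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)    # avoid log(0)
    cepstrum = np.real(np.fft.ifft(log_mag))      # real cepstrum

    # search only quefrencies that map to plausible pitch values
    qmin, qmax = int(fs / fmax), min(int(fs / fmin), len(cepstrum) // 2)
    peak_q = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / peak_q                            # pitch estimate in Hz

if __name__ == "__main__":
    fs, f0 = 8000, 150.0
    t = np.arange(int(0.025 * fs)) / fs           # one 25 ms frame
    # crude stand-in for a voiced frame: a harmonic signal with pitch f0
    frame = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 6))
    print(f"estimated pitch: {cepstral_pitch(frame, fs):.1f} Hz")
```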

VI. SIMULATION AND RESULTS

A sinusoidal signal with two different frequency components and amplitudes, A·sin(2πft) with A = [2, 1.5] and f = [150, 175] Hz, has been taken as the input signal. To estimate the spectrum of a noisy signal, the sinusoidal signal is added to a random sequence generated in MATLAB.

[Fig. 1: Periodogram method]
[Fig. 2: Welch method]
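The following sketch reproduces the flavour of this experiment in Python/SciPy rather than MATLAB: it builds the two-tone signal described above, adds white noise, and compares Welch PSD estimates under the Hamming, Blackman and MBH windows. The sampling rate, duration, noise level and segment length are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy import signal

fs = 1000                         # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)     # assumed 2 s of data
rng = np.random.default_rng(0)
x = 2.0 * np.sin(2 * np.pi * 150 * t) + 1.5 * np.sin(2 * np.pi * 175 * t)
x += 0.5 * rng.standard_normal(len(t))        # additive white noise (assumed level)

def mbh(M, alpha=0.62):
    # sampled Modified Bartlett-Hanning window of eq. (1)
    tt = np.linspace(-0.5, 0.5, M)
    return alpha - (4 * alpha - 2) * np.abs(tt) + (1 - alpha) * np.cos(2 * np.pi * tt)

nperseg = 256
for name, win in [("Hamming", np.hamming(nperseg)),
                  ("Blackman", np.blackman(nperseg)),
                  ("MBH", mbh(nperseg))]:
    f, Pxx = signal.welch(x, fs=fs, window=win, nperseg=nperseg)
    peaks, _ = signal.find_peaks(Pxx)
    top2 = peaks[np.argsort(Pxx[peaks])[-2:]]   # two strongest spectral peaks
    print(f"{name:9s} strongest peaks near: {sorted(np.round(f[top2], 1))} Hz")
```

With the assumed settings, each window should report peaks close to the 150 Hz and 175 Hz tones; the point of the sketch is only to show how the window is swapped into the Welch estimator.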


[TABLE II: Bandwidth of the Welch method [4]]

For the cepstrum analysis we have taken voice samples of two speakers, each of duration 25 ms. The voice samples were then passed through a low-pass filter with a cut-off frequency of 0.15π. The low-pass filter is used here to eliminate high-frequency additive noise, and the cepstrum of the filtered voice samples is then analyzed for pitch detection.

Windowing is done after the IFFT to smooth the cepstrum and detect clear cepstral peaks.

[Fig. 4: Cepstrum analysis]
[Fig. 5: Smooth cepstrum]

VII. CONCLUSION

The aim is to detect and estimate the signal [3]. For the identification of the two different frequency components in the presence of noise, different threshold levels have been taken, starting from -3 dB [4]. In the periodogram method (Fig. 1), thresholds of -3 dB, -6 dB and -15 dB are taken, and it is observed from the results (TABLE I) that at -3 dB the two sinusoidal peaks are not detected, while beyond -15 dB noise is detected. The same is the case with the autocorrelation PSD method: at -3 dB (TABLE III) no peaks are detected, but the signals can be detected down to -20 dB, better than the periodogram method. In the case of the Welch method (TABLE II, Fig. 2), however, detection at -3 dB is possible, i.e. this is the minimum threshold needed to detect the signal. As the Fourier transform of a sinusoidal signal is an impulse, in the Welch method (TABLE II) using the MBH window the side-lobe levels are more suppressed and the main lobe becomes narrower (tending to an impulse, Fig. 2) than with the Hamming and Blackman windows.

The above comparison shows that the Welch method gives better results than the periodogram and autocorrelation methods, and that using the Welch method with the MBH window gives more accurate results than with the Hamming and Blackman windows.

[Fig. 3: Autocorrelation method]

[TABLE III: Bandwidth of the autocorrelation method [4]]


Taking "HELLO" as an iterative voice sample for two speakers, we have estimated the average pitch. It is observed that the error in pitch detection is greater in the cepstrum (Fig. 4) than in the smooth cepstrum (Fig. 5). Considering 0.4 as the threshold level, we see that the periodicity in the smooth cepstrum is more distinguished and hence the pitch can easily be detected. This can further be used in voice recognition systems in order to minimize the false acceptance rate (FAR) and false rejection rate (FRR).

ACKNOWLEDGMENT

The authors acknowledge the valuable guidance of Prof. Rajiv Saxena, who helped to improve the quality of the paper.

REFERENCES

[1] J.K. Gautam, A. Kumar, and R. Saxena, "On the Modified Bartlett-Hanning Window (Family)," IEEE Transactions on Signal Processing, vol. 44, no. 8, pp. 2098-2102, August 1996.

[2] P.D. Welch, "The Use of FFT for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms," IEEE Transactions on Audio and Electroacoustics, vol. AU-15, no. 2, pp. 70-73, June 1967.

[3] D.J. DeFatta, J.G. Lucas, and W.S. Hodgkiss, "Digital Signal Processing."

[4] F.J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proc. IEEE, vol. 66, pp. 51-83, Jan. 1978.

[5] D.G. Childers, D.P. Skinner and R.C. Kemerait, "The Cepstrum: A Guide to Processing," Proceedings of the IEEE, vol. 65, no. 10, pp. 1428-1443, October 1977.

[6] J.W. Picone, "Signal Modeling Techniques in Speech Recognition," Proceedings of the IEEE, vol. 81, no. 9, pp. 1215-1247, September 1993.

[7] "Cepstrum pitch determination," J. Acoust. Soc. Am., vol. 41, no. 2, pp. 293-309, Feb. 1967.

[8] "A Tutorial on Text-Independent Speaker Verification," EURASIP Journal on Applied Signal Processing, 2004:4, pp. 430-451.


[9] http://en.wikipedia.org/wiki/Cepstrum

[10] J.L. Flanagan, "Spectrum Analysis in Speech Coding," IEEE Transactions on Audio and Electroacoustics, vol. AU-15, no. 2, pp. 66-69, June 1967.

[11] http://en.wikipedia.org/wiki/Human_voice


A 3D APPROACH TO FACE-EXPRESSION RECOGNITION

Akshay Gupta, Ananya Misra, Hridesh Verma, Garima Chandel (Member IEEE)
ABES Institute of Technology, Ghaziabad-201009, India
akshaygupta.abes@gmail.com, misraananya@yahoo.in, hridesh.verma@abes.in, garimachandel@rediffmail.com

ABSTRACT: Face recognition has been in research for the last couple of decades. With the advancement of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the problems inherent to 2D face recognition, i.e. sensitivity to illumination conditions and positions of a subject. But 3D face recognition still needs to tackle the problem of the deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is a combination of three subsystems: an expression recognition system, an expressional face recognition system and a neutral face recognition system. A system for the recognition of faces with one type of expression (smile) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

Index Terms- face recognition, databases, neutral face, smiling face, image acquisition.

I. INTRODUCTION

Most face recognition attempts have made use of 2D intensity images as the data format for processing. In spite of the success reached by 2D recognition methods, certain problems still exist. 2D face images not only depend on the face of a subject, but also on imaging factors, such as the environmental illumination and the orientation of the subject. These variable factors can become the cause of the failure of a 2D face recognition system. With the advancement of 3D imaging technology, more attention is given to 3D face recognition, which is robust with respect to illumination variation and posing orientation. In [1], Bowyer et al. provide a survey of 3D face recognition technology. Most 3D face recognition systems treat the 3D face surface as a rigid surface. But actually, the face surface is deformed by the different expressions of the subject, which causes the failure of systems that treat the face as a rigid surface. The involvement of facial expression has become a big challenge in 3D face recognition systems. In this paper, we propose an approach to tackle this problem, through the integration of expression recognition and face recognition in one system.

II. EXPRESSION AND FACE RECOGNITION

From the psychological point of view, it is still not known whether facial expression information aids the recognition of faces by human beings. It is found that people are slower in identifying happy and angry faces than they are in identifying faces with a neutral expression.

The proposed framework involves an initial assessment of the expression of an unknown face, and uses that assessment to assist the progress of its recognition. The incoming 3D range image is processed by an expression recognition system to find the most appropriate expression label for it. The expression labels include the six prototypical expressions of the face, which are happiness, sadness, anger, fear, surprise and disgust, plus the neutral expression. According to the different expressions, a matching face recognition system is then applied. If the expression is recognized as neutral, the incoming 3D range image is directly passed to the neutral expression face recognition system, which uses the features of the probe image to directly match those of the gallery images, which are all neutral, to get the closest match. If the expression found is not neutral, then for each of the six expressions a separate face recognition subsystem should be used. The system will find the right face through modelling the variations of the face features between the neutral face and the face with expression. Figure 1 shows a simplified version of this framework. This simplified diagram only deals with the smiling expression, which is the one most commonly displayed by people publicly.

III. DATA ACQUISITION AND PROCESSING

To test the approach proposed in this model, a database which includes 30 subjects was built. In

this database, we test the different processing of the two most common expressions, i.e. smiling versus neutral. Each subject participated in two sessions of the data acquisition process, which took place on two different days. In each session, two 3D scans were acquired with a Polhemus Fastscan scanner. One was a neutral expression; the other was a happy (smiling) expression. The resulting database contains 60 3D neutral scans and 60 3D smiling scans of 30 subjects.

[Figure 1: Simplified framework of 3D face recognition]

The left image in Figure 2 shows an example of the 3D scans obtained using this scanner; the right image is the 2.5D range image used in the algorithm.

[Figure 2: 3D surface (left) and a mesh plot of the converted range image (right)]

IV. EXPRESSION RECOGNITION

The facial expression is a basic mode of nonverbal communication among people. In [5], Ekman and Friesen proposed six primary emotions, each of which possesses a distinctive content together with a unique facial expression. These six emotions are happiness, sadness, fear, disgust, surprise and anger. Together with the neutral expression, they also form the seven basic prototypical facial expressions.

In our experiment, we aim to recognize social smiles, which were posed by each subject. Smiling is generated by contraction of the zygomatic major muscle. This muscle lifts the corner of the mouth obliquely upwards and laterally, producing a characteristic "smiling expression". So the most distinctive features associated with the smile are the bulging of the cheek muscle and the uplift of the corner of the mouth, as shown in Figure 3.

The following steps are followed to extract six representative features for the smiling expression:

1. An algorithm is developed to obtain the coordinates of five characteristic points in the face range image, as shown in Figure 3. A and D are the extreme points of the base of the nose. B and E are the points defined by the corners of the mouth. C is in the middle of the lower lip.

[Figure 3: Illustration of features of a smiling face versus a neutral face]

2. The first feature is the width of the mouth, BE, normalized by the length of AD. Obviously, while smiling the mouth becomes wider. The first feature is represented by mw.
3. The second feature is the depth of the mouth (the difference between the Z coordinates of points B and C and of points E and C), normalized by the height of the nose, to capture the fact that the smiling expression pulls back the mouth. This second feature is represented by md.
4. The third feature is the uplift of the corners of the mouth compared with the middle of the lower lip, d1 and d2 as shown in the figure, normalized by the difference of the Y coordinates of points A and B and of points D and E, respectively, and represented by lc.
5. The fourth feature is the angle of line AB and line DE with the central vertical profile, represented by ag.
6. The last two features are extracted from the semicircular areas shown, which are defined by using line AB and line DE as diameters. The histograms of the range (Z coordinates) of all the points within these two semicircles are calculated.

Figure 4 shows the histograms for the smiling and the neutral faces of the subject in Figure 3. The two figures in the first row are the histograms of the range

values for the left cheek and right cheek of the neutral face image; the two figures in the second row are the histograms of the range values for the left cheek and right cheek of the smiling face image.

[Figure 4: Histograms of the range of the cheeks (left and right) for the neutral face (top row) and the smiling face (bottom row)]

From the above figures, we can see that the range histograms of the neutral and smiling expressions are different. The smiling face tends to have large values at the high end of the histogram because of the bulge of the cheek muscle. On the other hand, a neutral face has large values at the low end of the histogram distribution. Therefore two features can be obtained from the histogram.

One is called the 'histogram ratio', represented by hr; the other is called the 'histogram maximum', represented by hm:

hr = (h6 + h7 + h8 + h9 + h10) / (h1 + h2 + h3 + h4 + h5)

hm = i, where i = arg max h(i)

After the six features have been extracted, this becomes a general classification problem. Two pattern classification methods are applied to recognize the expression of the incoming faces. The first method used is a linear discriminant analysis (LDA) classifier, which seeks the best set of features to separate the classes. The other method used is a support vector machine (SVM).

V. 3D FACE RECOGNITION

A. Neutral face recognition
In our earlier research work, we found that the central vertical profile and the contour are both discriminant features for every person. Therefore, for neutral face recognition, the results of central vertical profile matching and contour matching are combined. The combination of the two classifiers improves the overall performance significantly. The final similarity score for the probe image is the product of the ranks for each of the two classifiers (based on the central vertical profile and the contour). The image with the smallest score in the gallery is chosen as the matching face for the probe image.

B. Smiling face recognition
For the recognition of smiling faces we have adopted the probabilistic subspace method proposed by B. Moghaddam et al. [8,9]. It is an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using eigen-decomposition. Using the probabilistic subspace method, a multi-class classification problem can be converted into a binary classification problem. In the experiment for smiling face recognition, because of the limited number of subjects (30), the central vertical profile and the contour are not used directly as vectors in a high-dimensional subspace. Instead, they are down-sampled to a dimension of 17. The dimension of the difference-in-feature space is set to 10, which contains approximately 97% of the total variance. The dimension of the difference-from-feature space is 7.

In this case also, the results of central vertical profile matching and contour matching are combined, improving the overall performance. The final similarity score for the probe image is the product of the ranks for each of the two classifiers. The image with the smallest score in the gallery is chosen as the matching face for the probe image.

VI. EXPERIMENTS AND RESULTS

One gallery and three probe databases were used for evaluation. The gallery database has 30 neutral faces, one for each subject, recorded in the first data acquisition session. Three probe sets are formed as follows:
Probe set 1: 30 neutral faces acquired in the second session.
Probe set 2: 30 smiling faces acquired in the second session.
Probe set 3: 60 faces (probe set 1 and probe set 2).
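The two histogram-based features hr and hm defined above are simple to compute. The sketch below shows one way to do it; the fixed 10-bin histogram over a normalized Z range and the synthetic "cheek" samples are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def histogram_features(z_values, n_bins=10):
    """Compute the 'histogram ratio' hr and 'histogram maximum' hm from the
    range (Z) values of the points inside one cheek semicircle:
        hr = (h6+...+h10) / (h1+...+h5),   hm = argmax_i h(i)
    A fixed 10-bin histogram over an assumed normalized Z range [0, 1] is used."""
    h, _ = np.histogram(z_values, bins=np.linspace(0.0, 1.0, n_bins + 1))
    low, high = h[:n_bins // 2].sum(), h[n_bins // 2:].sum()
    hr = high / max(low, 1)           # guard against an empty low half
    hm = int(np.argmax(h)) + 1        # 1-based bin index
    return hr, hm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy stand-ins: a "smiling" cheek bulges (larger Z), a neutral one does not
    smiling_cheek = np.clip(rng.normal(0.7, 0.1, 500), 0, 1)
    neutral_cheek = np.clip(rng.normal(0.3, 0.1, 500), 0, 1)
    print("smiling:", histogram_features(smiling_cheek))
    print("neutral:", histogram_features(neutral_cheek))
```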

Experiment 1: Testing the expression recognition module

The leave-one-out cross validation method is used to test the expression recognition classifier. Each time, the faces collected from 29 subjects in both data acquisition sessions are used to train the classifier, and the four faces of the remaining subject collected in both sessions are used to test the classifier. Two classifiers are used: one is the linear discriminant classifier, the other a support vector machine classifier. LDA tries to find the subspace that best discriminates the different classes by maximizing the between-class scatter matrix while minimizing the within-class scatter matrix in the projective subspace. The support vector machine is a relatively new technology for classification. It relies on pre-processing the data to represent patterns in a high dimension, typically much higher than the original feature space. With an appropriate nonlinear mapping to a sufficiently high dimension, data from two categories can always be separated by a hyperplane.

Table 1: Expression recognition results
Method                      | LDA  | SVM
Expression recognition rate | 90.8 | 92.5

Experiment 2: Testing the neutral and smiling recognition modules separately

In the first two sub-experiments, probe faces are directly fed to the neutral face recognition module. In the third sub-experiment, leave-one-out cross validation is used to verify the performance of the smiling face recognition module.
a. Neutral face recognition: probe set 1 (neutral face recognition module used).
b. Neutral face recognition: probe set 2 (neutral face recognition module used).
c. Smiling face recognition: probe set 2 (smiling face recognition module used).

From Figure 5, it can be seen that when the incoming faces are all neutral, the algorithm which treats all the faces as neutral achieves a very high recognition rate.

[Figure 5: Results of Experiment 2 (three sub-experiments): rank-1 and rank-3 recognition rates for cases a, b, c]

On the other hand, if the incoming faces are smiling, the neutral face recognition algorithm does not perform well; only a 57% rank-one recognition rate is obtained. (Rank one means only the face which scores highest is selected from the gallery. The rank-one recognition rate is the ratio between the number of faces correctly recognized and the number of probe faces. Rank three means the three highest-scored faces are selected instead of one.) In contrast, when the smiling face recognition algorithm is used to deal with smiling faces, the recognition rate can be as high as 80%.

Experiment 3: Testing a practical scenario

These experiments emulate a realistic situation in which a mixture of neutral and smiling faces (probe set 3) must be recognized. Sub-experiment 1 investigates the performance obtained if the expression recognition front end is bypassed and the recognition of all the probe faces is attempted with the neutral face recognition module alone. The last two sub-experiments implement the full framework shown in Figure 1. In 3.2 the expression recognition is performed with the linear discriminant classifier, while in 3.3 it is implemented through the support vector machine approach.
a. Neutral face recognition module used alone: probe set 3 is used.
b. Integrated expression and face recognition: probe set 3 is used (linear discriminant classifier for expression recognition).
c. Integrated expression and face recognition: probe set 3 is used (support vector machine for expression recognition).

It can be seen in Figure 6 that if the incoming faces include both neutral and smiling faces, the recognition rate can be improved by about 10 percent by using the integrated framework proposed here.

CONCLUSION

The work reported in this paper represents an attempt to acknowledge and account for the presence of expression in 3D face images, towards their improved identification. The method introduced here is computationally efficient. Furthermore, this method also yields as a secondary result the information of the expression found in the faces. Based on these findings we believe that the acknowledgement of the impact of expression on 3D face recognition and the development of systems that

account for it, such as the framework introduced here, will be key to future enhancements in the field of 3D automatic face recognition.

[Figure 6: Results of Experiment 3 (three sub-experiments): rank-1 and rank-3 recognition rates for cases a, b, c]

REFERENCES

[1] K. Bowyer, K. Chang, and P. Flynn, "A Survey of Approaches to 3D and Multi-Modal 3D+2D Face Recognition," IEEE Intl. Conf. on Pattern Recognition, 2004.

[2] R. Chellappa, C. Wilson, and S. Sirohey, "Human and Machine Recognition of Faces: A Survey," Proceedings of the IEEE, 1995, 83(5): pp. 705-740.

[3] www.polhemus.com.

[4] C. Li, A. Barreto, J. Zhai and C. Chin, "Exploring Face Recognition Using 3D Profiles and Contours," IEEE SoutheastCon 2005, Fort Lauderdale.

[5] P. Ekman, W. Friesen, "Constants across cultures in the face and emotion," Journal of Personality and Social Psychology, 1971, 17(2): pp. 124-129.

[6] Y. Hu, D. Jiang, S. Yan, L. Zhang, and H. Zhang, "Automatic 3D Reconstruction for Face Recognition," presented at the International Conference on Automatic Face and Gesture Recognition, Seoul, 2004.

[7] "Notre Dame 3D Face Database," http://www.nd.edu/~cvrl/.

[8] B. Moghaddam, A. Pentland, "Probabilistic Visual Learning for Object Detection," International Conference on Computer Vision (ICCV '95), 1995.

[9] B. Moghaddam, A. Pentland, "Probabilistic Visual Learning for Object Representation," IEEE Trans. on Pattern Analysis and Machine Intelligence, 1997, 19(7): pp. 696-710.

Performance Evaluation of Signal Selective DOA Tracking for Wideband Cyclostationary Sources

SANDEEP SANTOSH (1), O.P. SAHU (2), MONIKA AGGARWAL (3)
(1) Asst. Prof., Department of Electronics and Communication Engineering, National Institute of Technology, Kurukshetra, INDIA
(2) Associate Prof., Department of Electronics and Communication Engineering, National Institute of Technology, Kurukshetra, INDIA
(3) Associate Prof., Centre For Applied Research in Electronics (CARE), Indian Institute of Technology, New Delhi, INDIA
profsandeepkkr@gmail.com  http://www.nitkkr.ac.in

Abstract

In this paper, we present a new signal-selective direction of arrival (DOA) tracking algorithm for moving sources emitting narrowband or wideband cyclostationary signals. Here, the DOAs of the sources are updated recursively based on the most current array output, in a way that no data association is needed. The interference and noise are suppressed by exploiting cyclostationarity; only the sources of interest are tracked. The tracking performance of this algorithm can be improved via the Kalman filter.

Index Terms – Array signal processing, cyclostationarity, direction of arrival tracking.

1. Introduction

Direction of arrival (DOA) tracking of multiple moving sources has been a central research topic in signal processing for decades, due to its wide applications, such as surveillance in military applications and air traffic control in civilian applications. One obvious method of DOA tracking is to first find the DOAs by an existing DOA estimation algorithm for each time frame, on the assumption that the directions do not change within each time frame, and then to associate each of the newly estimated DOAs with the previous estimates in order to keep tracking the DOA changes and source movement. A major problem with this method is that data association, or correctly assigning the estimated DOAs at each time frame to their corresponding previous estimates to form DOA tracks, requires extensive computations. Data association involves searching over I possible combinations between the estimated DOAs and the targets, where I is supposed to be the number of DOAs [1]. Some DOA tracking algorithms which do not require data association have been proposed, such as [1]-[5]. The authors of [1] obtain the current DOA estimates of the sources by minimizing the norm of an error matrix function, based on a covariance matrix related to the array output at the current time frame. The authors of [2] track the source movement by estimating DOA changes for each time frame, rather than new DOAs,


through solving a least squares (LS) problem. The authors of [3] improve the performance of [2] by employing a source movement model and refining the updated DOAs through a Kalman filter. The authors of [4] update the DOA estimates of each time frame by solving a maximum-likelihood (ML) problem of the most current array output. This approach also employs a source movement model and refines the DOA estimates through a Kalman filter as in [3]. The authors of [5] introduce multiple target states (MTS) to describe the target motion, and the DOA tracking is implemented through updating the MTS by maximizing the likelihood function of the array output. Whether by the LS or ML method, whether introducing MTS or other models to describe the target motion, and whether using a Kalman filter or not, all these algorithms implement the DOA tracking in a way that the order of the estimated DOAs for different times or time frames is maintained, and thus data association is avoided. Therefore, they are more computationally efficient than the methods requiring data association.

All the above methods are applicable to narrowband signals and would fail for wideband signals. Wideband signals are becoming more and more common nowadays. Therefore, research on developing DOA tracking algorithms that work for wideband sources has been carried out [6]-[8]. The authors of [6] use focusing matrices to align the steering vectors of different frequency bins to the carrier frequency, so that wideband signals can be treated the same way as narrowband signals when estimating the DOAs by multiple signal classification (MUSIC) [9]. When new data arrive, [6] first updates the focusing matrices and then applies MUSIC to obtain new estimated DOAs. In [7], the authors estimate the DOAs of each time frame by an ML approach. For multiple targets, both [6] and [7] require data association. In [7], the data association is done by a Bayes classifier, which is computationally expensive. The authors of [8] develop two computationally simple methods for DOA tracking based on the recursive expectation and maximization (REM) algorithm. These two methods apply to both narrowband and wideband signals. From [8], the first method does not work properly when two DOAs are crossing, and the second method requires a linear DOA motion model, restricting the DOA tracks to straight lines.

Recently, a statistical property, cyclostationarity, which many types of man-made communication signals such as BPSK, FSK and AM exhibit, has been exploited in DOA estimation [10]-[12]. By exploiting cyclostationarity, interference and noise that do not share the same cycle frequency as the desired signals, or that do not exhibit cyclostationarity, can be suppressed; thus the performance of DOA estimation is improved when the DOA of the interference is close to the DOA of the desired signal. Cyclostationarity can likewise be exploited to improve the performance of DOA tracking. All the DOA tracking algorithms discussed previously [1]-[7] assume that the signals are stationary but not cyclostationary. Here, a new signal-selective DOA tracking algorithm for multiple wideband moving sources that exploits the cyclostationarity of the signals is proposed. In this algorithm, the signals emitted by the moving sources can be either narrowband or wideband cyclostationary. Our algorithm assumes that the DOAs in each time frame are fixed and tracks the DOA changes from frame to frame by exploiting the difference of the averaged cyclic cross-correlation of the array output. DOA tracking is initiated by applying once a wideband DOA estimation


method: averaged cyclic MUSIC (ACM) [12]. Then, the DOA changes for each time frame are estimated by finding the minimum solution to an LS cost function related to the averaged cyclic cross-correlation of the array output. Similar to [12], averaging the cyclic correlation enables wideband application. In order to avoid inconsistent solutions for the DOA changes when the DOAs are crossing, the proposed cost function also includes a regularization term that reflects the assumption that the sources are moving at constant speeds. Similar to [1]-[5], our signal-selective DOA tracking algorithm does not require data association. The incorporation of a Kalman filter into our signal-selective DOA tracking algorithm is also presented. Via the Kalman filter, the tracking performance of our algorithm is improved. The effectiveness of the proposed algorithm is demonstrated by simulations.

2. Cyclostationarity and Data Model

A. Cyclostationarity

Given a signal s(t), the cyclic correlation is defined as [15]

rαss(τ) = ‹s(t+τ/2)·s*(t−τ/2)·e^(−j2παt)›   (1)

where (.)* denotes complex conjugation and ‹.› denotes time averaging. s(t) is said to be cyclostationary if rαss(τ) is not zero at some delay τ and some cycle frequency α. Many man-made communication signals exhibit cyclostationarity due to modulation, periodic gating, etc. They usually have cycle frequencies at twice the carrier frequency, at multiples of the baud rate, or at combinations of these. For a given signal vector z(t), we can calculate the cyclic correlation matrix as [10]

Rαzz(τ) = ‹z(t+τ/2)·z^H(t−τ/2)·e^(−j2παt)›   (2)

where [.]^H denotes the Hermitian transpose.

B. Data Model

Consider the tracking problem for a uniform linear array with N identical elements. I moving sources are assumed to generate I signals with cycle frequency α impinging on the array. These signals are considered the signals of interest (SOI). Other signals from other moving sources that do not exhibit cyclostationarity or have different cycle frequencies are considered interference. Taking the first antenna as reference, the signal received by the nth antenna in the array is

zn(t) = Σ_{i=1}^{I} si(t + (n−1)Δi(t))·e^(j2πfo(n−1)Δi(t)) + ηn(t)   (3)

where si(t) is the complex baseband signal of the ith signal of interest (SOI) induced at the first antenna, fo is the carrier frequency and Δi(t) = d·sinθi(t)/c is the time delay between two adjacent antennas. Here, θi(t) is the impinging direction of the ith SOI at time t, d is the intersensor spacing of the uniform linear array and c is the propagation speed. Note that ηn(t) has two components: the interference and the noise induced at the nth antenna. Interference and noise are assumed to be cyclically uncorrelated with the SOI. Therefore, ηn(t) is neglected.

Now, assume that the DOAs of the sources change little during a time frame of length T, i.e. θi(t) or Δi(t) are constant during the kth time frame [(k−1)T, kT], where k = 1, ..., K. The total tracking time is assumed to be KT seconds. We have Δi(k) = d·sinθi(k)/c for the kth time frame. Our tracking algorithm will


deal with the data samples collected during a time frame. The cross-cyclic correlation of the source signals over the kth frame is

rαsisj(τ,k) = ∫k si(t+τ/2)·sj*(t−τ/2)·e^(−j2παt) dt   (4)

Now let us define the following vectors and matrices:

s(t) = [s1(t), ..., sI(t)]^T   (5)

z(t) = [z1(t), ..., zN(t)]^T   (6)

A(f,k) = [a1(f,k), ..., aI(f,k)]   (7)

ai(f,k) = [1, e^(j2πfΔi(k)), ..., e^(j2πf(N−1)Δi(k))]^T   (8)

where [.]^T denotes the matrix transpose, s(t) is the source signal vector, z(t) is the received signal vector, A(f,k) is the steering matrix evaluated at frequency f for the kth time frame, and ai(f,k) is the steering vector for the ith SOI evaluated at f for the kth time frame.

3. LS Tracking Algorithm

We first evaluate the averaged cross-cyclic correlations between the signals received at the first antenna and the other antennas during the kth time frame. These correlations will be simplified as functions of the signal directions in this time frame. Based on these functions, an LS method for tracking the directions of the sources will be discussed.

A. Averaged Cross-Cyclic Correlation and Initial DOA Estimation

For the kth time frame, calculate the cross-cyclic correlation of z1(t) and zn(t), where zn(t) is the signal received at the nth antenna, for n = 2, ..., N. Using (3) and (4), we can obtain N−1 cross-cyclic correlations estimated at the kth time frame:

rαz1zn(τ,k) = ∫k z1(t+τ/2)·zn*(t−τ/2)·e^(−j2παt) dt
            = Σ_{i=1}^{I} [ Σ_{p=1}^{I} rαspsi(τ−(n−1)Δi(k), k) ]·e^(−j2π(fo−α/2)(n−1)Δi(k))   (9)

Since the evaluation of the cyclic correlation retains only the SOI, interference and noise are ignored in (9). To eliminate the dependence of rαspsi(τ,k) on Δi(k), or on τ, we further evaluate rαz1zn(τ,k) at different time delays τ and average them, obtaining an averaged cross-cyclic correlation between z1(t) and zn(t) at the kth time frame:

‹rαz1zn(k)›τ = Σ_{τ=τ1}^{τ2} rαz1zn(τ,k)
             = Σ_{i=1}^{I} [ Σ_{p=1}^{I} ‹rαspsi(k)›τ ]·e^(−j2π(fo−α/2)(n−1)Δi(k))   (10)

‹rαspsi(k)›τ = Σ_{τ=τ1}^{τ2} rαspsi(τ−(n−1)Δi(k), k)   (11)

Thus, for source signals si(t), i = 1, ..., I, which normally have certain time-invariant characteristics, if the duration of a time frame is long enough, then ‹rαspsi(k)›τ can be assumed to be independent of k. In our simulations, a 0.5 s time frame, or 3200 snapshots of data samples, gives good results. We drop k and define

Ei = Σ_{p=1}^{I} ‹rαspsi›τ   (12)

In addition, define

gn(θ) = e^(−j2π(fo−α/2)(n−1)d·sinθ/c)   (13)

Then, (10) can be written as

‹rαz1zn(k)›τ = Σ_{i=1}^{I} Ei·gn(θi(k))   (14)

To derive our algorithm we need to know Ei. First, we apply the signal-selective DOA


estimation algorithm ACM [12] to estimate the initial DOAs. The number of sources emitting SOI is assumed to be known or estimated by the minimum description length (MDL) criterion. ACM works for both narrowband and wideband signals. A summary of this algorithm is given below:

1. Estimate the cyclic correlation matrix Rαzz(τ,1) during the first time frame.
2. Average Rαzz(τ,1) over τ.
3. Apply the singular value decomposition (SVD) to ‹Rαzz(1)›τ to estimate all DOAs of the SOI for the first time frame, i.e. θi(1), where i = 1, ..., I.
4. Obtain Δi(1) = d·sinθi(1)/c. We have

‹Rαzz(1)›τ = A(fo+α/2, 1)·‹Rαss(1)›τ·A^H(fo−α/2, 1)   (15)

‹Rαss(1)›τ = A†(fo+α/2, 1)·‹Rαzz(1)›τ·[A^H(fo−α/2, 1)]†   (16)

B. Recursive Direction Updating

The tracking algorithm can be developed as follows:

θi(k) = θi(k−1) + θi~(k)   (17)

gn(θi(k)) ≈ gn(θi(k−1)) + ∂gn(θ)/∂θ|θ=θi(k−1) · θi~(k)   (18)

∂gn(θ)/∂θ|θ=θi(k−1) = [−j2π(fo−α/2)(n−1)d·cosθi(k−1)/c]·gn(θi(k−1))   (19)

‹rαz1zn(k)›τ = ‹rαz1zn(k−1)›τ + Σ_{i=1}^{I} cn,i(k−1)·θi~(k)   (20)

cn,i(k−1) = Ei·∂gn(θ)/∂θ|θ=θi(k−1)   (21)

rn(k) = ‹rαz1zn(k)›τ − ‹rαz1zn(k−1)›τ   (22)

From (20), rn(k) can be written as

rn(k) = [cn,1(k−1), ..., cn,I(k−1)]·Θ~(k)   (23)

Θ~(k) = [θ1~(k), ..., θI~(k)]^T   (24)

Stacking rn(k) for n = 2, ..., N, we obtain

r(k) = [r2(k), ..., rN(k)]^T = C(k−1)·Θ~(k)   (25)

The DOA changes Θ~(k) can be estimated by solving the LS problem of

r(k) = C^(k−1)·Θ~(k)   (26)

Θ^(k) = Θ^(k−1) + Θ~(k)   (27)

Θ~(k) = Θ~(k−1)   (28)

Now, define a revised LS cost function

f(Θ~(k)) = [C^(k−1)Θ~(k) − r(k)]^H [C^(k−1)Θ~(k) − r(k)] + [Θ~(k) − Θ~(k−1)]^H Λ(k) [Θ~(k) − Θ~(k−1)]   (29)

Θ~(k) = [C^H(k−1)C^(k−1) + Λ(k)]^(−1) [C^H(k−1)r(k) + Λ(k)Θ~(k−1)]   (30)

The computational complexity of the LS tracking algorithm is O(NI), O(N·Ns·Na), O(I²N) and O(I).

4. Kalman Filter

In this section, we introduce a source movement model and apply a Kalman filter to track the DOAs. The DOAs estimated by the LS method are viewed as measurements of the DOAs in the Kalman filter model. The current DOAs of the sources are first predicted from the previous DOAs using the source movement model. Then, the predicted DOAs are refined by the Kalman
predicted DOAs are refined by the Kalman

SIP0112-9
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

filter. Our simulation shows that Kalman 2. Obtain θi^(k) by LS tracking method.
filter refinement further improves DOA Use θi^(k-1│k-1) in place of θi^(k-1).
tracking accuracy and reduces the burden of 3. Obtain Qi^(k-1) and σ2yi (k) from (39) &
selecting optimum ۸(k) in(30). (40).Use Qi^(k-1)as an approximation of
Qi^(k).
4. Calculate Pi^(k│k-1)= F Pi^(k-1│k-1)FH
Define the state of the ith(i=1,…,I) source at
+ Qi^(k).
the kth time frame as,
5. Calculate the Kalman filter gain G(k)=
Pi^(k│k-1) HH/R(k) where
xi(k) = [ θi(k) ]
R(k)= H Pi^(k│k-1) HH + σ2yi (k).
[ θi˙(k) ]
6. Update the state for the kth time frame
[θi˙˙(k)] (31)
by xi^(k│k)= xi^(k│k-1)+ G(k)( θi^(k) - H
xi^(k│k-1)).
xi(k)=Fxi(k-1)+wi(k) (32)
7. Take the first element of xi^(k│k ) as
the refined DOA estimate for the kth time
yi(k)=Hxi(k)+vi(k) (33)
frame, θi^(k│k) .
8. Prepare the next recursion by calculating
F = [ 1 T T2/2 ]
Pi^(k│k)= Pi^(k│k-1) – G(k)H Pi^(k│k-1).
[ 0 1T ]
[001] (34)
4. Simulations
E[wi(j) wiH (k)] = { Qi(k) , j=k }
{ 0, j ≠ k } for
Tracking performance versus SNR.
i=1,…,I (35)
In this simulation, three sources are assumed
to emit three wideband BPSK signals with
H=[100] (36)
raised cosine pulse shaping. Two of them
are SOI with same baud rate 20 MHz and a
ei(k)=xi^(k│k)-Fxi^(k-1│k-1) (37)
same carrier frequency 100 MHz. The other
is interference with a baud rate 6 MHz and a
εi(k)=θi^(k)–Hxi^(k│k-1) (38)
carrier frequency 80 MHz. The cycle
frequency of SOI is 20 MHz, which is
Since both process noise and measurement
assumed to be known. The two SOI are
noise are assumed to be zero mean ,their
coherent. A ULA with 7 antennas with
variance can be estimated by,
equal spacing of c/(2fo+α)= 1.36 m is used.
The subarray size is 6 for SS during
Qi^(k)=1/LΣj=k-L+1kei(j)eiH(j) (39)
initialization .The duration of each time
frame is 0.5s during which 3200 snapshots
σ2yi(k)=1/LΣj=k-L+1kεi(j)εi*(j) (40)
of data samples are obtained. The SNR of
one SOI is 1 db lower than other. The SNR
The steps to estimate DOAs for the kth time
of the interference is 5 db lower than the
frame are as follows:
higher powered SOI. To see how the
performances of the LS method and the
1. Obtain the predicted state by xi^(k│k-1)
Kalman filter method change with SNR, we
= F xi^(k-1│k-1).


vary the SNR of the high-powered SOI from -5 dB to 15 dB.

Generally, source crossing poses difficulty for a tracking algorithm. The tracking algorithm fails if the estimation error is so large that the tracks of two crossing sources are switched and lost, as shown in Fig. 1. We define the failure rate as the ratio of the number of failed trials to the total number of trials, which is 40 in our estimation. Fig. 2 shows the failure rates of the LS algorithm and the Kalman filter algorithm with respect to SNR. We can see that with the usage of a Kalman filter the failure rate is lower than with the LS method, and at and above 5 dB SNR the Kalman filter method does not fail at all.

In this simulation, we also plot the rms error of the estimated DOAs in Fig. 3. Consider a specific value of SNR: we can calculate the mean squared error of the estimated DOAs for each trial of the LS algorithm or the Kalman filter algorithm. Then, the root of the mean of the mse obtained over all 40 trials is what we call the rms of the estimated DOAs at this SNR. We should note that if the algorithm fails to track the sources in one trial, the mse for that trial will be large, and it is excluded from calculating the final rms. If we simply ignored this by not considering the failed trial, the final rms would tend to be smaller than the true value, not reflecting the tracking failure. From Fig. 3 we see that the Kalman filter method performs better than the LS method.

Comparison of the estimated tracks with the real tracks of the sources

In this simulation, we look at how well the LS method and the Kalman filter method track the targets. The signals and settings are the same as in the first simulation, except that the SNRs of both SOI are the same and there is one more interference with a baud rate of 6 MHz and a carrier frequency of 100 MHz, whose SNR is also 5 dB lower than that of the SOI.

We first assume that the SNR of the SOI is 5 dB and run both the LS method and the Kalman filter method 40 times. We then assume that the SNR of the SOI is 15 dB and run these two tracking methods 40 times again. We plot the ensemble averages of the DOAs estimated by the LS method when the SNR is 5 dB in Fig. 4. The three other plots, for the mean of the DOAs estimated by the LS method when the SNR is 15 dB and by the Kalman filter method when the SNR is 5 dB and 15 dB, are similar and hence omitted. The comparison of the rms errors of the DOAs estimated by our two algorithms is illustrated in Fig. 5 and Fig. 6 for one SOI. It can be seen from these plots that both methods track the DOAs of the SOI well, with the Kalman filter method outperforming the LS method in accuracy.
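To make the recursion of Section 4 concrete, here is a minimal per-source sketch of the constant-acceleration state model of equations (31)-(34) combined with the prediction/update steps 1-8. The initial state, initial covariance and constant noise levels are assumptions of this illustration (used in place of the running estimates of equations (39)-(40)), and the per-frame LS measurement θ̂i(k) is passed in as an external input rather than computed here.

```python
import numpy as np

def make_F(T):
    # State transition for [theta, theta_dot, theta_ddot], eq. (34)
    return np.array([[1.0, T, T * T / 2.0],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0]])      # measurement matrix, eq. (36)

def kalman_track(theta_meas, T=0.5, q=1e-4, r=0.25):
    """Refine a sequence of per-frame DOA measurements (e.g. from the LS
    tracker) with the Kalman recursion of steps 1-8.  q and r are assumed
    constant process/measurement noise levels."""
    F = make_F(T)
    x = np.array([theta_meas[0], 0.0, 0.0])   # initial state (assumption)
    P = np.eye(3)                              # initial covariance (assumption)
    Q = q * np.eye(3)
    refined = []
    for y in theta_meas:
        # steps 1 and 4: predict state and covariance
        x = F @ x
        P = F @ P @ F.T + Q
        # steps 5 and 6: gain and update with the new measurement
        S = (H @ P @ H.T).item() + r
        G = (P @ H.T) / S
        x = x + (G * (y - (H @ x).item())).ravel()
        # step 8: covariance update
        P = P - G @ H @ P
        # step 7: refined DOA is the first state element
        refined.append(x[0])
    return np.array(refined)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = np.arange(40)
    true_doa = -10.0 + 0.4 * k                 # slowly moving source, degrees
    noisy = true_doa + rng.normal(0, 0.5, k.size)
    smooth = kalman_track(noisy)
    print("raw rms error    :", round(float(np.sqrt(np.mean((noisy - true_doa) ** 2))), 3))
    print("refined rms error:", round(float(np.sqrt(np.mean((smooth - true_doa) ** 2))), 3))
```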


recursive EM algorithms”, EURASIP J


5. References applied signal processing vol 2005,no1,pp
50-60,2005.
[1] C.R.Sastry,E.W.Kamen “An efficient
algorithm for tracking the angles of arrival [9] R.O. Schimdt, “ Multiple emitter
of moving targets” IEEE Trans. In signal location and signal parameter estimation”,
processing vol 39,no1,pp242-246,Jan 1991. IEEE Trans Antennas Propagation,volap-
34,no 3, pp 276-280,March 1986.
[2] C.K.Sword,M.Simaan and E.W.Kamen,
“ Multiple target angle tracking using sensor [10] W.A.Gardner , “ Simplification of
array outputs”, IEEE Trans in Aerospace MUSIC and ESPIRIT by exploitation of
Electronic Sys.,vol26,no2,pp 367- cyclostationarity”,Proc IEEE vol 76, no 7 pp
373,March 1990. 845-847, July 1988.

[3] S.B.Park,C.S.Ryu,and K.K.Lee, “


Multiple target angle tracking algorithm
using predicted angles”,IEEE Trans in
Aerospace Electronic Sys.,vol 30 ,no 2,
pp643-648,April 1994.

[4] C.R.Rao,C.R.Sastry and B.Zhou , “


Tracking the direction of arrival of multiple
moving targets”, IEEE Trans in signal
processing vol 42, no.5,pp1133-1144,May
1994.

[5] Y.Zhou,P.C.Yip and H.Leung, “


Tracking the direction of arrival of multiple
moving targets by passive
arrays:algorithms”,IEEE Trans in signal
processing vol 47, no10, pp 2655-2666, Oct
1999.

[6] M.Cho and J.Chun, “ Updating the


focusing matrix for direction of arrival
estimation of moving sources”,in Proc Nat
Aero Electron Confer.Oct 2000, pp 723-727.

[7] A.Sathish and R.L.Kashyap , “


Wideband multiple target tracking”, in proc
IEEE Int Conf Acoustic,Speech,Signal
processing, April 2004,vol4,pp517-520.

[8] P.J.Chung,J.F.Bohme and A.O.Hero,


“Tracking of multiple moving sources using

SIP0112-9
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

SIP0112-9
Bartlett Windowed fast computation of
discrete trigonometric transforms for real-time
data processing
Abhijit Khare, Shubham Varshney, Vikram Karwal
{khareabhijit14, shubham7502909dece}@gmail.com, vikram.karwal@jiit.ac.in

Department of Electronics and Communication


Jaypee Institute of Information Technology, Noida, India

Abstract- Discrete trigonometric transforms (DTT) their powerful bandwidth reduction capability the
namely discrete cosine transform (DCT) and discrete DCT and DST algorithms are widely used for data
sine transform (DST) are widely used transforms in compression. DCT transforms a signal or image
image compression applications. Numerous fast from the spatial domain to the frequency domain,
algorithms for rapid processing of real time data exist where much of the energy lies in the lower
in theory. Windowing is a technique where a portion frequencies coefficients like Discrete Fourier
of the signal is extracted and its transform is Transform (DFT). The main advantage of the DCT
computed. These algorithms form a class of fast
over the DFT is that DCT involves only real
update transform that uses less computation as
compared to computing transform using conventional multiplications. The DCT does a better job of
definition. Different windows such as rectangular, concentrating energy into lower order coefficients
split-triangular and sinusoidal windows have been than the DFT for image data. The DCT is adopted
used in theory to sample the real time sequence and as a standard technique for image compression in
their performance compared. In this research fast JPEG and MPEG standards because of its energy
update algorithm are analytically derived that are compaction property.
capable of windowing the real time data in presence
of Bartlett window. Initially simultaneous update A portion of input signal is extracted using
algorithms are analytically derived and thereafter
windowing [6] and the transform of the windowed
algorithms capable of independently updating DCT
and DST are derived i.e. while computing the DCT contents is computed. These classes of algorithms
updated coefficients no DST coefficients are required already exist in theory and are known as fast update
and vice-versa. The analytically derived algorithms algorithms [2]. Different windows such as
are implemented in C language to test their rectangular, split-triangular, Hamming, Hanning
correctness. and Blackman windows have been used earlier to
sample the real time data and their performance
Keywords— Discrete trigonometric transform, compared [6]. In this paper we have developed
window, fast update
update algorithm in the presence of Bartlett
I. INTRODUCTION window. Initially the algorithms are derived for
simultaneous update of DCT/ DST coefficients, i.e.
we require to compute both the DCT and the DST
In the area of signal processing, transform coefficients to find the updated DCT/ DST
coding [8] provides an efficient way for coefficients. Thereafter algorithms are derived that
transmitting and storing data. The input data establish independence [1] between the DCT and
sequence is divided into suitably sized blocks and DST coefficients. These algorithms lead to easier
thereafter reversible linear transforms are implementation of the update transform as we do
performed. The transformed sequence has much not need to compute both the coefficients
lower degree of redundancy than in the original simultaneously.
signal. Karhunen-Loéve Transform (KLT) [3] has
emerged as a benchmark for Markov-1 type Section I lists the introduction of Discrete
signals. The Discrete Cosine Transform (DCT) trigonometric transforms, windowed update
[4,7] and the Discrete Sine Transform (DST) algorithms and their advantages. Section II lists the
perform quite closely to the ideal KLT and have Bartlett window and DTT definitions.
emerged as the practical alternatives to the ideal Simultaneous Bartlett windowed update algorithms
KLT. are also derived in Section II. In Section III
independent update algorithms are derived. Section
The DCT and DST have wide applications in IV includes the complexity calculations of the
signal and image processing for the purposes of derived algorithms and section V concludes the
pattern recognition, data compression, paper.
communication and several other areas [5]. Due to
II. DCT/DST TYPE-II WINDOWED 1
SIMULTANEOUS UPDATE ALGORITHMS
USING BARTLETT WINDOW

A. Basic algorithms for DCT and DST


The DCT of a signal f(x) of length N is defined
by
0 N/2 X
𝑁−1
2 2𝑥 + 1 𝑘𝜋 Fig. 1 Bartlett Window w(x)
𝐶 𝑘 = 𝑃 𝑓 𝑥 𝑐𝑜𝑠 (1)
𝑁 𝑘 2𝑁
𝑥=0 Defining 𝑚(𝑥) = 𝑤(𝑥) − 𝑤(𝑥 + 1), above equation
can be written as:
for k=0,1....,N-1
where, 𝑓𝑤 𝑛𝑒𝑤 (𝑥) = 𝑓 𝑥 + 1 𝑤 𝑥 + 1 + 𝑓 𝑥 + 1 𝑚(𝑥)
1
𝑖𝑓 𝑘 𝑚𝑜𝑑 𝑁 = 0
2
𝑃𝑘 = Therefore,
1 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒 𝑓𝑤 𝑥 = 𝑓𝑤 𝑥 + 1 + 𝑓𝑚 𝑥 (6)
𝑛𝑒𝑤 𝑛𝑒𝑤

The DST of the same can be written as: 𝑓𝑜𝑟 𝑥 = 0, … … , 𝑁 − 1


𝑁−1
2 2𝑥 + 1 𝑘𝜋 Where,
𝑆 𝑘 = 𝑃 𝑓 𝑥 𝑠𝑖𝑛 (2)
𝑁 𝑘 2𝑁
𝑥=0 𝑓𝑚 𝑛𝑒𝑤 𝑥 = 𝑓 𝑥 + 1 𝑚(𝑥)
for k=1,2,...,N
and
B. Simultaneous Update Algorithm
𝑓𝑤 𝑥 + 1 = 𝑓 𝑥 + 1 𝑤(𝑥 + 1)
This section lists the update equations for
Bartlett windowed update DCT/DST. The 𝑓𝑚 𝑛𝑒𝑤 𝑥 can be rewritten as:
windowed update algorithms for DCT/DST type-II
𝑓𝑚 (𝑛𝑒𝑤 ) (𝑥) = 𝑓(𝑥 + 1) 𝑚 𝑥 + 1 − 𝑚 𝑥 + 1 + 𝑚 𝑥
[9] are derived as it is the most often used
transform. For the input signal f(x), x=0,1,.... ..,N-1,
and the Bartlett window w(x) of length N with tail- i.e.,
length N/2 given by equation (4), the windowed 𝑓𝑚 𝑥 = 𝑓 𝑥+1 𝑚 𝑥+1
𝑛𝑒𝑤
data is given by
+𝑓 𝑥 + 1 𝑚 𝑥 − 𝑚 𝑥 + 1 (7)
𝑓𝑤 𝑥 = 𝑓 𝑥 𝑤 𝑥 (3)
Now,
Bartlett (or triangular) window of length N is
defined by: 4 𝑁
−𝑁 𝑖𝑓 𝑥 = 2
−1
2𝑥
𝑓𝑜𝑟 𝑥 = 0,1, … … , 𝑁 4
𝑁
m(x)-m(x+1)= 𝑁
𝑖𝑓 𝑥 = 𝑁 − 1
w(x)=
0 all other x in 0,.....,N-1
𝑁
𝑤 𝑁−𝑥 𝑓𝑜𝑟 𝑥 = + 1, … … , 𝑁 − 1 (4)
2 4 4
𝑚 𝑥 −𝑚 𝑥+1 = − 𝛿 𝑁 + 𝛿 (8)
When the new data point f(N) is available, f(0) 𝑁 𝑥, 2 −1 𝑁 𝑥,𝑁−1
is shifted out and f(N) data point is shifted in. The
Substituting the value of m(x)-m(x+1) from
updated sequence is represented by f(x+1) and the
equation (8) to equation (7), we get
shifted windowed data is given by:
𝑓𝑚 𝑛𝑒𝑤 (𝑥) = 𝑓𝑚 𝑥 + 1
𝑓𝑤 (𝑛𝑒𝑤 ) (𝑥) = 𝑓(𝑥 + 1)𝑤(𝑥) (5)
4 4
which can be rewritten as: +𝑓 𝑥 + 1 − 𝛿𝑥,𝑁 −1 + 𝛿𝑥 ,𝑁−1
𝑁 2
𝑁

𝑓𝑤 (𝑛𝑒𝑤 ) (𝑥) = 𝑓(𝑥 + 1) 𝑤 𝑥 + 𝑤 𝑥 + 1 − 𝑤(𝑥 + 1) 𝑓𝑚 𝑛𝑒𝑤 (𝑥) = 𝑓𝑚 𝑥 + 1

𝑓𝑤 (𝑛𝑒𝑤 ) (𝑥) = 𝑓 𝑥 + 1 𝑤 𝑥 + 1 4 𝑁
+ 𝑁 −𝑓 2
𝛿𝑥,𝑁 −1 + 𝑓 𝑁 𝛿𝑥,𝑁−1 (9)
2

+𝑓 𝑥 + 1 𝑤 𝑥 − 𝑤(𝑥 + 1)
The windowed update version of fw(x) and while performing the windowed DCT update, both
fm(x) for moving DCT/DST for Bartlett window is the coefficients of DCT and DST are required.
represented by equations (6) and (9) respectively. In
equation (6), fw(x+1) represents non-windowed 𝑟𝑘𝜋 𝑟𝑘𝜋
𝐶+ 𝑘 = 𝑐𝑜𝑠 𝐶 𝑘 + 𝑠𝑖𝑛 𝑆(𝑘)
update of fw(x) and the second term fm(new)(x) is a 𝑁 𝑁
correction factor that converts this non-windowed 𝑁−1
update of fw(x) into an update in the presence of the 2
+ 𝑃 −1 𝑘 𝑓 𝑁 + 𝑟 − 1 − 𝑥
window. Similarly in equation (9), fm(x+1) 𝑁 𝑘
𝑥=0
represents non-windowed update of fm(x) and the 2𝑥 + 1 𝑘𝜋
second term converts this into the update in the − 𝑓(𝑟 − 1 − 𝑥) 𝑐𝑜𝑠
2𝑁
presence of the window.
𝑓𝑜𝑟 𝑘 = 0, … … , 𝑁 − 1
Taking DCT-II of equation (6) and equation (9)
yields: where, C+(k) represents the updated DCT
coefficients.
𝐶𝑤 𝑛𝑒𝑤 𝑥 = 𝐶𝑤 𝑥 + 1 + 𝐶𝑚 𝑛𝑒𝑤 𝑥 (10)
Similarly the DST update equation may be derived
𝐶𝑚(𝑛𝑒𝑤 ) = 𝐶𝑚 𝑥 + 1 and is:
𝑁−1
2 4 𝑁 𝑆𝑤 𝑥 = 𝑆𝑤 𝑥 + 1 + 𝑆𝑚 𝑥 (12)
+ 𝑃 −𝑓 𝛿 𝑁 𝑛𝑒𝑤 𝑛𝑒𝑤
𝑁 𝑘 𝑁 2 𝑥, 2 −1
𝑥=0
2𝑥 + 1 𝑘𝜋 𝑆𝑚 (𝑛𝑒𝑤 ) = 𝑆𝑚 𝑥 + 1
+ 𝑓(𝑁)𝛿𝑥,𝑁−1 𝑐𝑜𝑠
2𝑁
𝑁−1
2 4 𝑁 𝑁 − 1 𝑘𝜋
Solving the above equation yields: + 𝑃 −𝑓𝑠𝑖𝑛
𝑁 𝑘𝑁 2 2𝑁
𝑥=0
𝐶𝑚(𝑛𝑒𝑤 ) = 𝐶𝑚 𝑥 + 1 𝑘𝜋
+ 𝑓(𝑁)(−1)𝑘 𝑠𝑖𝑛 (13)
2𝑁
𝑁−1 𝑁
2 4 𝑁 2( 2 − 1) + 1 𝑘𝜋
+ 𝑃 −𝑓 𝑐𝑜𝑠 𝑓𝑜𝑟 𝑘 = 0, … … , 𝑁 − 1
𝑁 𝑘𝑁 2 2𝑁
𝑥=0
Equations (12) and (13) can be used to
2(𝑁 − 1) + 1 𝑘𝜋
+ 𝑓(𝑁)𝑐𝑜𝑠 calculate the simultaneous update of the moving
2𝑁 DST for Bartlett window. Sw(x+1) is the non-
windowed DST update of fw(x) calculated using
𝐶𝑚 𝑛𝑒𝑤 = 𝐶𝑚 𝑥 + 1 DST update equation for rectangular window
which is listed below [2], and Sm(x+1) is the non-
𝑁−1 windowed updated DST of fm(x) calculated using
2 4 𝑁 𝑁 − 1 𝑘𝜋
+ 𝑃 −𝑓 𝑐𝑜𝑠 the same equation. Clearly, it can be seen that
𝑁 𝑘𝑁 2 2𝑁
𝑥=0 while performing the windowed DST update both
2(𝑁 − 1) + 1 𝑘𝜋 the coefficients of DST and DCT are required.
+ 𝑓(𝑁)𝑐𝑜𝑠
2𝑁
𝑟𝑘𝜋 𝑟𝑘𝜋
𝑆+ 𝑘 = 𝑐𝑜𝑠 𝑆 𝑘 − 𝑠𝑖𝑛 𝐶(𝑘)
Therefore, 𝑁 𝑁

𝐶𝑚 = 𝐶𝑚 𝑥 + 1 𝑁−1
𝑛𝑒𝑤 2
+ 𝑃 −1 𝑘 𝑓 𝑁 + 𝑟 − 1 − 𝑥
𝑁−1
𝑁 𝑘
𝑥=0
2 4 𝑁 𝑁 − 1 𝑘𝜋 2𝑥 + 1 𝑘𝜋
+ 𝑃 −𝑓 𝑐𝑜𝑠 − 𝑓(𝑟 − 1 − 𝑥) 𝑠𝑖𝑛
𝑁 𝑘𝑁 2 2𝑁 2𝑁
𝑥=0
𝑘𝜋
+ 𝑓(𝑁)(−1)𝑘 𝑐𝑜𝑠 (11) where, S+(k) represents the updated DST
2𝑁
coefficients.
𝑓𝑜𝑟 𝑘 = 0, … … , 𝑁 − 1
III. DCT/DST TYPE-II WINDOWED INDEPENDENT
Equations (10) and (11) can be used to UPDATE ALGORITHMS USING BARTLETT
calculate the simultaneous update of the moving WINDOW
DCT for Bartlett window. Cw(x+1) is the non-
windowed DCT update of fw(x) calculated using A. Independent Update Algorithm
DCT simultaneous update equation for rectangular
window which is listed below [2], and Cm(x+1) is Above mentioned equations (10) and (11) can
the non-windowed DCT update of fm(x) calculated be used to calculate the independent update of the
using same equation. Clearly, it can be seen that moving DCT-II for Bartlett window. Cw(x+1) is
the non-windowed DCT-II update of fw(x), using window which is listed below [2], and Sm(x+1) is
DCT independent update equation for rectangular the non-windowed DST-II update of fm(x) also
window which is listed below [2], and Cm(x+1) is calculated using the same equation.
the non-windowed DCT-II update of fm(x) also
calculated using the same equation. 𝑟𝑘𝜋
𝑆𝑤 𝑛 + 𝑟, 𝑘 = 2𝑐𝑜𝑠 𝑆 𝑛. 𝑘 − 𝑆 𝑛 − 𝑟, 𝑘
𝑁
𝑟𝑘𝜋
𝐶𝑤 𝑛 + 𝑟, 𝑘 = 2𝑐𝑜𝑠 𝐶 𝑛. 𝑘 − 𝐶 𝑛 − 𝑟, 𝑘 𝑟−1
𝑁 2 𝑟𝑘𝜋
+ 𝑃 𝑠𝑖𝑛 [𝑓 𝑛 − 𝑁 − 𝑥 − 1
𝑟−1
𝑁 𝑘 𝑁
𝑥=0
2 𝑟𝑘𝜋
+ 𝑃𝑘 𝑠𝑖𝑛 [𝑓 𝑛 − 𝑁 − 𝑥 − 1
𝑁 𝑁 2𝑥 + 1 𝑘𝜋
𝑥 =0 − −1 𝑘 𝑓(𝑛 − 𝑥 − 1)] 𝑐𝑜𝑠
2𝑁
2𝑥 + 1 𝑘𝜋
− −1 𝑘 𝑓(𝑛 − 𝑥 − 1)] 𝑠𝑖𝑛 𝑟−1
2𝑁 2 𝑟𝑘𝜋
+ 𝑃 𝑠𝑖𝑛 [ −1 𝑘 𝑓 𝑛 + 𝑟 − 𝑥 − 1
𝑟−1
𝑁 𝑘 𝑁
𝑥=0
2 𝑟𝑘𝜋
+ 𝑃 𝑠𝑖𝑛 [ −1 𝑘 𝑓 𝑛 + 𝑟 − 𝑥 − 1
𝑁 𝑘 𝑁 2𝑥 + 1 𝑘𝜋
𝑥 =0 −𝑓 𝑛 + 𝑟 − 𝑁 − 𝑥 − 1 ]𝑠𝑖𝑛
2𝑁
2𝑥 + 1 𝑘𝜋
−𝑓 𝑛 + 𝑟 − 𝑁 − 𝑥 − 1 ]𝑐𝑜𝑠 𝑟−1
2𝑁 2 𝑟𝑘𝜋
− 𝑃𝑘 𝑐𝑜𝑠 [𝑓 𝑛 − 𝑁 − 𝑥 − 1
𝑟−1
𝑁 𝑁
𝑥=0
2 𝑟𝑘𝜋 2𝑥 + 1 𝑘𝜋
− 𝑃 𝑐𝑜𝑠 [ −1 𝑘 𝑓 𝑛 − 𝑥 − 1 − −1 𝑘 𝑓 𝑛 − 𝑥 − 1 ]𝑠𝑖𝑛
𝑁 𝑘 𝑁 2𝑁
𝑥=0
2𝑥 + 1 𝑘𝜋
−𝑓 𝑛 − 𝑁 − 𝑥 − 1 ]𝑐𝑜𝑠 for k=1,......,N
2𝑁

for k=0,1,......,N-1 When using the above equation to


calculate the non-windowed update we need the
When using the above equation to current value S(n,k) and the previous value S(n-
calculate the non-windowed update we need the 1,k). The current and previous values in the case of
current value C(n,k) and the previous value C(n- Sw are 𝑆 𝑓 𝑥 𝑤 (𝑥) and 𝑆 𝑓 𝑥−1 𝑤 (𝑥−1) respectively.
1,k). The current and previous values in the case of Since, the value of 𝑆 𝑓 𝑥−1 𝑤(𝑥−1) is not yet
Cw are 𝐶 𝑓 𝑥 𝑤 (𝑥) and 𝐶 𝑓 𝑥−1 𝑤 (𝑥−1) respectively. available we need to derive it from𝑆 𝑓 𝑥 −1 𝑤(𝑥)
Since, the value of 𝐶 𝑓 𝑥 −1 𝑤(𝑥−1) is not yet which is available from the previous step. Similarly
available we need to derive it from𝐶 𝑓 𝑥−1 𝑤 (𝑥) for Sm, we need to calculate the correction factor to
which is available from the previous step. Similarly compute 𝑆 𝑓 𝑥−1 𝑚 (𝑥−1) from 𝑆 𝑓 𝑥−1 𝑚 (𝑥) .
for Cm, we need to calculate the correction factor
to compute 𝐶 𝑓 𝑥−1 𝑚 (𝑥−1) from 𝐶 𝑓 𝑥−1 𝑚 (𝑥) . B. Computation for oldest time-step

Similarly the analogous formulae for The correction factor to calculate the
DST-II are obtained by taking DST-II of equations correct value C[f(x-1)w(x-1)] from C[f(x-1)w(x)]
(6) and (9): for DCT update algorithm, and the correct value of
S[f(x-1)w(x-1)] from S[f(x-1)w(x)] are derived here
𝑆𝑤 𝑛𝑒𝑤 𝑥 = 𝑆𝑤 𝑥 + 1 + 𝑆𝑚 𝑛𝑒𝑤 𝑥 (14) for the DST-II update algorithm.
𝑆𝑚 (𝑛𝑒𝑤 ) = 𝑆𝑚 𝑥 + 1 𝑓 𝑥 − 1 𝑤 𝑥 = 𝑓(𝑥 − 1) 𝑤(𝑥) + 𝑤(𝑥 − 1) − 𝑤(𝑥 − 1)
𝑁−1
2 4 𝑁 𝑁 − 1 𝑘𝜋 = 𝑓 𝑥 − 1 𝑤 𝑥 − 1 − 𝑓(𝑥 − 1) 𝑤(𝑥 − 1) − 𝑤(𝑥)
+ 𝑃 −𝑓
𝑠𝑖𝑛
𝑁 𝑘𝑁 2 2𝑁
𝑥=0 = 𝑓 𝑥 − 1 𝑤 𝑥 − 1 − 𝑓(𝑥 − 1)𝑚(𝑥 − 1)
𝑘𝜋
+ 𝑓(𝑁)(−1)𝑘 𝑠𝑖𝑛 (15)
2𝑁
Therefore,
𝑓𝑜𝑟 𝑘 = 0, … … , 𝑁 − 1 𝑓 𝑥 − 1 𝑤 𝑥 − 1 = 𝑓(𝑥 − 1)𝑤(𝑥)

Equations (14) and (15) can be used to +𝑓 𝑥 − 1 𝑚 𝑥 − 1 (16)


calculate the independent update of the moving
DST-II for Bartlett window. Sw(x+1) is the non- Calculating the correction factor to convert
windowed DST-II update of fw(x), using DST 𝑓(𝑥 − 1)𝑚(𝑥) into the correct value 𝑓(𝑥 − 1)𝑚(𝑥 − 1),
independent update equation for rectangular
𝑓 𝑥−1 𝑤 𝑥 =𝑓 𝑥−1 𝑤 𝑥 +𝑤 𝑥−1 −𝑤 𝑥−1 Taking DST-II of equation (18)
= 𝑓 𝑥 − 1 𝑚 𝑥 − 1 − 𝑓(𝑥 − 1) 𝑚(𝑥 − 1) − 𝑚(𝑥) 𝑆𝑚 𝑜𝑙𝑑 𝑘 = 𝑆 𝑓 𝑥−1 𝑚 𝑥−1

= 𝑓 𝑥 − 1 𝑚 𝑥 − 1 − 𝑓(𝑥 − 1)𝑚𝑝 (𝑥 − 1) =𝑆𝑓 𝑥−1 𝑚 𝑥


𝑁−1
2 4 𝑁
Therefore; + 𝑃 −𝑓 −1 𝛿 𝑁
𝑁 𝑘 𝑁 2 𝑥,
2
𝑥=0
𝑓 𝑥−1 𝑚 𝑥−1 =𝑓 𝑥−1 𝑚 𝑥 2𝑥 + 1 𝑘𝜋
+ 𝑓(−1)𝛿𝑥,0 𝑠𝑖𝑛
2𝑁
+𝑓 𝑥 − 1 𝑚𝑝 𝑥 − 1 (17)
Therefore,
where,
𝑆𝑚 𝑜𝑙𝑑 𝑘 = 𝑆 𝑓 𝑥−1 𝑚 𝑥−1
4 𝑁
− 𝑖𝑓 𝑥 = −1
𝑁 2
= 𝑆𝑓 𝑥−1 𝑚 𝑥
4
m(x)-m(x+1)= 𝑁
𝑖𝑓 𝑥 = 𝑁 − 1
𝑁−1
2 4 𝑁 𝑁 + 1 𝑘𝜋
+ 𝑃 −𝑓− 1 𝑠𝑖𝑛
0 all other x in 0,.....,N-1 𝑁 𝑘𝑁 2 2𝑁
𝑥=0
𝑘𝜋
𝑓 𝑥 − 1 𝑚(𝑥 − 1) = 𝑓 𝑥 − 1 𝑚(𝑥) + 𝑓(−1)𝑠𝑖𝑛 (21)
4 4 2𝑁
+ 𝑓 𝑥 − 1 − 𝛿𝑥,𝑁 + 𝛿𝑥,0
𝑁 2 𝑁
Taking DST-II of equation (16)
i.e.
𝑆𝑓 𝑥−1 𝑤 𝑥−1 =𝑆𝑓 𝑥−1 𝑤 𝑥 +𝑆𝑓 𝑥−1 𝑚 𝑥−1 (22)
𝑓𝑚 𝑥 − 1 = 𝑓 𝑥 − 1 𝑚 𝑥
Equations (21) and (22) together can be used
4 𝑁 to calculate the older time sequence windowed
+ −𝑓 − 1 𝛿𝑥,𝑁 + 𝑓 −1 𝛿𝑥,0 (18)
𝑁 2 2 DST-II values.

Taking DCT-II of equation (18) IV. COMPUTATIONAL COMPLEXITY


The algorithm developed is of computational
𝐶𝑚 𝑜𝑙𝑑 𝑘 = 𝐶 𝑓 𝑥−1 𝑚 𝑥−1
order N, whereas calculating the transform via fast
DCT/DST algorithms is of order Nlog2N.
=𝐶𝑓 𝑥−1 𝑚 𝑥
𝑁−1 V. CONCLUSION
2 4 𝑁
+ 𝑃 −𝑓 −1 𝛿 𝑁 New fast efficient algorithms that are capable
𝑁 𝑘 𝑁 2 𝑥,
2
𝑥=0 of updating the Bartlett windowed DCT and the
2𝑥 + 1 𝑘𝜋
+ 𝑓(−1)𝛿𝑥,0 𝑐𝑜𝑠 DST for a real time input data sequence are listed.
2𝑁 The windowed update algorithm aims at reducing
for k=0,1,.....,N-1 the complexity in calculating DCT every time a
new value is introduced in the input. Initially
Therefore, simultaneous Bartlett windowed update algorithms
for DCT/DST-II are developed and thereafter
𝐶𝑚 𝑜𝑙𝑑 𝑘 = 𝐶 𝑓 𝑥−1 𝑚 𝑥−1 independence is established between the update of
DCT and DST. The algorithms analytically
=𝐶𝑓 𝑥−1 𝑚 𝑥 derived are verified using C language.
𝑁−1 REFERENCES
2 4 𝑁 𝑁 + 1 𝑘𝜋
+ 𝑃 −𝑓− 1 𝑐𝑜𝑠 [1] Karwal V, B.G. Sherlock, Y.P. Kakad, “Windowed DST-
𝑁 𝑘𝑁 2 2𝑁
𝑥 =0 independent discrete cosine transform for shifting data”.
𝑘𝜋 Proceeding of 20th International Conference on Systems
+ 𝑓(−1)𝑐𝑜𝑠 (19) Engineering, Coventry, U.K., Sept. 2009 pp. 252-257
2𝑁
[2] Karwal Vikram,” Discrete cosine transform-only and discrete
sine transform-only windowed update algorithms for shifting
Taking DCT-II of equation (14) data with hardware implementation,” Ph.D. Dissertation.
University of North Carolina at Charlotte, 2009, ISBN:
𝐶𝑓 𝑥−1 𝑤 𝑥−1 =𝐶𝑓 𝑥 −1 𝑤 𝑥 +𝐶𝑓 𝑥−1 𝑚 𝑥−1 20 9781109343267.
[3] Ray W.D., Driver, R.M. “Further Decomposition of the
Karhunen-Loéve Series Representation of a Stationary Random
Equations (19) and (20) together can be Process”, IEEE Trans., 1970, IT-16, pp 12-13.
used to calculate the older time sequence
windowed DCT-II values.
[4] N. Ahmed, T. Natarajan, and K.R. Rao, "Discrete cosine
transform," IEEE Trans. Comput., vol. C-23, pp. 90-94, Jan.
1974.
[5] W.K. Pratt, Generalized Wiener "ltering computation
techniques, IEEE Trans. Comput. C-21 (July 1972) 636}641.
[6] Fedrick J. Harris, “On the Use of Windows for Harmonic
Analysis with the Discrete Fourier Transform”, Proceedings of
the IEEE, vol. 66, no. 1, January 1978
[7] P.Yip and K.R. Rao, "On the shift properties of DCT's and
DST's," IEEE Trans. Signal Processing, vol. 35, pp. 404-406,
Mar.1987.
[8] B.G. Sherlock, Y.P. Kakad, "Transform domain technique
for windowing the DCT and DST," Journal of the Franklin
Institute, vol. 339, Issue 1, pp. 111-120, April 2002.
[9] Jiantao Xi, Chicharo J.F.,” Computing running DCT’s and
DST’s based on their second order shift properties,” IEEE Trans.
On circuit and system-I, Vol. 47, No.5, 2000, pp 779-783.
[10] B.G. Sherlock, Y.P. Kakad,” Windowed discrete cosine and
sine transforms for shifting data”, Journal of signal processing,
Elsevier, Vol. (81) pp. 1465-1478.
[11] B.G. Sherlock, Y.P.Kakad, A. Shukla, “Rapid update of odd
DCT and DST for real-time signal processing,” Proc. Of SPIE
Vol. 5809 pp. 464-471. Orlando, Florida, March 2005.
Losslesss com
L mpresssion scchemee based d on
preedictioon for bayerr colorr filterr
Patil Anita U1 , Drr. Sudhirkumaar D. Sawarkar2, Nareshkuumar Harale3
1
A 3033 Joykung,Sector 56,Gurgaonn,+91 99998600692,patilanitaau@gmail.com
2
Plot No.-98, Sectorr 3,Navi Mumbbai (Thane) Maaharashtra,+91 9819768930 ,,principaldmcee@yahoo.com
3
MGM’s College of Engineering
E and Technology,Kamothe, Navvi Mumbai,+91 1-
98195144330,nareshkummar.harale@mggmmumbai.ac.in

Abstractt— In most digital


d cameraas Bayer coloor
filter arrray images caaptured and demosaicing
d i
is
generallyly carried out before compression n.
Recentlyy it was compmpression firstt scheme ou ut
perform the conveentional demosaicing firsst
schemess in terms off output imagge quality. An n
efficientt reduction based losslesss compression n
scheme ffor Bayer filteer color imagess proposed
Fig 2: single sensoor camera imaaging chain ((a)
Index TTerms—Bayer Color filter array, a Losslesss demosaaicing and (b) Compression
C
compresssion, Greenn predictionn, Non-greenn
predictioon, Adaptive coolor differencee. II.PRESEN
NT SCHEMES USED

I.INTRO
ODUCTION There are
a different schemes
s presen
nt in the markket
such ass
B
BAYER COLO
OR FILTER ARRAY
A
• Lossy comprression schemee
A Bayerr Filter color array
a usually coated
c over the • JPEG2000
sensors in these camerras to record onlyo one of the
three coolors componen nts at each pixeel location. The So now
w we have to llook the drawbbacks of preseent
resultantt image is referrred to as a CFA image. methodds.

• Lossy schem mes compress a CFA Image bby


discarding its visuallly redundaant
information.
• This schem me visually yields
y a highher
compressionn ratio as commpared with thhe
lossless scheemes.
• JPEG-2000 is used to encode a CF FA
image but onnly a fair perfformance can be
b
Bayer Patter haas Red sample in center
Fig:(1) B attained.
• JPEG-2000 is very expen nsive method to
Fig shoows the Bayerr Patter has R Red sample inn compress thee images.
center, compressed for storage. Then it waas
inefficieent in a way thee demosaicing process alwayys III. PROP
POSED SCHE
EME
introducce some reedundancy which w shouldd
eventuallly be rem moved in the t followingg A Preddiction basedd lossless CF FA compressioon
schemee is proposed. It divides a CFA
C images innto
compresssion step. Wee do the comppression before
two subb-images:
demosaiicing digital cameras
c can have
h a simpleer (a) A green sub-imaage which con ntains all greeen
design and low power connsumption as a samples of the CFA im mage
computaationally heavyy process likke demosaicingg (b) Non-green sub im mage which contains
c the reed
can be carried in ann offline pow werful personaal and bluue samples in thhe CFA image.
computeer. This motiv vates the demmand of CFA A
image coompression schhemes. This syystem is mainlyy consists of tw
wo parts

• Encoder
• Decoder
Encoderr:

Fig 4: Four possiblle directions associated wiith


green pixel
p

Let g(m
mk,nk)Є Φg(i,,j) for k=1,2,,3,4 be the foour
ranked candidates of sample g(i,j)
g Э(Sg(i,jj),
Sg(mu,,nu)) <= D D(Sg(i,j), Sg(mv,nv) ) ffor
1<=u<==v<=4
Fig 3: Sttructure of propposed scheme

Green Subimage
S is cooded first and the Non greenn If the directions oof g(i,j) is iddentical to thhe
Subimagge follows baased on greenn subimage as a directioons of all greenn samples in Sg(i,j), pixel (i,j)
reference and To reduuce the spectrral redundancyy, will bee considered in a homogennous region annd
the nonggreen subimaage is processeed in the coloor predictiion of g(i,j) is
differencce domain whereas
w the greeen subimage is
i
processeed in the intenssity domain as a reference foor
the coloor difference content of the nongreenn
subimagge. Both subim mages are proccessed in rasteer
i.e. {w1,w2,w3,w4}= ={1,0,0,0} Elsse the g(i,j) is in
scan seequence withh context maatching basedd
heteroggenous region and
a predicted value
v of g(i,j) iis
predictioon technique to removee the spatiaal
dependeency. The pred diction residuee planes of the
two suubimages aree then entrropy encodedd
sequentiially with our proposed realization scheme
of adaptiive Rice code.
i.e. {w11,w2,w3,w4}=
={5/8,2/8,1/8,0}
IIV. WORKING
G OF THE SC
CHEME
FLOW CHART FO
OR PREDICT
TION ON TH
HE
This prroposed schem me is mainlyy working onn GREEN
N PLANE
Predictioon on the greenn plane and Prrediction on the
Non-greeen plane.

Predictioon on the greenn plane

As the green plane is i raster scannned during the


predictioon and all preediction errorss are recordedd.
Now proocessing a parrticular green plane the fouur
nearest processed neiighboring sam mples of g (i,jj)
form a candidate set

We cann find the dirrections associated with the


green pixxels it need som
me process.

Adaptivve color differrence estimatioon for non greeen


plane
When compressing
c thhe nongreen co
olor plane, collor
differennce informatioon is exploitedd to remove thhe
color sppectral dependency.
Let c(mm,n) be the inntensity value at a non greeen
samplinng position(m m,n). Green-R Red(Green-Bluue)
color diifference of pixxel (m,n) is
d(m,n))=g’(m,n)-c(m,,n)
g’(m,n)) à estimated green comp ponent intensiity
value
m,n-1)+g(m,n+
GH=(g(m +1))/2 and
where g(i,
g j), d(i, j) are
a respectivelyy, the real greeen
m+1,n)+g(m+1,n))/2
Gv=(g(m
sample value and the color differencce value of pixxel
(i, j)

The error
e residue e(i, j) is thenn mapped to a
nonneggative integer aas follows to reshape
r its valuue
distribuution to an expponential one from
f a Laplaciaan
one

The E((i, j) ’s from thhe green sub-iimage are rastter


scannedd and coded with
w Rice code first. Rice codde
is emplloyed to coode E(i, j) because
b of iits
simpliccity and higgh efficiencyy in handlinng
exponeentially distributed sources When
W Rice codde
is usedd, each mappedd Residue E(i, j) is split intoo a
Quotiennt Q

Where parameter k is a non negativen integger


Predictioon on the non green
g plane
Quotiennt and Remainder are thhen saved ffor
storagee and transmisssion.
The Leength of code w word used for representing
r E
E(i,
j) is k dependent
d and is given by

Parameeter k is critical to th he compressioon


perform
mance as it deteermines the co
ode length of E
E(i,
j)

Optimaal parameter K is given by

Where is the gold


den ratio.
Color diifference prediiction of a nonn green sample For a geometric
g sourrce with distribbution parametter
c(i,j) witth color differeence value d(i,jj) is color sppaces I As lonng as is μ know wn, parameter ρ,
and, heence, the optim
mal coding parameter k for thhe
whole ssource can be determined
d eassily.
Μ is esstimated adaptively in course of Encoding

Where {{w1, w2, w3, w4}={4/8,


w 2/8, 1/8, 1/8}
Where k is predictor coefficient d(mk,nk)
d is kthh
ranked candidate
c in Φcc(i,j)

Compresssion scheme
The preediction Error of pixel (i, jj) in the CFA
A
image, say e(i, j) is givven by When codinng E(i, j) of green plane is
definedd to be
Image 2 6.188 5.218 4.847
Image 3 6.828 4.525 3.847
Table I

When cooding E(i, j) off non green plaane is defined too If we aalter the values of weighting g factor then w
we
be get impproved results in terms of coompression rattio
and alsoo reduce the biit rates of CFA
A.

Overall C CFA Bit Compression Raatio


Rate (in bpp)
Decodinng Process: α =0 4.9496 1.6163
α =0.6 4.8486 1.6496
Decodinng Process iss just reversse process of o α =0.8 4.8437 1.6516
Encodinng. Green Sub b-image is deccoded first andd α =1 4.8366 1.6537
then thee non-green su
ub-image is decoded with the Table-II
decodedd green sub-immage as a referrence. Originaal ADV VANTAGES OF O PROPOSE ED METHOD
CFA Immage is then recconstructed byy combining the
two sub images. We cann reduce the sppectral redunddancy mean tim
me
and alsso can get highh quality imag
ge. Reducing thhe
sensorss in digital ccameras from 3 to 1. Loow
compleexity to designn. Compare wiith JPEG2000 it
gives better performannce.

VI. EX
XPERIMENTA
AL RESULTS

Fig 5: Sttructure of Deccoder

BITRAT
TE ANALYSIS
S

From thee above fig, it shows that α = 1 can provide


a good compression performance.
p We assume the
W
predictioon residue is a local variablle and estimate
the meaan of its value distribution adaptively.
a The
divisor used to geneerate the Ricee code is thenn
adjustedd accordingly soo as to improve the efficiencyy
of Rice code.
c

V. COM MPRESSION PERFORMANC CE


Simulatiions were caarried out to evaluate the
performaance of propossed compressioon scheme. 244-
bit colorr images of sizze 512*768 were sub-sampledd
accordinng to the Bayerr pattern to forrm 8 bit testingg
CFA im mages. These Im mages are directly coded byy
the proposed compresssion scheme foor evaluation.

Some reepresentative Lossless


L compreession schemees
such as JPEG-LS, JPE EG 2000(losslless mode) andd
LCMI were
w used for co omparison of results
r

S No. JPEG LS JPEG Proposed


2000
Image 1 5.467 5.039 4.803
VII. CONCLUSION [11] F. Destrempes, J.-F. Angers, and M. Mignotte,
“Fusion of hidden Markov random field models and
CFA image encodes the sub-image separately with its Bayesian estimation,” IEEE Trans. Image
predictive coding Lossless prediction is carried out Process., vol. 15, no. 10, pp. 2920–2935, Oct. 2006.
in the intensity domain for the green. While it is
carried out in the color difference domain for the [12] Z. Kato, T. C. Pong, and G. Q. Song,
non green “Unsupervised segmentation of color textured
images using a multi-layer MRF model,” in Proc.
VIII.ACKNOWLEDGMENT Int. Conf. Image Processing, Barcelona, Spain, Sep.
2003, pp. 961–964.
The first author express his gratitude to the
remaining two authors towards the completion this [13] P. Pérez, C. Hue, J. Vermaak, and M. Gangnet,
project. “Colorbased
IX REFERENCES probabilistic tracking,” in Proc. Eur. Conf.
Computer Vision, Copenhagen,Denmark, Jun.
[1] S. Banks, Signal Processing, Image Processing
2002, pp. 661–675.
and Pattern Recognition. Englewood Cliffs, NJ:
Prentice-Hall, 1990. [14] J. B. Martinkauppi, M. N. Soriano, and M. H.
Laaksonen, “Behavior of skin color under varying
[2] S. P. Lloyd, “Least squares quantization in
illumination seen by different cameras at different
PCM,” IEEE Trans. Inf. Theory, vol. IT-28, no. 2,
color spaces,” in Proc. SPIE, Machine Vision
pp. 129–136, Mar. 1982.
Applications in
[3] P. Berkhin, “Survey of clustering data mining
techniques,” Accrue Software, San Jose, CA, 2002.

[4] J. Besag, “On the statistical analysis of dirty


pictures,” J. Roy. Statist. Soc. B, vol. 48, pp. 259–
302, 1986.

[5] D. Comaniciu and P. Meer, “Mean shift: A


robust approach toward feature space analysis,”
IEEE Trans. Pattern Anal. Mach. Intell., vol. 24,
no. 5, pp. 603–619, May 2002.

[6] J. Shi and J. Malik, “Normalized cuts and image


segmentation,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 22, no. 8, pp. 888–905, Aug. 2000.

[7] P. Felzenszwalb and D. Huttenlocher, “Efficient


graph-based image segmentation,” Int. J. Comput.
Vis., vol. 59, pp. 167–181, 2004.

[8] S. Zhu and A. Yuille, “Region competition:


Unifying snakes, region growing, and Bayes/MDL
for multiband image segmentation,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 18, no. 9, pp. 884–
900, Sep.1996.

[9] M. Mignotte, C. Collet, P. Pérez, and P.


Bouthemy, “Sonar image segmentation using a
hierarchical MRF model,” IEEE Trans. Image
Process., vol. 9, no. 7, pp. 1216–1231, Jul. 2000.

[10] M. Mignotte, C. Collet, P. Pérez, and P.


Bouthemy, “Three-class Markovian segmentation
of high resolution sonar images,” Comput. Vis.
Image Understand., vol. 76, no. 3, pp. 191–204,
1999.
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

OPTIMAL RECEIVER FILTER DESIGN


Vivek Kumar Dr. K. Raj
Deptt. of Electronics Engg. Deptt. of Electronics Engg.
IITM. Kanpur Harcourt Butler Technological
5/414, Avas Vikas,Farrukhabad Institute, Kanpur – 208002, India
Vivekhbti07@gmail.com kraj_biet@yahoo.com

Abstract consumption at the mobile units.


Simulation demonstrated that
In wireless communication systems, receiver filter designed using
the pulse shaping filters are often MMSE criterion can significantly
used to represent massage symbols improve the system performance by
for transmission through channel & reducing inter symbol interference
its matched filter at the receiver in comparison to the optimal
end. This paper deals with the matched filter.
design & comparison of the optimal
receiver filter that maximize the Key words: MMSE, 3G, AWGN,
signal to interference plus noise QAM, PSK,SIR, SINR
ratio of the received signal. The
1. Introduction
first approach is based on
optimizing optimal matched filter The fundamental operation of
criterion and the second approach is wireless communication systems is
based on optimizing MMSE to encode, modulate, up sample and
criterion which provides a closed then transmits digital information
form analytic solution for the filter symbols in a form of analog
coefficient. In 3G and beyond 3G waveform through wireless
systems, higher SIR of the received channel. This analog waveform are
signal is required so that higher the output of the transmit filters
order modulation schemes can be which include pulse shaping filter,
applied to achieve high data phase equalizers and R.F. filters.
transmission throughput and also On the receiver side, the received
short tap length receiver filters in waveforms are filtered by receiver
order to reduce the power filter which is normally matched to
SIP0201-1
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

the transmitter filter. The output of proposed to design general Fir


the receiver filter is down sampled transmitter pulse shaping filters and
then sent into demodulator & its matched filter that have
decoded to recover the transmitted orthogonally property as the root
information. Figure 1 shows a Nyquist filter. These methods can
simple communication link system. not be applied to design optimal
receiver filters because these
methods are application specific
g(i) Tx RF and require changing the transmitter
s(n)
pulse shaping filter to eliminate the
4X interference which is not easy.
In 3G and beyond 3G systems,
f(i) Rx RF
higher order modulation schemes
S^(n) 4X such as 8-PSK,16-QAM are used to
increase the data transmission
Figure 1. A communication throughput. These schemes require
link system higher SIR of the received signal so
Nyquist filters are commonly used that the transmission is reliable. The
in data transmission systems for receiver filter provides higher SIR
pulse shaping. They have the of the received signal and must
property that their impulse have short tap length so that we
responses have zero-crossing that is have large noise margin & less
uniformly spaced in time. If the power consumption at the mobile
channel is an AWGN channel, the units. So the main targets of the
Nyquist filter is an ideal pulse receiver filter design are following
shaping filter since it has an infinite (i) Maximizing SINR of the
length of impulse response. The received signal to use higher order
most well known is using the root modulation schemes.
Nyquist filter and its matched filter
which introduces no inter symbol (ii) Receiver filter must have short
interference. A practical tap length for less power
approximation of the Nyquist filter consumption at the mobile units
is the raised cosine filter. Several
methods [5,11,7,17,4] were

SIP0201-2
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

To design optimal receiver filter, [R1] We want to design a receiver


we use two approaches, the first filter, whose impulse response is
approach is based on optimal f (i) and filter length is L means ,i
matched filter criterion which is = 0,...,L − 1 , such that the
design using the optimal filter received signal s ˆ(i) has the
design method and have same highest SINR. With given an FIR
property of the transmitter filter transmitter filter, whose impulse
such as band edges frequencies, response is g ˆ (i) where i =0,..., N −
pass band and stop band ripples etc. 1.
The second approach is based on
MMSE criterion which minimize [R2] Since the transmitted signal is
the error the transmitted signal and bandwidth limited, the side lobe of
the received signal. This approach the receiver baseband filter in stop
leads a closed form analytic band means for frequency greater
solution of the receiver filter than 740 kHz (f ≥ 740 kHz), there
coefficients and can be extended to should be sharp cut of less then −40
design adaptive receiver filters. dB.
Here optimally is in the sense of
[R3] The wireless channel is
maximizing signal to interference
frequency non-selective and has
plus noise ratio of the received
only one path.
signal. Simulation demonstrated
that the receiver filter design using However, we add the requirement
MMSE approach can significantly [R2] as a constraint on adjacent
improve signal to interference plus channel interference power.
noise ratio (SINR) of the received
signal as compared to the receiver 3. Optimal receiver filter design
filter design using optimal matched
filter. In this section, we present two
approaches to design optimal
2. Requirements for receiver receiver filters that maximizing
filter design signal to interference plus noise
ratio of the received signal. With
We consider the following the receiver filter being the optimal
requirements for the design of the matched filter, the maximal signal
optimal receiver filter. to noise ratio of the received signal

SIP0201-3
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

can be achieved but the signal to The filter parameters are


interference ratio is not too good. determined such that the maximum
absolute value of E(ω) is
3.1 Optimal matched filter minimized. By using the remez
approach exchange algorithm, we can design
a filter which has optimal set of
In the optimal design method, the
filter coefficients such that receiver
weighted approximation error
filter being matched [f(i) = g(N-i)]
between the actual frequency
to the transmitter filter.
response & the desired filter
response is spread across the pass 3.2 MMSE approach
band and stop band and the
maximum error is minimized. This In this approach, we derive the
design method results ripples in signal model of the received signal
pass band & stop band. So the and assume that the information
frequency of the filter in the pass symbol sequence is white noise
band and the stop band random process. In the
respectively. communication link system as
shown in Fig.1, we assume that the
1 - δp ≤ │H(ejω)│≤ 1+δp channel impulse response h (t) has
│ω│≤ ωp been estimated using a pilot signal
or using blind channel identification
-δs ≤ │ H(ejω )│≤ δs
algorithms.
│ω│≥ ωs
If the impulse response of
Where, δp = pass band ripple & δs =
transmitter filter and channel is g
maximum attenuation in the stop
which is represented by the
band.
convolution of gˆ (t) and h (t) i.e. g
The weighted approximation error = gˆ ∗ h, where g has a finite
is defined as support on [0, N − 1]. Thus the
combined impulse response of the
E(ω) = W(ω) [Hd(ω) - H(ejω)] transmitter, channel and receiver
baseband filtering result is denoted
Where, Hd(ω) = desired frequency by the convolution g ∗ f. The
response & H(ejω) = actual
impulse response of g ∗ f is
frequency response.

SIP0201-4
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

N 1 [( N L 2 ) / 8 ]
h^(k) = g(i)f(k-i), ŝ (k) = h(0) s(k) +
i o i [( N L 2 ) / 8 ]
k=0,1,…………….N+L-1, i 0

(1) 0(k) + nACI(k)


(3)
We require sum of length of
where the first term is the desired
transmitter filter and length of
signal, second term represents the
receiver filter is an even number i.e.
ISI, third term represents the noise
N+L is an even number and filter
present in the received signal and
has linear phase property. So we
forth term represents the adjacent
can let,
channel interference on the right
h (k) = ĥ (k-(N+L-2)) hand side of equation (3).The
transmitted signal is s (k) and the
Then h (k) is represented as received signal is s^ (k). The mean
square error betweenthe transmitted
g (0)
= g (1) . f ( 0) and the receiver signal is given by
. . . .

. . . . . Minimize: MSE = E [( s (k) - ŝ (k))


. 2
. . . . g (0) ] (4)
. f ( L 1)
g(N 1) . .
.
g ( N 1)
By equation (2), we have h (i) = Gi
F, where Gi is the i-th row of the
h(k)=GF, (2) matrix G. We define a matrix Ĝ
made by the rows G 4i , where − [
Where G is a Toeplitz matrix of (N + L − 2)/8 ]≤ i ≤ [(N + L − 2)/8
g(k) and F is a vector of f(k). ] and i ≠ 0.
Let the frequency response of
After 4X down sampling at the the receiver filter at frequency fi
receiver, the received signal can be be represented as
represented as ₣iF, where ₣i is a row of the
complex Fourier transform matrix ₣
corresponding to the frequency fi,
i.e.,

SIP0201-5
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

₣= designing the receiver filter, the


1 exp( j 2 fi / Fs) . . . . exp( j2 (L 1) fi / Fs) . ACI power and the channel noise
Fs is the sampling frequency. The are not known in advance. The
MSE in (4) is approximately parameters λ and N0 can be adjusted
to meet the side lobe requirement
MSE ≈ ║ĜF- δ ║2 + N0 ║F║2 + λ and to optimize the transition band
║₣^F║2 (5) but the adjustment of these
Where N0 is the power spectral parameters are not easy and not
density of the channel noise, λ is straightforward.
the flat power spectral density in
the adjacent frequency band, 4. Comparison between design
filters
δ = [0 … … 1 … … 0]T ,
₣^ = [₣jT ₣j+1T ………… The frequency response of the
T T optimal receiver filter design using
₣M ]
MMSE approach and using optimal
fj = 740Hz to 3.4KHz(voice matched approach is shown in
frequency) & M is the number of figure 2. We observe that the
frequency sampling points. MMSE receiver filter has more flat
Thus, the minimum mean square frequency response as compared to
error problem (16) becomes, the matched filter in the passband.
This frequency response is close to
Minimize: ║ĜF- δ ║2 + N0 the frequency response of the root
║F║2+λ║₣^F║2 (6) Nyquist filter which has a flat
frequency response in the passband.
The receiver filter which minimize We also observe that receiver filter
the mean square of the estimation designed using MMSE approach
error in (6) is has high skirt than the filter
designed using the matched filter
F=( ĜĜT + N0I + λ ₣^₣^T)-1 ĜT δ(7) approach.
Now we compare the receiver
Where I is an identity matrix. This filters designed by both approach
analytic solution can be applied in using a simple wireless
designing an adaptive receiver filter communication system. We assume
with channel being estimated. In that wireless channel is frequency

SIP0201-6
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

non-selective and has only one Eye Diagram for In-Phase Signal
4
path. In 3G systems, we require to
2
transmit data rate as high as

Amplitude
possible. To increase the data 0

transmission throughput, we have -2

to use spectral efficient modulation -4


-0.5 0 0.5

schemes such as 16- QAM. Due to Time

Eye Diagram for Quadrature Signal


increasing the data throughput, 4

there is high ISI problem. 2

Amplitude
0
20
Matched filterdata1
-2
MMSE filter

0 -4
-0.5 0 0.5
Time
Normalized magnitude response(dB)

-20

Figure 3. Eye diagram of


-40 received signal using optimal
matched filter
-60 Figure 4 shows the eye diagram of
receiver pulse shaping filter design
-80 using MMSE approach. In this eye
diagram, we can resolve that 16-
-100
0 0.5 1 1.5 2 2.5
QAM modulated signal can be
Frequency(MHz)
received reliably using this 48-tap
Figure 2. Frequency response of receiver filter. We can
receiver pulse shaping filters examine receiver filter design
using MMSE approach is racier to
We know that eye diagram provides sampling time error of the received
a great deal of useful information signal and provides large noise
about the performance of a data margin to the system as compared
transmission system. The eye to the receiver filter design using
diagram of using 48 tap optimal optimal matched approach.
matched receiver filter shows that
eyes are very small due to high ISI.

SIP0201-7
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

We compare the optimal matched


receiver filter and optimal receiver
filter design using MMSE approach
for tap length L = 36, 48 and 64
respectively. From Table 1, we
observe MMSE receiver filter can
provide higher SINR but the cost
we bear for significantly gain of
SINR is a slight degradation of
Number of 36 48 64 SNR. We can achieve higher SINR
taps using matched filter with more
SINR/SIN 5.36 9.865 20.40 number of tap length but this
Rmatched(d 4 56 increase the power consumption at
B) mobile units. We can increase the
SNR/SNR - - - SNR of the received signal by
matched(dB) 0.51 0.225 0.353 increasing power spectral density of
71 0 4 the channel noise in the MMSE
4
Eye Diagram for In-Phase Signal filter & when channel noise = ∞,
the MMSE filter is the same as the
2
optimal matched filter.
Amplitude

0
Table 1. Comparison between
-2 optimal matched filter and
-4
-0.5 0 0.5
MMSE filter with different no. of
Time
taps
Eye Diagram for Quadrature Signal
4

2
Amplitude

-2
5. Conclusion
-4
-0.5 0
Time
0.5 In 3G and beyond 3G system,
higher SIR of the received signal is
Figure 4. Eye diagram of required so that high order
received signal using receiver modulation schemes such as 8-
MMSE filter PSK, 16-QAM can be applied from
which we can achieve high data

SIP0201-8
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

transmission throughput. By pulse shaping. In IEEE Int. Conf.


designing receiver filter using Commun., Chicago, IL,
approaches presented in section 3, June 1992.
we can achieve high data [4] T.N. Davidson, Z.Q. Luo, and
transmission throughput. The first K.M. Wong. Design of orthogonal
design method is based on optimal pulse shapes for
matched criterion and the second communications via semide finite
approach is based on optimizing an programming. IEEE Trans. Signal
MMSE criterion which provides an Processing, 48(5):1433–
analytic solution of filter 1445, May 2000.
coefficients. By simulations, we can [5] H. Samueli. On the design of
demonstrate that there is FIR digital data transmission filters
significantly improvement in with arbitrary magnitude
performance of using the optimized specifications. IEEE Trans.
receiver pulse shaping MMSE filter Circuits Syst., 38(12):1563 –1567,
over the optimal matched filter. December 1991.
[6] L. Tong, G. Xu, and T.
References Kailath. Blind identi fication and
equalization based on second-order
[1] N.C. Beaulieu, C.C. Tan, and statistics: A time domain approach.
M.O. Admen. A “better than” IEEE Trans. Information Theory,
Nyquist pulse. IEEE 40(2):340–349,
Communications Letters, 5(9):367 March 1994.
–368, September 2001. [7] J. Tuqan. On the design of
[2] T. Berger and D.W. Tufts. optimal orthogonal finite order
Optimum pulse amplitude transmitter and receiver filters for
modulation part I: Transmitter- data transmission over noisy
receiver design and bounds from channels. In Proc. of the 34th
information theory. IEEE Trans. Asilomar Conf. on Signals,
Information Theory, Systems and Computers, volume 2,
13(2):196 –208, April 1967. pages 1303 – 1307, October 2000.
[3] J.O. Coleman and D.W. Lytle. [10] Haykin Simon. “Adaptive
Linear programming techniques for filter theory” fourth edition ,
the control of intersymbol Pearson Education , Delhi, pp.
interference with hybrid FIR/analog 436-460.

SIP0201-9
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM
(SPRTOS)” MARCH 26-27 2011

SIP0201-10
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

Signal Acquisition and Analysis System Using


LabVIEW
Subhransu Padhee, Yaduvir Singh
Department of Electrical and Instrumentation Engineering
Thapar University, Patiala, 147004, Punjab
subhransu_padhee@yahoo.com, dryaduvirsingh@gmail.com

Abstract- In the present era virtual instrumentation super imposed in the data which comes from the
technique is considered as a separate discipline of field with the help of transducers and data
engineering education. It has replaced the acquisition system. After acquiring data from the
conventional technique of measurement and data field, the signal processing operation is performed.
acquisition and taken the instrumentation In signal processing operation, different noises
experiment in to a new level. With easy to use, which are super imposed in the original process
graphical programming enabled software, supported signal is removed and the signal is amplified so that
by dedicated, easy to use hardware virtual the signal keeps its original traits and the data
instrumentation has transformed the notion of which comes with the signal remains intact. After
engineering education and simulation based the signal processing part, the data is given to a data
experiments. processing algorithm which processes the data and
This paper gives a brief idea of the need and stores the data in a memory unit.
advantages of virtual instrumentation in engineering With the advantage of technology
education and discusses the need of distant personal computers with PCI, PXI/compact PCI,
laboratory in engineering education. It also PCMCIA, USB, IEEE 1394, ISA, VXI, serial and
develops a simple application for signal acquisition, parallel ports are used for data acquisition, test and
analysis and storage. measurement and automation. Personal computers
Keywords- LabVIEW, virtual instrumentation are linked with the real world process with the help
of OPC, DDE protocol and application software is
I. INTRODUCTION used to form a closed loop interaction between the
Acquiring multiple data, the data may be analog real world process, application software and
or discrete in nature from the field or process at personal computing unit. Many of the networking
high speed using multi channel data acquisition technologies that have already been available for a
system, processing the data with the help of a data long time in industrial automation (e.g., standard
processing algorithm and a computing device and and/or proprietary field and control level buses),
displaying the data for the user is the elementary besides having undertaken great improvements in
need of any industrial automation system [1,2,3,4]. the last few years, have also been progressively
Modern day process plants, construction sites, integrated by newly introduced connectivity
agricultural industry [11], petroleum, wireless solutions (Industrial Ethernet, Wireless LAN, etc.).
sensor network [16], power distribution network They have greatly contributed to the technological
[17], refinery industry, renewable energy system renewal of a large number of automation solutions
[10,28] and every other industry where data is of in already existing plants. Obviously, even the
prime importance use wireless data acquisition, data software technologies involved in the
processing and data logging equipments. Acquiring corresponding data exchange processes have been
data from the field with the help of different sensor greatly improved; as an example, today it is
is always challenging. Different kinds of noises are possible to use a common personal computer in

SIP0202-1
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

order to implement even complex remote converts the analog signal to the equivalent digital
supervisory tasks of simple as well as highly data. The equivalent digital data is then fed to the
sophisticated industrial plants. computer, which acts both as a controller and
This paper gives an overview of modern day display element.
industrial automation system comprising of data Once data has been acquired, there is a need
acquisition system and data loggers. This paper to store it for current and future reference. Today,
develops a secured data acquisition and analysis alternative methods of data storage embrace both
module using virtual instrumentation concept. With digital computer memory and that old traditional
the help of this system the operator can securely standby-paper. There are two principal areas where
login to the system and perform the desired signal recorders or data loggers are used. Recorders and
acquisition and analysis operation. The system also data loggers are used in measurements of process
stores the relevant data for future reference and variables such as temperature, pressure, flow, pH,
record keeping purpose. humidity; and also used for scientific and
engineering applications such as high-speed testing
II. INDUSTRIAL AUTOMATION SYSTEM (e.g., stress/strain), statistical analyses, and other
Most measurements begin with a transducer, a laboratory or off-line uses where a graphic or
device that converts a measurable physical quantity, digital record of selected variables is desired.
such as temperature, strain, or acceleration, to an Digital computer systems have the ability to
equivalent electrical signal. Transducers are provide useful trend curves on CRT displays that
available for a wide range of measurements, and could be analyzed.
come in a variety of shapes, sizes, and
specifications. Signal conditioning can include III. VIRTUAL INSTRUMENTATION IN DISTANT LAB
amplification, filtering, differential applications, To improve the learning methodology in
isolation, simultaneous sample and hold (SS&H), different discipline in engineering virtual
current-to-voltage conversion, voltage-to-frequency instrumentation is used. This technique is easy to
conversion, linearization and more. use, easy to understand and cost effective. The main
feature is that various simulations can be performed
with the help of programming, which is very
difficult to perform in hardware. State of art virtual
instrumentation system has been reported in
literature which enhances the learning experience of
the students of different discipline. Some of the
discipline where state of art virtual instrumentation
system has been developed are mechanical
engineering [6], power plant training [8],
electronics [9], control system [12], chemical
engineering [14], ultrasonic range measurement
[20], biomedical [21,22], power system [23,24],
electrical machine [25], intelligent control [31].
Figure 1: Block diagram of data acquisition and Laboratories in engineering and applied science
logging have important effects on student learning. Most
Figure 1 shows the schematic diagram of educational institutions construct their own
data acquisition system. Sensor is used to sense the laboratories individually. Alternatively, some
physical parameters from the real world. The output institutions establish laboratories, which can be
of the sensor is provided to the signal conditioning conducted remotely via internet. Different
element. The main purpose of signal conditioning researchers have proposed the concept of distant
element is to remove the noise of the signal and laboratory [7, 18, 19] using internet [27], and using
amplify the signal. The output of the signal intranet [26]. Researchers have proposed different
conditioning system is provided to ADC. The ADC hardware and software architectures for remote

SIP0202-2
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

laboratories. General structure of a remote


laboratory is almost the same in every academic
research: Remote clients, a server computer
equipped with an IO module and remote
experimental setup connected to the server.

Figure 3: Front panel of the system


This operator console consists of four buttons and
for security reason the operator has to login using
Figure 2: Architecture of remote laboratory authenticated username and password to access all
Figure 2 illustrates, the architecture of the other functionality of the system. Figure 4 shows
remote laboratory consists of a server computer the login screen. This screen appears when the login
with an industrial network card. Since the network button is pressed.
card is plugged in a PCI slot, it is called PCI card. It
provides required protocol operations for controller
area network.
IV. CASE STUDY
This section develops a signal processing
application using LabVIEW. This application can
be used to teach students about basics of virtual
instrumentation and signal processing. With this
application the student can get a basic knowledge
about signal processing and perform different
applications oriented experiments using LabVIEW Figure 4: Front panel of login screen for operator
without going in for CRO or DSO. Figure 3 shows
Figure 5 shows the data acquisition module of the
the front panel of the application.
system where there is control to set the desired
amplitude and frequency of the signal. Noise of
certain amplitude can be added with the signal. This
module shows both the time domain and frequency
domain representation of the noisy signal.

SIP0202-3
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

Figure 5: Front panel for time domain analysis of the acquired noisy signal

Figure 5 shows the time domain representation of the noisy signal, whereas Figure 6 shows the frequency domain representation of the signal; the frequency domain representation involves the Fourier analysis of the signal.

Figure 6: Front panel for frequency domain analysis of the acquired signal

The third module of the system is the analysis module. In this module the operator can select a certain portion of the signal using the pointer available. The selected portion of the signal is displayed in a subplot, and the DC value, RMS value, average value and mean value of that portion are displayed. Figure 7 shows the front panel for waveform analysis.

Figure 7: Front panel for analysis of the subset of the signal

These results can be analyzed and logged to a file for record keeping and further analysis.

V. CONCLUSIONS
This paper emphasizes the data acquisition, supervisory control and data logging aspects of an industrial process. These areas are of prime importance for computer control of an industrial process. The signal is acquired from the field, and different signal processing and analysis functions are performed on the selected portion of the acquired signal. The selected portions of the signal, along with their mathematical values, are stored in a log file for record keeping and future reference and analysis.

As future scope of the paper, a wireless web-based data acquisition, data logging and supervisory control system can be implemented. The main advantage of a wireless web-based data acquisition system is that any authorized person anywhere in the world can access the real-time process data with the help of the internet. The main concern of a web-based data logging and supervisory control system is the security of the data and the authentication of the user; to address this security need, a firewall can be implemented.


METHODS OF INTERCARRIER INTERFERENCE CANCELLATION FOR ORTHOGONAL FREQUENCY DIVISION MULTIPLEXING

Dr. R. L. Yadav, Prof., ECE Dept., Galgotia College of Engg. & Tech, Greater Noida, email: rlyadava@rediffmail.com
Mrs. Dipti Sharma, Sr. Lecturer, Apex Institute Of Tech., Rampur, email: dipti_sharma510@yahoo.co.in

Abstract- Orthogonal Frequency Division Multiple Access is a scheme which divides the available spectrum into subchannels. The subchannels are narrowband, which makes equalization very simple. Intercarrier interference in the subcarriers occurs due to frequency offset: OFDM is sensitive to frequency offset between the transmitted and received carrier frequencies. This results, first, in a reduction of the signal amplitude at the output of the filters matched to each of the carriers and, second, in the introduction of ICI from the other carriers. Two methods are investigated for combating the effects of ICI: ICI Self Cancellation (SC) and the Extended Kalman Filter (EKF) method. The methods are compared in terms of bandwidth efficiency and bit error rate. Simulations (up to 256-QAM) show that the EKF method performs better than the SC method.

Keywords- Orthogonal Frequency Division Multiplexing (OFDM); Inter Carrier Interference (ICI); Carrier to Interference Power Ratio (CIR); Self Cancellation (SC); Carrier Frequency Offset (CFO); Extended Kalman Filtering (EKF).

I. INTRODUCTION
The basic principle of OFDM is to split a high-rate data stream into a number of lower-rate streams that are transmitted simultaneously over a number of subcarriers [1].

One limitation of OFDM in many applications is that it is very sensitive to frequency errors caused by frequency differences between the local oscillators in the transmitter and the receiver [3]-[5]. Frequency offset causes rotation and attenuation of each of the subcarriers and intercarrier interference (ICI) between subcarriers [4]. Many methods have been developed to reduce this sensitivity to frequency offset, including windowing of the transmitted signal [6], [7] and the use of self ICI cancellation schemes [8]. In this paper, the effects of ICI are analysed and two solutions to combat ICI are presented. The first method is a self-cancellation scheme [1], in which redundant data is transmitted onto adjacent sub-carriers such that the ICI between adjacent sub-carriers cancels out at the receiver. The second method, the extended Kalman filter (EKF), statistically estimates the frequency offset and corrects it at the receiver using the estimated value [7]. The work presented in this paper concentrates on a quantitative ICI power analysis of the ICI cancellation scheme, which has not been studied previously. The average carrier-to-interference power ratio (CIR) is used as the ICI level indicator, and a theoretical CIR expression is derived for the proposed scheme.

II. OFDM SYSTEM DESCRIPTION
An OFDM system takes the input bit stream, which is multiplexed into N symbol streams, each with symbol period T, and each symbol stream is used to modulate parallel, synchronous sub-carriers [10].


The sub-carriers are spaced by 1/Ts in frequency; thus they are orthogonal over the interval (0, Ts). The N symbols are then mapped to bins of an inverse fast Fourier transform (IFFT). The IFFT bins correspond to the orthogonal sub-carriers in the OFDM symbol, so the OFDM symbol is the N-point IFFT of the Xm's, where the Xm's are the baseband symbols on each sub-carrier. The analog time-domain signal is obtained using a digital-to-analog (D/A) converter. At the receiver the signal is down-converted and transformed to a digital sequence by an analog-to-digital converter (ADC); the remaining time-domain samples are passed through a parallel-to-serial converter and demodulated by an N-point fast Fourier transform (FFT). The demodulated symbol is the N-point FFT of the received samples plus W(m), where W(m) corresponds to the FFT of the samples of w(n), the additive white Gaussian noise (AWGN) in the channel. The resulting Yi complex points are the complex baseband representation of the N modulated sub-carriers. As the broadband channel has been decomposed into N parallel sub-channels, each sub-channel needs an equalizer; these blocks are called frequency-domain equalizers (FEQ). The bits sent by the transmitter are thus received at high data rates at the receiver.

III. ICI SELF CANCELLATION SCHEME
A. Self-Cancellation
ICI self-cancellation is a scheme introduced by Zhao and Sven-Gustav Häggman [1] in order to combat and suppress ICI in OFDM. The input data is modulated onto a group of subcarriers with coefficients such that the ICI signals generated within that group cancel each other; it is therefore called a self-cancellation method.
1) Cancellation Method
The data pair (X, -X) is modulated onto two adjacent subcarriers (l, l+1). The ICI signals generated by subcarrier l are then cancelled out significantly by the ICI generated by subcarrier l+1. For a further reduction of ICI, the ICI cancellation demodulation scheme is used: the signal at the (k+1)th subcarrier is multiplied by -1 and then added to the one at the kth subcarrier, and the resulting data sequence is used for making the symbol decision.
2) ICI Cancelling Modulation
The ICI self-cancellation scheme requires that the transmitted signals be constrained such that X(1) = -X(0), X(3) = -X(2), ..., X(N-1) = -X(N-2). This assignment of transmitted symbols allows the received signal on subcarriers k and k+1 to be written in terms of the ICI coefficient S'(l-k), defined as
S'(l-k) = S(l-k) - S(l+1-k)   (5)

Fig. 1 Comparison of |S(l-k)|, |S'(l-k)| and |S''(l-k)| for N = 128 and ε = 0.4

3) ICI Cancelling Demodulation
ICI modulation introduces redundancy in the received signal, since each pair of subcarriers transmits only one data symbol. This redundancy can be exploited to improve the system power performance, although it surely decreases the bandwidth efficiency. To take advantage of this redundancy, the received signal at the (k+1)th subcarrier, where k is even, is subtracted from the kth subcarrier.
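The relative sizes of the three coefficient sets plotted in Fig. 1 can be checked numerically. The sketch below is an illustration rather than the authors' simulation code: it assumes the closed-form ICI coefficient used in the self-cancellation literature (cf. [6]) and then forms S'(l-k) and S''(l-k) exactly as in Eqs. (5) and (7).

import numpy as np

def ici_coeff(diff, eps, N):
    # S(l-k): closed-form ICI coefficient for subcarrier spacing 'diff' and
    # normalized frequency offset eps (expression used in the literature, cf. [6])
    x = diff + eps
    return (np.sin(np.pi * x) / (N * np.sin(np.pi * x / N))
            * np.exp(1j * np.pi * (1.0 - 1.0 / N) * x))

N, eps = 128, 0.4
k = np.arange(N)                         # l - k values 0 .. N-1
S  = ici_coeff(k, eps, N)
S1 = S - ici_coeff(k + 1, eps, N)                                   # S'(l-k), Eq. (5)
S2 = -ici_coeff(k - 1, eps, N) + 2 * S - ici_coeff(k + 1, eps, N)   # S''(l-k), Eq. (7)

for name, coeff in [("|S|", S), ("|S'|", S1), ("|S''|", S2)]:
    print(name, 20 * np.log10(np.abs(coeff[1:8])))   # a few off-diagonal terms in dB

Running this reproduces the ordering seen in Fig. 1: for most values of l-k the magnitude of S''(l-k) is the smallest, followed by S'(l-k) and then S(l-k).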


Fig. 2 An example of S(l-k) for N = 16 and l = 0: (a) amplitude of S(l-k), (b) real part of S(l-k), (c) imaginary part of S(l-k)

This subtraction is expressed mathematically in Eq. (6). Subsequently, the ICI coefficient for the resulting signal becomes
S''(l-k) = -S(l-k-1) + 2S(l-k) - S(l-k+1)   (7)
When compared with the two previous ICI coefficients, S(l-k) for the standard OFDM system and S'(l-k) for the ICI cancelling modulation, S''(l-k) has the smallest ICI coefficients for the majority of l-k values, followed by S'(l-k) and S(l-k). The combined modulation and demodulation method is called the ICI self-cancellation scheme. The theoretical CIR can be derived from these coefficients (Eq. (8)).

As mentioned above, the redundancy in this scheme reduces the bandwidth efficiency by half. This can be compensated by transmitting signals of a larger alphabet size. Fig. 3 shows the model of the proposed method.

Fig. 3 OFDM simulation model

The ICI self-cancellation scheme can be combined with error-correction coding. The proposed scheme provides significant CIR improvement, which has been studied theoretically and by simulation. Fig. 4 compares the theoretical CIR curve of the ICI self-cancellation scheme, calculated from Eq. (8), with the CIR of a standard OFDM system. As expected, the CIR is greatly improved using the ICI self-cancellation scheme; the improvement can be greater than 15 dB for 0 < ε < 0.5.

Fig. 4 CIR versus ε for a standard OFDM system

IV. EXTENDED KALMAN FILTERING
A. Problem Formulation
A state-space model of the discrete Kalman filter is defined as
z(n) = a(n) d(n) + v(n)   (9)
In this model z(n) has a linear relationship with the desired value d(n). By using the discrete Kalman filter, d(n) can be recursively estimated based on the observation of z(n), and the updated estimate in each recursion is optimum in the minimum mean square sense. The received symbols at the receiver are given by Eqs. (10) and (11). In order to estimate ε efficiently in computation, we build an approximate linear relationship using the first-order Taylor expansion (Eqs. (12)-(17)), where ε^(n-1) is the estimate of ε(n-1), and obtain the following relationship,


which has the same form as (9), i.e., z(n) is linearly related to d(n). As a linear approximation is involved in the derivation, the filter is called the extended Kalman filter (EKF).

B. ICI Cancellation
There are two stages in the EKF method to reduce intercarrier interference.
1) Offset Estimation Scheme
For estimating the quantity ε(n) using an EKF in each OFDM frame, the state equation is built as
ε(n) = ε(n-1)   (18)
i.e., in this case we are estimating an unknown constant ε. This constant is distorted by a non-stationary process x(n), an observation of which is the preamble symbols preceding the data symbols in the frame. The observation equation relates the received preamble y(n) to x(n) and ε, where y(n) denotes the received preamble symbols distorted in the channel, w(n) the AWGN, and x(n) the IFFT of the preambles X(k) that are transmitted, which are known at the receiver. Assume the Np preambles preceding the data symbols in each frame are used as a training sequence and that the variance σ² of the AWGN w(n) is stationary. The computation procedure is described as follows.
1. Initialize the estimate ε^(0) and the corresponding state error P(0).
2. Compute H(n), the derivative of y(n) with respect to ε(n) at ε^(n-1), the estimate obtained in the previous iteration.
3. Compute the time-varying Kalman gain K(n) using the error variance P(n-1), H(n), and σ².
4. Compute the estimate y^(n) using x(n) and ε^(n-1), i.e. based on the observations up to time n-1, and compute the error between the true observation y(n) and y^(n).
5. Update the estimate ε^(n) by adding the K(n)-weighted error between the observation y(n) and y^(n) to the previous estimate ε^(n-1).
6. Compute the state error P(n) with the Kalman gain K(n), H(n), and the previous error P(n-1).
7. If n is less than Np, increment n by 1 and go to step 2; otherwise stop.
It is observed that the actual errors of the estimate ε^(n) from the ideal value ε(n) are computed in each step and are used for adjustment of the estimate in the next step. The pseudo-code of the computation can be summarized as: initialize P(0) and ε^(0); for n = 1, 2, ..., Np, compute the quantities in steps 2-6. (A code sketch of this recursion is given at the end of this section.)
2) Offset Correction Scheme
The ICI distortion in the data symbols x(n) that follow the training sequence can then be mitigated by multiplying the received data symbols y(n) with the complex conjugate of the estimated frequency offset term and applying the FFT.
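The seven steps above can be realized in a few lines. The following is a minimal sketch, not the paper's MATLAB implementation: it assumes the offset model y(n) = x(n)·exp(j2πnε/N) + w(n) for the preamble and uses one standard way of forming H(n), K(n) and the updates of steps 2-6.

import numpy as np

def estimate_offset(x, y, sigma2, eps0=0.0, p0=1.0):
    # EKF-style recursion over the preamble samples, following steps 1-7 above.
    # Assumed model: y(n) = x(n) * exp(j*2*pi*n*eps/N) + w(n)
    N = len(x)
    eps, P = eps0, p0                                      # step 1
    for n in range(N):
        phase = np.exp(1j * 2 * np.pi * n * eps / N)
        H = 1j * 2 * np.pi * n / N * x[n] * phase          # step 2: dy/d(eps)
        K = P * np.conj(H) / (np.abs(H) ** 2 * P + sigma2) # step 3: Kalman gain
        err = y[n] - x[n] * phase                          # step 4: innovation
        eps = eps + np.real(K * err)                       # step 5: update estimate
        P = (1.0 - np.real(K * H)) * P                     # step 6: update state error
    return eps                                             # step 7: loop ends at Np

# Toy check with a known offset on a random QPSK-like preamble
rng = np.random.default_rng(0)
N, true_eps, sigma2 = 64, 0.15, 1e-3
x = np.exp(1j * np.pi / 2 * rng.integers(0, 4, N))
y = x * np.exp(1j * 2 * np.pi * np.arange(N) * true_eps / N)
y += np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print("estimated offset:", estimate_offset(x, y, sigma2))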


V. SIMULATED RESULT ANALYSIS
A. Performance
For the simulations in this paper, MATLAB was employed with its Communications Toolbox and Communications Blockset for all data runs. To compare the two schemes, the BER performance curve is used. The OFDM transceiver system was implemented as specified by Fig. 3. Quadrature amplitude modulation (64-, 128- and 256-QAM) is used.

PARAMETERS              VALUES
Number of carriers      768
Modulation              QAM
Frequency offset        [0, 0.15, 0.30]
No. of OFDM symbols     100
Bits per OFDM symbol    N*log2(M)
Eb/No                   1:15
IFFT size               1024

Fig. 5 BER performance with ICI cancellation, ε = 0.05, for 64-QAM
Fig. 6 BER performance with ICI cancellation, ε = 0.15 and ε = 0.30, for 128-QAM
Fig. 7 BER performance with ICI cancellation, ε = 0.15 and ε = 0.30, for 256-QAM

S.No.   Method   ε = 0.05   ε = 0.15   ε = 0.30
1       SC       13 dB      12 dB      11 dB
2       EKF      12 dB      13 dB      14 dB
Required SNR and improvement for a BER of 10^-6 for QAM

VI. CONCLUSION
In this paper, the performance of OFDM systems in the presence of frequency offset between the transmitter and the receiver has been studied in terms of the carrier-to-interference ratio (CIR) and the bit error rate (BER) performance. Inter-carrier interference (ICI), which results from the frequency offset, degrades the performance of the OFDM system. Two methods were explored in this paper for mitigation of the ICI: the ICI self-cancellation (SC) scheme and the extended Kalman filter (EKF) method for estimation and cancellation of the frequency offset, and a comparison was made between these two existing techniques. The choice of which method to employ depends on the specific application. For example, self-cancellation does not require very complex hardware or software for implementation; however, it is not bandwidth efficient, as there is a redundancy of 2 for each carrier. On the other hand, the EKF method does not reduce bandwidth efficiency, as the frequency offset can be estimated from the preamble of the data sequence in each OFDM frame; however, its implementation is more complex than the SC method, and it has the most complex implementation of the two. In addition, this method requires a training sequence to be sent before the data symbols for estimation of the frequency offset. It can be adopted for the receiver design for IEEE 802.11a, because this standard specifies preambles for every OFDM frame. This model can be easily adapted to a flat-fading channel with perfect channel estimation. Further work can be done by performing simulations to investigate the performance of these ICI cancellation schemes in multipath fading channels without perfect channel information at the receiver; in this case, the multipath fading may hamper the performance of these ICI cancellation schemes.

REFERENCES
[1] P. Tan, N.C. Beaulieu, "Reduced ICI in OFDM systems using the better than raised cosine pulse," IEEE Commun. Lett., vol. 8, no. 3, pp. 135-137, Mar. 2004.
[2] H. M. Mourad, "Reducing ICI in OFDM systems using a proposed pulse shape," Wireless Person. Commun., vol. 40, pp. 41-48, 2006.
[3] V. Kumbasar and O. Kucur, "ICI reduction in OFDM systems by using improved sinc power pulse," Digital Signal Processing, vol. 17, issue 6, pp. 997-1006, Nov. 2007.
[4] Tiejun (Ronald) Wang, John G. Proakis, and James R. Zeidler, "Techniques for suppression of intercarrier interference in OFDM systems," Wireless Communications and Networking Conference, IEEE, vol. 1, pp. 39-44, 2005.
[5] P. H. Moose, "A technique for orthogonal frequency division multiplexing frequency offset correction," IEEE Transactions on Communications, vol. 42, no. 10, 1994.
[6] Y. Zhao and S. Häggman, "Intercarrier interference self-cancellation scheme for OFDM mobile communication systems," IEEE Transactions on Communications, vol. 49, no. 7, 2001.
[7] R. E. Ziemer, R. L. Peterson, Introduction to Digital Communications, 2nd edition, Prentice Hall, 2002.
[8] J. Armstrong, "Analysis of new and existing methods of reducing intercarrier interference due to carrier frequency offset in OFDM," IEEE Transactions on Communications, vol. 47, no. 3, pp. 365-369, 1999.


[9] N. Al-Dhahir and J. M. Cioffi, "Optimum finite-length equalization for multicarrier transceivers," IEEE Transactions on Communications, vol. 44, no. 1, pp. 56-64, 1996.
"Systems", (IJCSIS) International Journal of Computer Science and Information Security, Vol. 6, No. 3, 2009.


OBJECT DETECTION BASED ON CROSS-CORRELATION


USING PARTICLE SWARM OPTIMIZATION
Sudhakar Singh, Yaduvir Singh

Department of Electrical and Instrumentation Engineering

Thapar University, Patiala, Punjab

sudhakarsingh86@gmail.com

Abstract- In this paper a novel method for object detection in images is proposed. The method is based on image template matching. The conventional template matching algorithm based on cross-correlation requires complex calculation and a large time for object detection, which makes it difficult to use in real-time applications. In the proposed work, a particle swarm optimization (PSO) based algorithm is used for detection of an object in an image. This algorithm reduces the time required for object detection compared with the conventional template matching algorithm: it can detect the object in a smaller number of iterations, and hence with less time and energy, than conventional template matching. This feature makes the method capable of real-time implementation.

Keywords: object detection, object tracking and image matching.

I. INTRODUCTION
It is easy for a human, even a child, to detect the position of letters, objects and numbers in an image, but for a computer to solve these types of problems quickly is a very challenging task. In the last decades the computer's ability to perform huge amounts of calculation and handle information flows we never thought possible ten years ago has emerged. Despite this, a computer can only extract little information from an image in comparison to a human being. Object detection is a fundamental component of artificial intelligence and computer vision. Interest in pattern recognition is growing fast in order to deal with the prohibitive amount of information we encounter in our daily life; automation is desperately needed to handle this information explosion. The way the human brain filters out useful information is not fully known, and this skill has not been merged into computer vision science. This paper proposes to implement a system that is able to detect objects in an image faster. Artificial intelligence is an important topic of current computer science research; in order to act intelligently, a machine should be aware of its environment. Visual information is essential for humans, so, among many different possible sensors, cameras seem very important. Automatically analyzing images and image sequences is the area of research usually called "computer vision". Image matching is a key point for object detection; it has a large number of applications, including navigation, guidance, automatic surveillance, robot vision, and the mapping sciences. Cross-correlation and related techniques are dominantly used in image matching applications. It is difficult to use the conventional template matching algorithm based on cross-correlation in real-time applications due to the requirement of complex calculation and the large time needed for object detection. The shortcomings of this class of image matching methods have caused a slow-down in the development of novel operational automated correlation systems. In this paper, we propose a method for object detection. It


consists of three stages: (i) image matching using templates; (ii) object detection; and (iii) implementation of the PSO technique. The proposed PSO-based algorithm gives better results compared with the conventional algorithm.

II. LITERATURE REVIEW
F. Ackermann [1] proposed an image matching algorithm based on least-squares window matching. Several common object detection and tracking methods are surveyed in [2], such as point detectors and background subtraction [7]. In fact, color is one of the most widely used features to represent object appearance for detection and tracking [5], and most object detection and tracking methods use pre-specified models for object representation. W. Forstner [3] proposed a feature-based correspondence algorithm for image matching. A. W. Gruent [4] showed that adaptive least squares correlation is a very potent and flexible technique for all kinds of data matching problems. J. Bala et al. [5] address the problem of crafting visual routines for detection tasks. C. F. Olson [6] considered image matching applications such as tracking and stereo matching. Kwan-Ho Lin and Kin-Man Lam [8] presented a new method for locating objects based on valley field detection and measurement of fractal dimensions. Yacov Hel-Or [10] proposed a novel approach to pattern matching in which time complexity is reduced by two orders of magnitude compared to traditional approaches. Kun Peng and Liming Chen [9] presented a robust eye detection algorithm for gray intensity images.

III. OBJECT DETECTION
Object detection attempts to determine the existence of a specific object in an image and, if the object is present, then it determines the location, size and shape of that object. In computer vision, object detection and tracking is an active research area which has attracted extensive attention from multi-disciplinary fields, and it has wide applications in many fields like service robots, surveillance systems, public security systems, and virtual reality interfaces. Detection and tracking of moving objects like cars and people are of particular concern, especially flexible and robust tracking algorithms under dynamic environments, where lighting conditions may change and occlusions may happen. The general process of object detection consists of two steps. The first step is building models: according to the prior knowledge of the objects of interest, a feature model is built up to describe the target object and separate it from other objects and backgrounds; and since most images are noisy, statistical information is usually adopted to quantify the features. The second step is to find a particular region in the image, called the area of interest (AOI), which either can best fit the object model or has the highest similarity with the model. Many algorithms developed recently in this area relate to human face detection and recognition, due to its potential applications in security and surveillance; yet generic, reliable and fast human face detection was, until very recently, impossible to achieve in real time. The concepts involved in object detection, object recognition, and object tracking often overlap: object tracking dynamically locates objects by determining their position in each frame, while object detection and recognition has made significant progress in the last few years.

IV. TEMPLATE MATCHING BASED ON CROSS CORRELATION
Template matching is a popular method for pattern recognition. It is defined below.
Definition: Let I be an image of dimension m×n and T be another image of dimension p×q such that p<m and q<n; then template matching is defined as a search method which finds the portion of I of size p×q with which T has the maximum cross-correlation coefficient. The normalized cross-correlation coefficient is defined as:


Y(x, y) = [ Σ_s Σ_t PI(x+s, y+t) · PT(s, t) ] / sqrt[ Σ_s Σ_t PI²(x+s, y+t) · Σ_s Σ_t PT²(s, t) ]   (1)

where PI(x+s, y+t) = I(x+s, y+t) − Ī(x, y), PT(s, t) = T(s, t) − T̄, s ∈ {1, 2, ..., p}, t ∈ {1, 2, ..., q}, x ∈ {1, 2, ..., m−p+1} and y ∈ {1, 2, ..., n−q+1}. Also

Ī(x, y) = (1/pq) Σ_s Σ_t I(x+s, y+t)   (2)

T̄ = (1/pq) Σ_s Σ_t T(s, t)   (3)

The value of the cross-correlation coefficient Y ranges over [-1, +1]. A value of +1 indicates that T is completely matched with I at (x, y), and -1 indicates complete disagreement. For template matching, the template T slides over I and Y is calculated for each coordinate (x, y). After completing this calculation, the point which exhibits the maximum Y is referred to as the match point.

V. PARTICLE SWARM OPTIMIZATION
The Particle Swarm Optimization (PSO) algorithm is a kind of evolutionary computational technique developed by Kennedy and Eberhart in 1995 [5]. Like other evolutionary techniques, PSO uses a population of potential solutions to search the exploration space. In PSO, the population dynamics resembles the movement of a birds' flock searching for food, while social sharing of information takes place and individuals can gain from the discoveries and previous experience of all other companions. Thus, each companion (called a particle) in the population (called the swarm) is assumed to "fly" over the search space in order to find promising regions of the landscape. Let particle i of the swarm be represented by the d-dimensional vector xi = (xi1, xi2, ..., xid) and the best particle of the swarm be denoted by the index g. The best previous position of particle i is recorded and represented as pi = (pi1, pi2, ..., pid). The position change (velocity) of particle i is Vi = (Vi1, Vi2, ..., Vid). Particles update their velocity and position by tracking two kinds of 'best' value. One is the personal best (pbest), which is the location of the particle's highest fitness value. The other is the global best (gbest), which is the location of the overall best value obtained by any particle in the population. Particles update their positions and velocities according to the following equations:

vj(i) = w·vj(i-1) + r1·c1·(pbest(j) − xj(i)) + r2·c2·(gbest − xj(i))   (4)
xj(i) = xj(i-1) + vj(i)   (5)

where vj(i) is the velocity of the jth particle in the ith iteration, xj(i) is the corresponding position, pbest and gbest are the corresponding personal best and global best respectively, w is the inertia weight, c1 and c2 are the acceleration parameters, and r1 and r2 are random numbers.
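As an illustration of update rules (4) and (5), the fragment below applies them to a generic fitness function. The variable names mirror the symbols above (w, c1, c2, r1, r2); the fitness used at the end is only a placeholder, not part of the paper's method.

import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()          # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (4)
        x = x + v                                                   # Eq. (5)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder fitness: a simple sphere function
print(pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2))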


A number of scientists have created computer simulations of various interpretations of the movement of organisms in a bird flock or fish school; notably, Reynolds, and Heppner and Grenander, presented simulations of bird flocking. It became obvious during the development of the particle swarm concept that the neighbours of the population of agents are more like a swarm than a flock. The term swarm has a basis in the literature; in particular, the authors use the term in accordance with a paper by Millonas, who developed his models for applications in artificial life and articulated five basic principles of swarm intelligence. First is the proximity principle: the population should be able to carry out simple space and time computations. Second is the quality principle: the population should be able to respond to quality factors in the environment. Third is the principle of diverse response: the population should not commit its activities along excessively narrow channels. Fourth is the principle of stability: the population should not change its mode of behaviour every time the environment changes. Fifth is the principle of adaptability: the population must be able to change its behaviour mode when it is worth the computational price.

Note that principles four and five are opposite sides of the same coin, and the particle swarm optimization concept and paradigm presented here seem to adhere to all five principles. Basic to the paradigm are n-dimensional space calculations carried out over a series of time steps, with the population responding to the quality factors (the local and global best values). Further, Reeves discusses particle systems consisting of clouds of primitive particles as models of diffuse objects such as clouds, fire and smoke; thus the label the authors have chosen to represent the optimization concept is particle swarm.

Figure 1: Flow chart of PSO

VI. EXPERIMENTAL RESULTS AND DISCUSSION
The particle swarm optimization algorithm is applied to an image and different templates to solve the problem of object detection. Each image is tested more than 15 times.
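A compact sketch of the overall detection pipeline of Sections IV and V follows. It is an illustration of the idea, not the authors' code: it assumes grayscale numpy images, uses the normalized cross-correlation of Eq. (1) as the fitness, and searches the (row, column) position of the template with the same update rules as above. The toy image at the end is synthetic.

import numpy as np

def ncc(image, template, x, y):
    # Normalized cross-correlation of Eq. (1) at top-left position (x, y)
    p, q = template.shape
    patch = image[x:x + p, y:y + q].astype(float)
    pd = patch - patch.mean()
    td = template.astype(float) - template.mean()
    denom = np.sqrt((pd ** 2).sum() * (td ** 2).sum())
    return 0.0 if denom == 0 else float((pd * td).sum() / denom)

def pso_match(image, template, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    m, n = image.shape
    p, q = template.shape
    hi = np.array([m - p, n - q], dtype=float)
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, (n_particles, 2)) * hi       # candidate (row, col) positions
    v = np.zeros_like(x)
    score = lambda pos: ncc(image, template, int(pos[0]), int(pos[1]))
    pbest = x.copy()
    pbest_val = np.array([score(pt) for pt in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (4)
        x = np.clip(x + v, 0, hi)                                   # Eq. (5), kept in range
        vals = np.array([score(pt) for pt in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest.astype(int), pbest_val.max()

# Toy example: a smooth synthetic image; the template is cut from rows 40-59, cols 70-89,
# so the best position found should lie near (40, 70) with NCC close to 1.
yy, xx = np.mgrid[0:120, 0:120]
img = np.exp(-((xx - 80) ** 2 + (yy - 50) ** 2) / 800.0) + 0.4 * xx / 120.0
tpl = img[40:60, 70:90].copy()
print(pso_match(img, tpl))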


Figure 2: Test image-1
(2.1) Left eye taken as a template (T1.1); (2.2) Right eye taken as a template (T1.2); (2.3) Nose taken as a template (T1.3)

Figure 3: Test image-2
(3.1) Right eye taken as a template (T2.1); (3.2) Left eye taken as a template (T2.2); (3.3) Nose taken as a template (T2.3)

Figure 4: Test image-3
(4.1) Right eye taken as a template (T3.1); (4.2) Left eye taken as a template (T3.2); (4.3) Nose taken as a template (T3.3)

Table 1 below shows the comparison of the PSO-based and conventional algorithms.

Images      Templates      Iterations   Conventional algorithm   PSO algorithm      % time reduced
                                         time taken (sec.)        time taken (sec.)  by PSO
Image (1)   Template 1.1   100          57.38                    3.90               93.21
            Template 1.2   100          110.14                   4.14               96.24
            Template 1.3   100          130.56                   4.36               96.66
Image (2)   Template 2.1   100          111.70                   4.46               96.01
            Template 2.2   100          120.60                   4.53               96.26
            Template 2.3   100          143.34                   4.65               96.75
Image (3)   Template 3.1   100          59.47                    3.72               93.75
            Template 3.2   100          52.43                    3.67               93.01
            Template 3.3   100          74.37                    3.89               94.77
Table 1: Comparison of conventional and PSO based algorithms


By the conventional algorithm, the time taken for detection of the right eye (object) in test image 1 by matching of template 1.1 is 57.38 sec, while with the proposed algorithm the time taken is 3.90 sec; hence the time is reduced by up to 93.21%. By the conventional algorithm, the time taken for detection of the left eye (object) in test image 1 by matching of template 1.2 is 110.14 sec, while with the proposed algorithm the time is 4.14 sec, a reduction of up to 96.24%. By the conventional algorithm, the time taken for detection of the nose (object) in test image 1 by matching of template 1.3 is 130.56 sec, while with the proposed algorithm the time taken is 4.34 sec, a reduction of up to 96.66%. By the conventional algorithm, the time taken for detection of the right eye (object) in test image 2 by matching of template 2.1 is 111.70 sec, while with the proposed algorithm the time taken is 4.46 sec, a reduction of up to 96.01%. By the conventional algorithm, the time taken for detection of the left eye (object) in test image 2 by matching of template 2.2 is 120.14 sec, while with the proposed algorithm the time taken is 4.53 sec, a reduction of up to 96.26%. By the conventional algorithm, the time taken for detection of the left eye (object) in test image 2 by matching of template 2.3 is 143.34 sec, while with the proposed algorithm the time taken is 4.65 sec, a reduction of up to 96.75%. By the conventional algorithm, the time taken for detection of the left eye (object) in test image 3 by matching of template 3.1 is 59.47 sec, while with the proposed algorithm the time taken is 3.72 sec, a reduction of up to 93.75%. By the conventional algorithm, the time taken for detection of the left eye (object) in test image 3 by matching of template 3.2 is 52.43 sec, while with the proposed algorithm the time taken is 3.62 sec, a reduction of up to 93.01%. By the conventional algorithm, the time taken for detection of the object in test image 3 by matching of template 3.3 is 74.37 sec, while with the proposed algorithm the time is 3.89 sec, a reduction of up to 94.77%. Thus PSO is successfully employed to solve the object detection problem. The results show that the proposed method is capable of obtaining higher-quality solutions efficiently; here the time taken is considered as the efficiency measure. It is clear from the results that the proposed PSO-based method can avoid the shortcoming (the large time taken) of the old template matching algorithm and can provide a higher-quality solution with better computation efficiency.

VII. CONCLUSIONS
When the sample test images are tested with the PSO-based algorithm for detecting the position of an object, it is found that the algorithm is capable of detecting the position of the object in the image in much less time than the conventional template matching algorithm. The PSO-based algorithm has superior features, including high-quality solutions, stable convergence characteristics and good computation efficiency. The future work of this paper is to detect the exact location of the object by segmentation, by finding the area and perimeter of the object.

REFERENCES
1. Ackermann, F., "Digital image correlation: Performance and potential application in photogrammetry," Photogrammetric Record, 11, 1984.
2. T. Peli, "An algorithm for recognition and localization of rotated and scaled objects," Proceedings of the IEEE, 69, 1981, pp. 483-485.
3. Foerstner, W., "Quality assessment of object location and point transfer using digital image correlation techniques," International Archives of Photogrammetry and Remote Sensing, vol. XXV, A3a, Commission III, Rio de Janeiro, 1984.


4. A W Gruent, “Adaptive least squares


correlation: A powerful image matching
Technique.”, South African Journal of
Photogrammetry, Remote Sensing &
Cartography, 14(3):175–187, 1985.
5. J. Bala, K. DeJong, J. Huang, H. Vafaie, H.
Wechsler, “Visual routine for eye detection
using hybrid genetic architectures,”
International Conference on Pattern
Recognition, vol. 3, pp. 606-610, 1996.
6. C.F. Olson, “Maximum-likelihood template
matching,” IEEE Conference on Computer
Vision and Pattern Recognition, vol. 2, pp.
52-57, 2000.
7. Feng Zhao, Qingming Huang, Wen Gao,
“Methods of image matching by normalized
cross-correlation”.
8. Kwan-Ho Lin, Kin-Man Lam and Wan-Chi
Siu, "Locating the Eye in Human Face
Images Using Fractal Dimensions," IEE
Proceedings - Vision, Image and Signal
Processing, vol. 148, no. 6, pp. 413-421,
2001.
9. Kun Peng, Liming Chen, Su Ruan, Georgy
Kukharev, “A Robust and Efficient
Algorithm for Eye Detection on Gray
Intensity Face,” Lecture Notes in Computer
Science – Pattern Recognition and Image
Analysis, pp. 302-308, 2005.
10. Yacov Hel-Or, Hagit Hel-Or, “Real-Time
Pattern Matching Using Projection Kernels”,
IEEE transactions on pattern analysis and
machine pattern analysis and machine
intelligence, Vol. 27, No. 9, September,
2005.
11. Zeng Yan et al., "A New Background Subtraction Method for On-road Traffic," Journal of Image and Graphics, vol. 13, no. 3, pp. 593-599, March 2008.
12. Wei-feng Liu et al., "A Target Detection Algorithm Based on Histogram Feature and Particle Swarm Optimization," 2009 Fifth ICONC.


“Multisegmentation through wavelets:


Comparing the efficacy of Daubechies vs
Coiflets"
Madhur Srivastava,Member, IEEE, Yashwant Yashu, Member, IETE, Satish K. Singh,Member, IEEE,
Prasanta K. Panigrahi

Abstract--- In this paper, we carry out a comparative study of the efficacy of wavelets belonging to the Daubechies and Coiflet families in achieving image segmentation through a fast statistical algorithm. The fact that wavelets belonging to the Daubechies family optimally capture polynomial trends, while those of the Coiflet family satisfy the mini-max condition, makes this comparison interesting. In the context of the present algorithm, it is found that the performance of Coiflet wavelets is better than that of Daubechies wavelets.

Keywords: Peak Signal to Noise Ratio, Segmentation, Standard deviation, Thresholding, Weighted mean.

Madhur Srivastava is a final year B.Tech student in the Department of Electronics and Communication Engineering at Jaypee University of Engineering and Technology, Guna, India; e-mail: madhur.manas@gmail.com
Yashwant Yashu is a final year B.Tech student in the Department of Electronics and Communication Engineering at Jaypee University of Engineering and Technology, Guna, India; e-mail: yashwantyashu.jiet@gmail.com
Satish K. Singh is Assistant Professor in the Department of Electronics and Communication Engineering at Jaypee University of Engineering and Technology, Guna, India; e-mail: satish432002@gmail.com
Prasanta K. Panigrahi is Professor in the Department of Physics at Indian Institute of Science Education and Research, Kolkata, India (Phone No. +91-9748918201); e-mail: pprasanta@iiserkol.ac.in

I. INTRODUCTION
Thresholding of an image is done to reduce the storage space, increase the processing speed and simplify the manipulation, as fewer levels are present compared to the 256 levels of a normal image. Primarily, thresholding is of two types - bi-level and multi-level [1]. Bi-level thresholding consists of two values - one below the threshold and another above it - while in multilevel thresholding, different values are assigned to different ranges of threshold levels. Various thresholding techniques have been categorized on the basis of histogram shape, clustering, entropy and object attributes [2].

The wavelet transform is a very significant tool in the field of image processing. The wavelet transform of an image comprises four components - approximation, horizontal, vertical and diagonal. The process is recursively applied to the approximation component for further decomposition of the image until only one coefficient is left in the approximation part [3-5].

As is well known, the Daubechies family is useful in extracting polynomial trends through its low-pass coefficients satisfying the vanishing-moment condition

∫ x^n ψ_j,k(x) dx = 0,   n ≤ N   (1)

This is due to the fact that the wavelets are

ψ_j,k(x) = 2^(j/2) ψ(2^j x − k)   (2)

The value of N depends on the particular member of the Daubechies family. This property of the Daubechies basis makes them well suited for isolating smooth polynomial features in a given image. The Coiflet coefficients, on the other hand, satisfy the mini-max condition, i.e., the maximum

error in extracting local features is minimized in this basis set. Hence, it is worth comparing the behavior of the corresponding wavelets at low-pass coefficients from the perspective of the proposed algorithm.

Fig. 1. Block diagram of the approach used.

II. APPROACH
The thresholding applied in the wavelet domain takes into account that the majority of coefficients lie near zero, while the coefficients representing large differences are few and lie at the extreme ends of the histogram plot for each horizontal, vertical and diagonal component. The coefficients with large differences represent the most significant information of the image. Hence, the procedure provides for variable-size segmentation, with a bigger block size around the mean and smaller blocks at the ends of the histogram plot [6]. The methodology used, as shown in Fig. 1, is the following (a code sketch follows the list):
1. Segregate the color image into its Red, Green and Blue components.
2. Take the 2D wavelet transform of each component at any level, and do the following for each Horizontal, Vertical and Diagonal part of every Red, Green and Blue component:
   - Threshold the coefficients using the weighted mean and variance of each sub-band of the histogram of coefficients.
   - Thresholding is done with a broader block size around the mean and a finer block size at the ends of the histogram.
3. Take the inverse wavelet transform of each component.
4. Reconstruct the image by concatenating the Red, Green and Blue components.
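The sketch below is a minimal stand-in for steps 1-4, assuming the PyWavelets package for the 2D transform. The bin-placement rule in quantize_subband is a simplified illustration of mean-and-variance based thresholding, not the authors' exact weighted-mean procedure of [6]; the PSNR helper matches the quality measure reported in Table 1.

import numpy as np
import pywt

def quantize_subband(c, n_levels=5):
    # Illustrative stand-in for the thresholding rule: map each coefficient to a bin
    # centre; bin edges sit at mean +/- multiples of the std, so bins are coarse near
    # the mean while the extreme coefficients keep their own bins.
    mu, sigma = c.mean(), c.std() + 1e-12
    edges = mu + sigma * np.linspace(-2, 2, n_levels - 1)
    idx = np.digitize(c, edges)
    centres = np.concatenate(([c.min()], (edges[:-1] + edges[1:]) / 2, [c.max()]))
    return centres[idx]

def threshold_channel(channel, wavelet='coif1', n_levels=5):
    # Step 2: one-level 2D DWT, threshold H, V and D sub-bands, then invert (step 3)
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), wavelet)
    details = tuple(quantize_subband(d, n_levels) for d in (cH, cV, cD))
    return pywt.idwt2((cA, details), wavelet)

def psnr(original, reconstructed):
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Steps 1 and 4: process R, G and B separately and recombine (toy image shown here)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3)).astype(float)
recon = np.stack([threshold_channel(img[..., c])[:64, :64] for c in range(3)], axis=-1)
print("PSNR (dB):", psnr(img, recon))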

III. RESULTS AND OBSERVATIONS
The proposed algorithm is tested on a variety of standard images using Daubechies and Coiflet wavelets. The results for PSNR and the size of the reconstructed image at different threshold levels are shown in Table 1. The numbers of threshold levels taken are 3, 5 and 7. Figure 2 shows the graph of PSNR with respect to threshold levels for the Lenna image.

Table 1. PSNR and size of reconstructed images using different Daubechies and Coiflet wavelets.

Image    Threshold                dB2    dB4    dB6    dB8    coif1   coif2   coif3   coif4   coif5
Name     Level

Lenna 3 PSNR(dB) 34.45 35.19 35.52 35.71 34.50 35.23 35.48 35.61 35.69
Size(kB) 36.2 36.5 36.3 36.2 36.4 36.2 36.4 36.3 36.3
5 PSNR(dB) 36.41 37.13 37.41 37.53 36.5 37.19 37.42 37.54 37.62
Size(kB) 36.2 36.5 36.3 36.3 36.3 36.3 36.4 36.4 36.4
7 PSNR(dB) 36.79 37.5 37.74 37.88 36.84 37.53 37.76 37.89 37.97
Size(kB) 36.2 36.5 36.3 36.3 36.3 36.3 36.4 36.4 36.4

Baboon 3 PSNR(dB) 25.92 26.31 26.29 26.19 25.94 26.20 26.29 26.33 26.36
Size(kB) 74.4 74.2 74.2 74.3 74.4 74.4 74.3 74.3 74.2
5 PSNR(dB) 27.06 27.56 27.44 27.40 27.13 27.41 27.50 27.55 27.58
Size(kB) 74.4 74.1 74.2 74.2 74.3 74.2 74.2 74.2 74.1

7 PSNR(dB) 27.18 27.70 27.57 27.53 27.27 27.53 27.62 27.67 27.71
Size(kB) 74.3 74.1 74.1 74.1 74.2 74.2 74.1 74.2 74.1

Pepper 3 PSNR(dB) 30.63 33.87 31.61 31.25 31.48 31.63 31.70 31.75 31.77
Size(kB) 39.9 39.8 40.3 40.4 40.1 40.3 40.3 40.2 40.2
5 PSNR(dB) 34.12 35.83 34.61 34.30 33.98 34.41 34.55 34.60 34.62
Size(kB) 39.5 39.7 39.6 39.6 39.6 39.6 39.7 39.7 39.7
7 PSNR(dB) 34.56 36.26 34.92 34.58 34.25 34.73 34.88 34.93 34.95
Size(kB) 39.5 39.8 39.5 39.6 39.6 39.6 39.6 39.6 39.7

Fig. 2 Plot of PSNR versus threshold levels for the Lenna image, thresholded using different wavelets

IV. CONCLUSION
Thresholding performed by the proposed algorithm gives better PSNR using Coiflet wavelets compared to Daubechies wavelets, while keeping the size of the reconstructed image almost the same. This is due to the unique property of Coiflet wavelets of satisfying the mini-max condition. Hence, it can be concluded that Coiflet wavelets provide the best and most desirable results during multilevel thresholding of an image in the wavelet domain. In future work, the proposed algorithm using Coiflet wavelets can be used for image segmentation, object separation, image compression and image retrieval, because only a few coefficients of the Horizontal, Vertical and Diagonal components represent the entire variation of the image without deteriorating its quality.

REFERENCES
1. R.C. Gonzalez, R.E. Woods, "Digital Image Processing," 2nd ed., Prentice Hall, 2001.
2. M. Sezgin, B. Sankur, "Survey over image thresholding techniques and quantitative performance evaluation," Journal of Electronic Imaging, 13(1) (2004) 146-165.

3. S.G. Mallat, A Wavelet Tour of Signal Processing, New York: Academic, 1999.
4. I. Daubechies, Ten Lectures on Wavelets, Vol. 61 of Proc. CBMS-NSF Regional Conference Series in Applied Mathematics, Philadelphia, PA: SIAM, 1992.
5. J.S. Walker, A Primer on Wavelets and Their Scientific Applications, 2nd ed., Chapman & Hall/CRC Press, Boca Raton, FL, 2008.
6. M. Srivastava, P. Katiyar, Y. Yashu, S.K. Singh, P.K. Panigrahi, "A Fast Statistical Method for Multilevel Thresholding in Wavelet Domain," unpublished.
Analysis of Signals in Fractional Fourier Domain
Ajmer Singh, Student of Lovely Professional University(LPU)-India, Nikesh Bajaj, Asst. Prof., ECE Dept.(LPU)
ajmersingh155-2006@lpu.in, nikesh.14730@lpu.co.in

Abstract- The Fractional Fourier Transform (FRFT) is a generalization of the classic Fourier Transform (FT). When dealing with time-varying signals, the FRFT is an important tool for analysing them. This paper contains results for the variation of basic signals, such as the rectangular pulse, sine wave and Gaussian signal, in the Fractional Fourier Domain (FRFD). The correlation results of the FRFD signal with the time domain (TD) signal, and of the FRFT at the α-domain with the FRFT at the (α-1)-domain, are also shown and discussed. A graphical proof of the scaling property of the FRFT is also given.

Index Terms— FRFT, FRFD, Signal Processing, α-domain, Analysis FRFT, FRFT scaling property, α-domain's correlation.

I. INTRODUCTION
The FT is one of the most frequently used tools in signal analysis. However, the FT is not very suitable for dealing with signals whose frequency changes with time, because of its assumption that the signal is stationary. A generalization of the FT was proposed in [1] by V. Namias and is known as the FRFT. The FRFT can also be stated as performing a spectral rotation of the signal in the time-frequency plane with variation of the α parameter. In recent years, the FRFT has been applied in many areas, such as solving differential equations [2], quantum mechanics [1], optical signal processing [6], time-variant filtering and multiplexing [3]-[5], and swept-frequency filters [6]. Several properties of the FRFT in signal analysis have been summarized in [6].

This paper is divided into the following sections. Section II covers the basic concept of the FRFT and discusses some of its properties. Section III analyses different signals - the rectangular pulse, sine wave and Gaussian signal - and checks the correlation results for these signals in the FRFD. Section IV presents the conclusion of the paper.

II. BASIC CONCEPT OF FRFT
The FRFT with angle parameter α of a signal f(t) is defined as

Fα(u) = ∫ f(t) Kα(t, u) dt

with the kernel

Kα(t, u) = sqrt((1 − j·cot α)/2π) · exp( j·((t² + u²)/2)·cot α − j·t·u·csc α ),  if α is not a multiple of π
Kα(t, u) = δ(t − u),  if α is a multiple of 2π
Kα(t, u) = δ(t + u),  if α + π is a multiple of 2π

Fα(u) is called the α-order FRFT of the signal f(t), where α = Aπ/2 and 'A' is a real number called the order of the FRFT. A lies in the interval [-2, 2] and can be extended to any real number according to A + 4k = A, where k is any integer [..., -3, -2, -1, 0, 1, 2, 3, ...]; A can be any fractional value in the interval [-2, 2].

Some basic properties of the FRFT are:
• Linearity.
• Zero rotation / time domain: when A = 0 or 4 (α = 0 or 2π), the FRFT operator Fα(u) corresponds to the identity operator, F0(u) = f(t), where f(t) is the time-domain signal and F0(u) is the FRFT operator at α = 0.
• The FT is a special case of the FRFT: when A = 1 (α = π/2), the FRFT operator Fα(u) corresponds to the FT, Fπ/2(u) = F(t), where F(t) stands for the Fourier transform of the time-domain signal f(t).
• Flipped operation / time inversion: when A = 2 (α = π), the FRFT operator Fα(u) corresponds to the flip operator, Fπ(u) = f(-t), where f(-t) is the flipped version of the input signal f(t).
• Inverse Fourier domain: when A = 3 (α = 3π/2), the FRFT operator Fα(u) corresponds to the inverse Fourier domain, F3π/2(u) = F(-t), where F(-t) stands for the flipped version of the Fourier transform of the time-domain signal f(t).

The above properties of the FRFT are easily understood from Figure 1.

Figure 1: Time-frequency plane for the FRFT.

In this paper, we use the digital computation method of the FRFT given in [7].

III. ANALYSIS OF DIFFERENT SIGNALS
We always store our information or data in some type of memory space; that set of information or data is known as a signal. There are some basic signals, such as the rectangular pulse, sine wave and Gaussian signal, which are basic to signal processing. In signal processing there are different types of transform techniques which are used to analyse the
frequency spectrum of the signals. Because the frequency domains shows the FRFT results for rectangular pulse at α =
spectrum tells more about the signal behavior as compare to л/10, л/5, 3л/10, and 2л/5.
the time domain representation. Two FRFDs for rectangular pulse at α=0 and α= л/2 are
But the FRFT tell about the signal representations in time ordinary time and frequency domains respectively. By taking
domain and frequency domain while using the different a look on figure 2(a) to 2(e) any one can easily understand
FRFT operator Fα(u), where α = 0; give the time domain the concept that how an rectangular pulse become a sinc
representation and α = л/2; give the frequency domain function in frequency domain, without any mathamaticaly
representation. Also 0 < α < л/2 give the intermediates expression. We can also see how much these domains are
domain which are known as α–domains. These domains are correlated to each other. But not tell the actual value of
not giving any exact information about the time / frequency correlation cofficent. To analysis this in figure 3 there are
component. But gives some mixed information about that. two graphs first one of which tells about the normalized
correlation value of α-domain signal to the time domain
So, in this section we are going to discuss about the
signal and second one tells about the normalized correlation
variation of some signals with variation in α-domain.
value of α-domain to (α-1)-domain. For the better results we
A. Analysis of Rectangular pulse in FRFD take 90 domains at 90 different values of A between 0 < A <
The rectangular pulse (also known as the rectangle 1.
function, rectangular function, gate function, or unit pulse) is In figure 3(a) and 3(b) where the α = 0 correlation
defined as: coefficent has the maxium value is 1. It proof that the FRFT
at α = 0 give the actual time domain signal or no rotation.
)*  %%%%%%%%%$%++ , - But when there is a small change of 1° (one degree) in α
%%%%%%%%%%%%%% %.%%%%%%%%$%++ / - value the correlation coefficent give the minimum value
quite different from time domain signal but still correlate up
And FT of a rectangular function is defined as: to 95%, and so on. In figure 3(b) we can see that when 1° <
α < 45° then the α-domain signal is highly corrrelated to the
01)*2  345%-6-
previous α-domain, an simillar result for 45°< α < 90°.
Correlation of α domain signal to time domain signal
Now, let us discuss an example for rectangular pulse in 1
FRFD and discuss results. In figure 2 we shows the results
1 1.5 0.95
MAX(Correlation)

0.8
1

0.6 0.9
0.5
0.4

0
0.2
0.85
0 -0.5
-30 -20 -10 0 10 20 30 -30 -20 -10 0 10 20 30

(a) A=0/α=0 (b) A=0.2/α=л/10


1.5 1.5 0.8
0 20 40 60 80 100
1 1
value of α in degrees
(a)
0.5 0.5 Correlation of α domain signal to (α -1) domain signal
1
0 0

0.99
-0.5 -0.5
MAX(Correlation)

-30 -20 -10 0 10 20 30 -30 -20 -10 0 10 20 30

(c) A=0.4/α=л/5 (d) A=0.6/α=3л/10 0.98


1.5
1.5

1
1 0.97

0.5
0.5
0.96
0 0

0.95
-0.5
-30 -20 -10 0 10 20 30
-0.5
-30 -20 -10 0 10 20 30 0 20 40 60 80 100
(e) A=0.8/α=2л/5 (f) A=1/α=л/2 value of α in degrees
Figure 2: FRFT of rectangular pulse for different values of angle α/A. (b)
solid line: real part. Dashed line: imaginary part. Figure 3: Correlation results for rectangular pulse.

B. Analysis of Sine wave in FRFD


The sine wave or sinusoid is a mathematical function that describes a smooth repetitive oscillation. It occurs often in pure mathematics, as well as in physics, signal processing,
electrical engineering and many other fields. In its most basic form, as a function of time t, it is defined as

x(t) = M sin(2πft + θ)

where M is the amplitude of the sine wave, f is the frequency, t is time and θ is the phase, which specifies where in its cycle the oscillation begins at t = 0.

Now, let us discuss the results for the sine wave in the FRFD. Figure 4 shows the results for six values of α: figure 4(a), for α = 0, shows the sine wave in the time domain, and figure 4(f), for α = π/2, shows the spectrum of the sine wave, which is an impulse function; the remaining four panels show the FRFT of the sine wave at α = π/10, π/5, 3π/10 and 2π/5. As with the results discussed in section III(A), two of the six α-domains shown in figure 4 are identical to the ordinary time and frequency domains — figures 4(a) and 4(f) respectively — while figures 4(b), 4(c), 4(d) and 4(e) show the FRFT of the sine wave at the intermediate values of α. The correlation of the sine wave in the α-domain with the TD signal, and with the (α-1)-domain signal, is shown in figures 5(a) and 5(b) respectively.

Figure 4: FRFT of the sine wave for different values of angle α/A: (a) A=0/α=0, (b) A=0.2/α=π/10, (c) A=0.4/α=π/5, (d) A=0.6/α=3π/10, (e) A=0.8/α=2π/5, (f) A=1/α=π/2. Solid line: real part; dashed line: imaginary part.

Figure 5(a) makes it clear that for 1° < α < 10° the α-domain signal of the sine wave is somewhat correlated with the TD signal, but for 10° < α < 90° it is not: in these domains the correlation coefficient tends to zero. Figure 5(b) shows the correlation results for the α-domain relative to the (α-1)-domain: for 1° < α < 90° these domains are almost equally correlated with each other, even though they are only weakly correlated with the TD signal.

Figure 5: Correlation results for the sine wave in the α-domain: (a) correlation of the α-domain signal with the time-domain signal; (b) correlation of the α-domain signal with the (α-1)-domain signal.

C. Analysis of Gaussian signal in FRFD

A Gaussian signal has a bell-shaped curve. Gaussian tuning curves are used extensively because their analytical expression can be easily manipulated in mathematical derivations. Mathematically, the Gaussian signal is defined as

x(t) = e^(−t²)

Having discussed the rectangular pulse and the sine wave in sections III(A) and III(B), the third signal of interest is the Gaussian, because Gaussian functions are widely used in statistics, where they describe the normal distribution, in signal processing, where they serve to define Gaussian filters, and in many other applications.

Finally, we compute the FRFT of a Gaussian signal to analyse it in the FRFDs. Figure 6 shows six different FRFDs for the Gaussian signal; two of these domains are again identical to the TD and FD, and the remaining four are intermediate domains between the TD and FD. For a Gaussian signal the Fourier transform is again a Gaussian, so looking at figures 6(a) to 6(f) the variation from TD to FD is easy to follow. Our point of interest is how the FRFD signals are correlated with each other. For this, figure 7 contains two plots, showing the correlation of the α-domain signal with the TD signal and with the (α-1)-domain signal in figures 7(a) and 7(b) respectively.
Figure 6: FRFT of the Gaussian signal for different values of angle α/A: (a) A=0/α=0, (b) A=0.2/α=π/10, (c) A=0.4/α=π/5, (d) A=0.6/α=3π/10, (e) A=0.8/α=2π/5, (f) A=1/α=π/2. Solid line: real part; dashed line: imaginary part.

Figure 7: Correlation results for the Gaussian signal in the α-domain: (a) correlation of the α-domain signal with the time-domain signal; (b) correlation of the α-domain signal with the (α-1)-domain signal.

By analysing these three signals in the FRFDs it is clear that each α-domain signal is highly correlated with the (α-1)-domain signal, as can be seen from figures 3(b), 5(b) and 7(b): over the interval 1° < α < 90° consecutive domains are very similar to each other. Moreover, looking at figures 2(b) to 2(e), 4(b) to 4(e) and 6(b) to 6(e), we can see that each FRFD signal is essentially a scaled version of the previous FRFD.

IV. CONCLUSION

We have discussed the FRFT concept and some of its properties, analysed the behaviour of three different signals in the FRFD, and presented these signals in the FRFD. The correlation of the α-domain signal with the TD signal and with the (α-1)-domain signal has also been discussed; it shows that each α-domain signal is essentially a scaled version of the previous α-domain signal, which graphically demonstrates the scaling property of the FRFT discussed in [6].

The work presented in this paper is helpful for further research, and the graphical demonstration of the scaling property of the FRFT helps in understanding how the FRFT changes a time-domain signal into the frequency-domain signal.

REFERENCES
[1] V. Namias, "The fractional order Fourier transform and its application to quantum mechanics," J. Inst. Math. Applicat., vol. 25, pp. 241–265, 1980.
[2] A. C. McBride and F. H. Kerr, "On Namias' fractional Fourier transforms," IMA J. Appl. Math., vol. 39, pp. 159–175, 1987.
[3] H. M. Ozaktas, B. Barshan, D. Mendlovic, and L. Onural, "Convolution, filtering, and multiplexing in fractional Fourier domains and their relationship to chirp and wavelet transforms," J. Opt. Soc. Amer. A, vol. 11, pp. 547–559, Feb. 1994.
[4] R. G. Dorsch, A. W. Lohmann, Y. Bitran, and D. Mendlovic, "Chirp filtering in the fractional Fourier domain," Appl. Opt., vol. 33, pp. 7599–7602, 1994.
[5] A. W. Lohmann and B. H. Soffer, "Relationships between the Radon–Wigner and fractional Fourier transforms," J. Opt. Soc. Amer. A, vol. 11, pp. 1798–1801, June 1994.
[6] L. B. Almeida, "The fractional Fourier transform and time-frequency representation," IEEE Trans. Signal Processing, vol. 42, pp. 3084–3091, Nov. 1994.
[7] H. M. Ozaktas, O. Arikan, M. A. Kutay and G. Bozdag, "Digital computation of the fractional Fourier transform," IEEE Trans. Signal Processing, vol. 44, pp. 2141–2150, Sept. 1996.

Ajmer Singh (M'22) was born in Punjab, India. He is pursuing the master's degree in signal processing at Lovely Professional University, Punjab, India, in 2011. Currently, he is doing his dissertation under the supervision of Mr. Nikesh Bajaj, assistant professor in the electronics department. His research interests include different aspects of FRFD filter design.

Nikesh Bajaj received his bachelor's degree in Electronics & Telecommunication from the Institute of Electronics and Telecommunication Engineers, and his master's degree in Communication & Information Systems from Aligarh Muslim University, India. He is currently working at LPU as Assistant Professor in the Department of ECE. His research interests include Cryptography, Cryptanalysis, and Signal & Image Processing.
Parzen-Cos6 (πt) combinational window family based QMF bank
Narendra Singh (*) and Rajiv Saxena,
Jaypee University of Engineering and Technology, Raghogarh, Guna (MP)
(*) Corresponding Author: narendra_biet@rediffmail.com ; narendra.singh@jiet.ac.in

ABSTRACT

A new approach for the design of the prototype FIR filter of a two-channel Quadrature Mirror Filter (QMF) bank is introduced. Three variable windows, viz. the Blackman window, the Kaiser window, and the Parzen-cos6(πt) (PC6) window, are used to design the prototype filters. The design equations of the filter banks based on these window functions are also given in this article. The reconstruction error, which is used as the objective function, is minimized by optimizing the cutoff frequency of the designed prototype filters. A gradient based iterative optimization algorithm is used. The performances of the filter banks designed with these window functions are compared on the basis of the reconstruction error. The combinational PC6 window provides the QMF bank with the better reconstruction error.

Keywords: QMF, Filter Bank, Combinational Window.

1. INTRODUCTION

Window functions are widely used in digital signal processing for applications in signal analysis and estimation, digital filter design and speech processing. Digital filter banks are used in a number of communication applications. The theory and design of the QMF bank was first introduced by Johnston [1]. These filter banks find wide application in many signal processing fields such as trans-multiplexers [2]-[3], equalization of wireless communication channels [4], sub-band coding of speech and image signals [5]-[8], and sub-band acoustic echo cancellation [9]-[12].

In a QMF bank the input signal x(n) is split into two sub-band signals of equal bandwidth using the low pass and high pass analysis filters H0(z) and H1(z) respectively. These sub-band signals are down sampled by a factor of two to reduce processing complexity. At the output, the corresponding synthesis bank has a two-fold interpolator for both sub-band signals, followed by the synthesis filters G0(z) and G1(z). The outputs of the synthesis filters are combined to obtain the reconstructed signal y(n). This reconstructed signal is not a perfect replica of the input signal x(n), due to three types of errors: aliasing error, amplitude error and phase error [12]-[13]. Since the inception of QMF banks most researchers have put the main stress on the elimination or minimization of these errors so as to obtain near perfect reconstruction (NPR). In several design methods [14]-[18] aliasing and phase distortion have been eliminated completely by deriving all the analysis and synthesis FIR linear phase filters from a single low pass prototype even order symmetric FIR linear phase filter. Amplitude distortion cannot be eliminated completely, but it can be minimized using optimization techniques [12]-[13]. Figure-1 shows the two-channel quadrature mirror filter bank designed by Johnston [1], in which a Hanning window was used to design the low pass prototype FIR filter and a nonlinear optimization technique was employed to minimize the reconstruction error.

This paper uses the algorithm proposed by Creusere and Mitra [11], with certain modifications, to optimize the objective function. The combinational window functions [19]-[21] with large SLFOR have been devised and used for designing the FIR prototype filters. Due to the closed form expressions of the window functions, the optimization procedure is simplified. Finally, a comparative evaluation has been carried out with the reconstruction error and the far-end attenuation selected as the main figures of merit.
Fig. 1 Two-channel quadrature mirror filter bank

2. FILTER DESIGN USING WINDOW TECHNIQUE

The most straightforward technique for designing FIR filters is to use a window function to truncate and smooth the impulse response of an ideal zero-phase infinite-impulse-response filter. This impulse response can be obtained by using the Fourier series expansion. The impulse response of the ideal low pass filter with cutoff frequency ωc is given as

hid(n) = sin(ωc n) / (πn),   −∞ < n < ∞        (1)

hid(n) is doubly infinite, not absolutely summable and therefore unrealizable [15]. Hence the shifted impulse response of hid(n) will be

hid(n) = sin(ωc (n − 0.5N)) / (π(n − 0.5N)),   0 ≤ n ≤ N−1        (2)

For a causal filter, direct truncation of the infinite-duration impulse response of a filter results in large pass band and stop band ripples near the transition band. These undesired effects are the well known Gibbs phenomenon. However, these effects can be significantly reduced by an appropriate choice of smoothing function w(n). Hence, a filter p(n) of order N is of the form [15]-[17]

p(n) = hid(n) · w(n)        (3)

where w(n) is the time domain weighting function, or window function. Window functions are of limited duration in the time domain, while approximating a band limited function in the frequency domain.

3. WINDOW FUNCTIONS

The window functions used in designing the prototype FIR filter for the QMF banks are given in Table-1. Table-1 includes the expressions of the variable window functions, the expressions of the variables (β, γ: which define the window families) and the expressions of the window shape parameter (D) of the Kaiser, PC6 and Blackman windows. The filter designed using one of the above window functions is specified by three parameters: cut-off frequency (fc), filter order (N), and window shape parameter. For a desired stop band attenuation (ATT) and transition bandwidth, the order of the filter (N) can be estimated by

N = D / ΔFs + 1        (4)

where D is the window shape parameter, ΔFs is the normalized transition width = (fs − fp)/Fs, and Fs is the sampling frequency in Hertz. The window shape parameter can be determined from the desired stop band attenuation.
Table 1: Window Functions with Filter Design Equations

1. Blackman window
   Window function: w(n) = 0.42 + 0.5 cos(πn/M) + 0.08 cos(2πn/M), for −M ≤ n ≤ M

2. Kaiser window
   Window function: w(n) = I0(β √(1 − (n/M)²)) / I0(β), for −M ≤ n ≤ M
   Window variable β:
     β = 0, for ATT < 21
     β = 0.5842(ATT − 21)^0.4 + 0.07886(ATT − 21), for 21 ≤ ATT ≤ 50
     β = 0.1102(ATT − 8.7), for ATT > 50
   Window shape parameter D:
     D = 0.9222, for ATT < 21
     D = (ATT − 7.95)/14.36, for ATT ≥ 21

3. Parzen-cos6(πt) combinational (PC6) window
   Window function: w(n) = l(n)·d(n) for |n| ≤ N/2, and 0 for |n| > N/2, where
     l(n) = 1 − 24(n/N)²(1 − 2|n|/N), for |n| ≤ N/4
     l(n) = 2(1 − 2|n|/N)³, for N/4 < |n| ≤ N/2
     d(n) = cos⁶(πn/N), for |n| ≤ N/2
   Window variable γ = a + b·ATT + c·ATT², with
     a = 8.15414, b = −0.236709, c = 0.00218617, for 30.32 ≤ ATT ≤ 51.25
     a = 21.269, b = −0.605789, c = 0.00434808, for 51.25 < ATT ≤ 68.69
   Window shape parameter D = a + b·ATT + c·ATT², with
     a = 1.82892, b = −0.027548, c = 0.00157699, for 30.32 ≤ ATT ≤ 43.60
     a = 1.67702, b = 0.0450505, c = 0.00000, for 43.60 < ATT ≤ 49.44
     a = 85.4733, b = −3.419690, c = 0.03578400, for 49.44 < ATT ≤ 57.48
     a = −8.60006, b = 0.477004, c = −0.00355655, for 57.48 < ATT ≤ 68.69
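To make the design flow of equations (1)-(4) and Table 1 concrete, the short Python sketch below designs a Kaiser-window low pass prototype for a given stop-band attenuation and normalized transition width. It is an illustration under the stated assumptions (numpy's np.kaiser window and the standard Kaiser design formulas of Table 1), not the code used to generate the results of this paper.

```python
import numpy as np

def kaiser_prototype(att_db, delta_f, fc):
    """Window-method low pass prototype, eqs (1)-(4).

    att_db  : desired stop-band attenuation (dB)
    delta_f : normalized transition width (fs - fp) / Fs
    fc      : normalized cutoff frequency in cycles/sample (0 < fc < 0.5)
    """
    # Kaiser shape parameters beta and D from the desired attenuation (Table 1)
    if att_db > 50:
        beta = 0.1102 * (att_db - 8.7)
    elif att_db >= 21:
        beta = 0.5842 * (att_db - 21) ** 0.4 + 0.07886 * (att_db - 21)
    else:
        beta = 0.0
    D = 0.9222 if att_db < 21 else (att_db - 7.95) / 14.36

    # Filter length from eq (4); an even number of taps is assumed by the QMF bank
    N = int(np.ceil(D / delta_f + 1))
    N += N % 2

    # Shifted ideal low pass impulse response, eq (2), written with sinc to avoid 0/0
    n = np.arange(N)
    hid = 2 * fc * np.sinc(2 * fc * (n - 0.5 * (N - 1)))

    return hid * np.kaiser(N, beta)              # windowed prototype, eq (3)

# Example: 50 dB attenuation, pass-band edge Fs/6, stop-band edge Fs/4
h = kaiser_prototype(att_db=50, delta_f=1/4 - 1/6, fc=0.25)
print(len(h))
```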
4. OPTIMIZATION ALGORITHM

The amplitude distortion in the reconstructed signal can be minimized by optimization techniques. The gradient based iterative optimization algorithm is described in this section.

a. Objective Function

To get a high-quality reconstructed output y(n), the frequency response of the low pass prototype filter, H(e^{j2πf}), must satisfy the following [13]:

|H(e^{j2πf})|² + |H(e^{j2π(f − Fs/2)})|² = 1, for 0 ≤ f ≤ Fs/4        (5)

|H(e^{j2πf})| = 0, for f ≥ Fs/4        (6)

by assuming that the filters have an even number of coefficients. By satisfying (5) exactly, the aliasing error between nonadjacent bands is eliminated. Similarly, the amplitude distortion is eliminated by satisfying (6) [11]. Phase distortion is removed by selecting an even-order FIR prototype filter [1, 4]. Constraints (5) and (6) cannot be satisfied exactly for a finite length filter, so it is necessary to design a filter which approximately satisfies (5) and (6). Johnston [1] combined the pass band ripple energy and the out-of-band energy into a single nonlinear cost function and then minimized it using the Hooke and Jeaves algorithm [25]. Creusere and Mitra [11] designed filters using the Parks–McClellan algorithm that approximately satisfied (5) and (6); the filter length, relative error weighting, and stop band edge were fixed before the optimization procedure started, while the pass-band edge was adjusted to minimize the objective function

ε = max over 0 ≤ f ≤ Fs/4 of | |H(e^{j2πf})|² + |H(e^{j2π(f − Fs/2)})|² − 1 |        (7)

b. Algorithm

A gradient based linear optimization algorithm (given in Annexure-1) is used to adjust the cutoff frequency. Filter design parameters and optimization control parameters like step size (step), target error (t-error), direction (dir) and previous error (prev-error) are initialized, and the prototype filter is designed using the windowing technique. In each iteration, fc of p(n) and the reconstruction error (error), which is also the objective function, are computed. If the error increases in comparison to the previous iteration (prev-error), the step size (step) is halved and the search direction (dir) is reversed. This step size and direction are used to re-compute fc for a new prototype filter. The optimization process is halted when the error of the current iteration is within the specified tolerance (t-error), which is initialized before the optimization process begins, or when prev-error equals error [26].

5. PERFORMANCE ANALYSIS OF QMF

QMF banks were designed by using the window functions described in Table-1 and the optimization algorithm in Annexure-1. In these design examples the stop-band edge frequency and pass-band edge frequency are taken as Fs/4 and Fs/6 respectively. In Table 2, the value of the stop band attenuation was kept at 50 dB, resulting in different filter orders for the different window functions, which clearly indicates that the improvement in reconstruction error is obtained with the PC6 window function. In Table-3, the results corresponding to filter order are shown. In Table-4, a comparison is made of the optimum performance that can be attained with the three window functions. Apart from the reconstruction error, the far-end attenuation (the amplitude of the last ripple in the stop band) is also selected as one of the figures of merit for the comparative study. This parameter is of significance when the signal to be filtered has a great concentration of spectral energy. In sub-band coding, the filter is intended to separate out various frequency bands for independent processing; in the case of speech, for example, the far-end rejection of the energy in the stop band should be high so that the energy leak from one band to the other is minimum. As the stop band attenuation increases, the value of the reconstruction error decreases, as is evident from Table-2 and Table-3. The PC6 window-designed FIR filter gives better performance as compared to the other window functions.
Table 2: Performance of QMF filter at 50 dB stop-band attenuation

Window function | Reconstruction error (dB) | Filter order (N) | Far-end attenuation (dB)
Blackman window | 0.6509 | 105 | 85
Kaiser window   | 0.3208 | 90  | 107
PC6 window      | 0.1060 | 22  | 72

Table 3: Optimum performance in terms of order

Window function | Reconstruction error (dB) | Stop-band attenuation (dB) | Filter order (N) | Far-end attenuation (dB)
Blackman window | 0.0049 | 108   | 86 | 102
Kaiser window   | 0.0097 | 88.00 | 90 | 107
PC6 window      | 0.0120 | 55.00 | 22 | 72

Table 4: Performance in terms of far-end attenuation

Window function | Reconstruction error (dB) | Stop-band attenuation (dB) | Filter order (N) | Far-end attenuation (dB)
Blackman window | 0.0785 | 52.168 | 45 | 56
Kaiser window   | 0.0473 | 52.168 | 37 | 66
PC6 window      | 0.0290 | 52.168 | 73 | 73

5. CONCLUSION

A simple algorithm for designing the low pass prototype filters of QMF banks has been used to optimize the reconstruction error by varying the filter cut-off frequency. Prototype filters designed using the high-SLFOR combinational window, the Kaiser window and the Blackman window functions have been compared. The combinational window functions provide better far-end rejection of the stop-band energy. This feature helps to reduce the aliasing energy leaking into a sub-band from the signal in the other sub-band.

References
1. Johnston, J. D.: A filter family designed for use in quadrature mirror filter banks. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Denver, 291–294 (1980)
2. Bellanger, M. G., Daguet, J. L.: TDM-FDM transmultiplexer: digital polyphase and FFT. IEEE Trans. Commun. 22(9), 1199–1204 (1974)
3. Vetterli, M.: Perfect transmultiplexers. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 4, 2567–2570 (1986)
4. Gu, G., Badran, E. F.: Optimal design for channel equalization via the filter bank approach. IEEE Trans. Signal Process. 52(2), 536–544 (2004)
5. Esteban, D., Galand, C.: Application of quadrature mirror filters to split band voice coding schemes. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ASSP), 191–195 (1977)
6. Crochiere, R. E.: Sub-band coding. Bell Syst. Tech. J., 9, 1633–1654 (1981)
7. Vetterli, M.: Multidimensional sub-band coding: some theory and algorithms. Signal Process. 6, 97–112 (1984)
8. Woods, J. W., O'Neil, S. D.: Sub-band coding of images. IEEE Trans. Acoust. Speech Signal Process. (ASSP)-34(10), 1278–1288 (1986)
9. Liu, Q. G., Champagne, B., Ho, D. K. C.: Simple design of oversampled uniform DFT filter banks with application to sub-band acoustic echo cancellation. Signal Process. 80(5), 831–847 (2000)
10. Crochiere, R. E., Rabiner, L. R.: Multirate Digital Signal Processing. Prentice-Hall (1983)
11. Creusere, C. D., Mitra, S. K.: A simple method for designing high-quality prototype filters for M-band pseudo QMF banks. IEEE Trans. Signal Process. 43(4), 1005–1007 (1995)
12. Mitra, S. K.: Digital Signal Processing: A Computer Based Approach. TMH, ch. 7 & 10 (2001)
13. Vaidyanathan, P. P.: Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ (1993)
14. Jain, V. K., Crochiere, R. E.: Quadrature mirror filter design in the time domain. IEEE Trans. Acoust. Speech Signal Process. ASSP-32(4), 353–361 (1984)
15. Xu, H., Lu, W. S., Antoniou, A.: An improved method for the design of FIR quadrature mirror image filter banks. IEEE Trans. Signal Process. 46(6), 1275–1281 (1998)
16. Goh, C. K., Lim, Y. C.: An efficient algorithm to design weighted minimax PR QMF banks. IEEE Trans. Signal Process. 47(12), 3303–3314 (1999)
17. Chen, C. K., Lee, J. H.: Design of quadrature mirror filters with linear phase in the frequency domain. IEEE Trans. Circuits Syst. 39(9), 593–605 (1992)
18. Lin, Y.-P., Vaidyanathan, P. P.: A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks. IEEE Signal Processing Lett. 5, 132–134 (1998)
19. Saxena, R.: Synthesis and characterization of new window families with their applications. Ph.D. Thesis, Electronics and Computer Engineering Department, University of Roorkee, Roorkee, India (1997)
20. Sharma, S. N., Saxena, R., Jain, A.: FIR digital filter design with Parzen and cos6(πt) combinational window family. Proc. Int. Conf. Signal Processing, Beijing, China, IEEE Press, 92–95 (2002)
21. Sharma, S. N., Saxena, R., Saxena, S. C.: Design of FIR filters using variable window families – a comparative study. J. Indian Inst. Sci. 84, 155–161 (2004)
22. DeFatta, D. J., Lucas, J. G., Hodgkiss, W. S.: Digital Signal Processing: A System Design Approach. Wiley (1988)
23. Gautam, J. K., Kumar, A., Saxena, S. C.: WINDOWS: a tool in signal processing. IETE Tech. Rev. 12(3), 217–226 (1995)
24. Diniz, P. S. R., da Silva, E. A. B., Netto, S. L.: Digital Signal Processing: System Analysis and Design. Cambridge University Press (2003)
25. Hooke, R., Jeaves, T.: Direct search solution of numerical and statistical problems. J. Assoc. Comput. Mach. 8, 212–229 (1961)
26. Jain, A., Saxena, R., Saxena, S. C.: An improved and simplified design of cosine modulated pseudo-QMF filter banks. Digit. Signal Process. 16(3), 225–232 (2006)
Annexure 1

Flowchart for gradient based optimization technique (rendered as steps):

1. Specify the desired stop-band and pass-band ripple.
2. Initialize: pass-band and stop-band frequencies, m-error, step, dir, cut-off frequency (ωc), filter order, and window coefficients.
3. Design the prototype filter and determine the reconstruction error (|error|).
4. If |error| ≤ |m-error| or |prev-error| = |error|, stop.
5. Otherwise set |prev-error| = |error| and ωc* = ωc + (step × dir).
6. Redesign the prototype filter using ωc* and determine the reconstruction error (|error|).
7. If |error| > |m-error|, set step = step/2 and dir = −dir.
8. Return to step 4.
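As a hedged illustration of this search (a sketch, not the authors' implementation), the Python fragment below adjusts the cutoff frequency of a window-designed prototype, using the peak deviation from the power-complementarity condition of eq (7) as the reconstruction-error objective. The inline prototype designer, tap count and Kaiser parameter are illustrative assumptions only.

```python
import numpy as np

def design_prototype(fc, num_taps=38, beta=4.55):
    """Window-method low pass prototype (Kaiser window); fc in cycles/sample."""
    n = np.arange(num_taps)
    hid = 2 * fc * np.sinc(2 * fc * (n - 0.5 * (num_taps - 1)))
    return hid * np.kaiser(num_taps, beta)

def recon_error(h, grid=1024):
    """Peak deviation from |H(w)|^2 + |H(w - pi)|^2 = 1 over 0 <= w <= pi/2, cf. eq (7)."""
    w = np.linspace(0, np.pi / 2, grid)
    n = np.arange(len(h))
    H  = np.abs(np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])) ** 2
    Hs = np.abs(np.array([np.sum(h * np.exp(-1j * (wk - np.pi) * n)) for wk in w])) ** 2
    return np.max(np.abs(H + Hs - 1.0))

# Search in the spirit of Annexure 1: halve the step and reverse the direction
# whenever the error gets worse, stop when it is small enough or the step stalls.
fc, step, direction = 0.25, 0.01, -1.0
prev_error = recon_error(design_prototype(fc))
for _ in range(200):
    fc_new = fc + step * direction
    error = recon_error(design_prototype(fc_new))
    if error > prev_error:          # worse: shrink the step and reverse direction
        step *= 0.5
        direction = -direction
    else:                           # better: accept the new cutoff frequency
        fc, prev_error = fc_new, error
    if prev_error < 1e-3 or step < 1e-8:
        break
print(fc, prev_error)
```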
Performance Analysis of Sub Carrier Spacing Offset in
Orthogonal Frequency Division Multiplexing System
Shivaji Sinha, Member IETE, Rachna Bhati, Dinesh Chandra, Member IEEE & IETE
email:shivaji2006@gmail.com, dinesshc@yahoo.co.in, rachna.bhati1988@gmail.com
Department of Electronics & Communication Engineering, JSSATE Noida

Abstract — A very important aspect of OFDM is time and frequency synchronization. In particular, frequency synchronization is the basis of the orthogonality between subcarriers. Loss of frequency synchronization is caused by Doppler shift, because of the large number of closely spaced frequencies in an OFDM frame, so intersymbol interference (ISI) and Inter Carrier Interference (ICI) are also produced. This paper presents the effects of frequency offset error in an OFDM system introduced by the fading-sensitive channel. The performance of the OFDM system is evaluated using the r.m.s. value of the error across all subcarriers for different values of the subcarrier spacing, the SNR degradation, and the received signal constellation, in a Matlab environment. The performance is compared under various conditions of noise variance and frequency offset.

Index Terms — Cyclic Prefix, FFT, Frequency Offset, ICI, IFFT, OFDM, SNR

I. INTRODUCTION

High data rate transmission is one of the major challenges in modern communications. OFDM, which is seen as the future technology for wireless local area systems and is used as part of the IEEE 802.11a standard, provides high data rate transmission [1]. The need for the OFDM (Orthogonal Frequency Division Multiplexing) system came from the idea of efficient use of spectrum as well as bandwidth, where the data transmission becomes several times faster than before. OFDM supports technologies like DAB (Digital Audio Broadcasting) and DVB (Digital Video Broadcasting). It is a special case of multicarrier transmission, where a single data stream is transmitted over a number of lower rate subcarriers. All the subcarriers within the OFDM signal are time and frequency synchronized to each other, allowing the interference between subcarriers to be carefully controlled [2][3]. In systems based on the IEEE 802.11a standard, the Doppler effects are negligible when compared to the frequency spacing of more than 300 kHz. What is more important in this situation is the frequency error caused by imperfections in the oscillators at the modulator and the demodulator. These frequency errors cause a frequency offset comparable to the frequency spacing, thus lowering the overall SNR [3].

II. OFDM SYSTEM IMPLEMENTATION

In OFDM, a frequency-selective channel is subdivided into narrower flat fading channels. Although the frequency responses of the channels overlap with each other as shown in Figure 1, the impulse responses are orthogonal at the carriers, because the nulls of each impulse response coincide with the maximum values of another impulse response, and thus the channels can be separated [3].

Fig 1. Orthogonality Principle

In OFDM the data are transmitted in blocks of length N. The nth data block {Xn[0], ..., Xn[N−1]} is transformed into the signal block {xn[0], ..., xn[N−1]} by the IFFT as given by

xn[m] = (1/N) Σ_{k=0}^{N−1} Xn[k] e^{j2πkm/N},   m = 0, ..., N−1        (1)

Each frequency 2πk/N, k = 0, ..., N−1, represents a carrier.

A basic OFDM implementation scheme is shown in Figure 2. Data at each sub-carrier (Xm) are input into the inverse fast Fourier transform (IFFT) to be converted to time-domain data (xm) and, after parallel to serial conversion (P/S), a cyclic prefix is added to
prevent ISI. At the receiver, the cyclic prefix is removed, because it contains no information symbols. After the serial-to-parallel (S/P) conversion, the received data in the time domain (ym) are converted to the frequency domain (Ym) using the fast Fourier transform (FFT) algorithm.

Fig 2. OFDM System
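As a rough, hedged sketch of the transmit/receive chain just described (not the authors' Matlab code), the Python fragment below maps one block of symbols through the IFFT of equation (1), prepends a cyclic prefix, and recovers the symbols with an FFT after removing the prefix; the block length and prefix length are illustrative values only.

```python
import numpy as np

N, CP = 64, 16                                  # illustrative FFT size and cyclic-prefix length
rng = np.random.default_rng(0)

# One block of BPSK symbols on every subcarrier
X = rng.choice([-1.0, 1.0], size=N)

# Equation (1): the IFFT maps the data block to the time-domain signal block
x = np.fft.ifft(X)                              # numpy's ifft already includes the 1/N factor

# Add the cyclic prefix (last CP samples copied to the front) to absorb ISI
tx = np.concatenate([x[-CP:], x])

# Receiver: strip the prefix and return to the frequency domain with the FFT
rx = tx[CP:]
X_hat = np.fft.fft(rx)

print(np.allclose(X_hat, X))                    # True for an ideal, offset-free channel
```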
III. FREQUENCY OFFSET & FREQUENCY SYNCHRONIZATION ALGORITHM

The first source of frequency offset is relative motion between the transmitter and the receiver (Doppler shift or frequency drift), given by

Δf = (v / c) · fc        (2)

where fc is the carrier frequency, v is the relative velocity between transmitter and receiver, and c is the speed of light. The second source is frequency error in the oscillators. Single-carrier systems are more sensitive to timing offset errors, while OFDM generally exhibits good performance in the presence of timing errors. In practice the frequency, which is the time derivative of the phase, is never perfectly constant, thereby causing ICI in OFDM receivers. One of the destructive effects of frequency offset is loss of orthogonality, which causes the ICI shown in Figure 3.

Fig 3. ICI in OFDM

The areas coloured yellow in Figure 3 show the ICI. When the centers of adjacent subcarriers are shifted because of the frequency offset, the nulls of the adjacent subcarriers are also shifted from the center of the other subcarrier. The received signal then contains samples from this shifted subcarrier, leading to ICI [6]. The destructive effects of the frequency offset can be corrected by estimating the frequency offset itself and applying a proper correction, which calls for the development of a frequency synchronization algorithm. Three types of algorithms are used for frequency synchronization: algorithms that use pilot tones for estimation (data-aided), algorithms that process the data at the receiver (blind), and algorithms that use the cyclic prefix for estimation [4][5].

Among these algorithms, blind techniques are attractive because they do not waste bandwidth to transmit pilot tones. However, they use less information, at the expense of added complexity and degraded performance [6]. The degradation of the SNR, Dfreq, caused by the frequency offset is approximated as

Dfreq ≈ (10 / (3 ln 10)) · (π · Δf · T)² · (Eb/N0)        (3)

where Δf is the frequency offset, T is the symbol duration in seconds, Eb is the energy per bit of the OFDM signal and N0 is the one-sided noise power spectral density (PSD) [6][7].

IV. SIMULATION PARAMETERS

First we analyze the impact of the frequency offset, which results in Inter Carrier Interference (ICI), while receiving an OFDM modulated symbol. The analysis is accompanied by a Matlab simulation.

TABLE 1: R.M.S. ERROR RELATED PARAMETERS

PARAMETERS | VALUES
FFT Size | 64
No. of Data Subcarriers | 52
No. of bits per OFDM symbol | 52
No. of symbols | 1
Modulation Scheme | BPSK
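For illustration (a sketch based on the approximation in (3) as reconstructed above, not the authors' script), the following Python lines evaluate the SNR degradation for a few relative frequency offsets at an 802.11a-like subcarrier spacing.

```python
import numpy as np

def snr_degradation_db(rel_offset, ebno_db, subcarrier_spacing=312.5e3):
    """Approximate SNR degradation (dB) caused by a carrier frequency offset, eq (3).

    rel_offset : frequency offset as a fraction of the subcarrier spacing
    ebno_db    : Eb/N0 in dB
    """
    T = 1.0 / subcarrier_spacing                 # useful symbol duration (IFFT/FFT period)
    df = rel_offset * subcarrier_spacing         # absolute offset in Hz
    ebno = 10 ** (ebno_db / 10)
    return (10 / (3 * np.log(10))) * (np.pi * df * T) ** 2 * ebno

for rel in (0.001, 0.003, 0.005):
    print(rel, snr_degradation_db(rel, ebno_db=17))
```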
We have generated an OFDM symbol with all subcarriers BPSK modulated, then added a frequency offset with Gaussian noise of unit variance and zero mean to result in Eb/N0 = 30 dB. We find the difference between the desired and actual constellations and compute the r.m.s. value of the error across all subcarriers. This is repeated for different values of the frequency offset. The parameters are listed in Table 1.

The parameters taken for the SNR degradation calculation and the received signal calculation are listed in Table 2.

TABLE 2: OFDM TIMING RELATED PARAMETERS

PARAMETERS | VALUES
No. of Subcarriers | 48
No. of Pilot Carriers | 4
Total number of subcarriers | 52
Subcarrier frequency spacing | 0.3125 MHz
IFFT/FFT period | 3.2 µs (1/Δf)
Preamble duration | 16 µs
Signal duration, BPSK-OFDM symbol | 4 µs (TGI + TFFT)
Guard interval (GI) duration | 0.8 µs (TFFT/4)
Modulation Scheme | QPSK

V. RESULTS ANALYSIS

In Figures 4 and 5 we calculate the SNR loss for different values of the subcarrier spacing. The simulated results are slightly better than the theoretical results because the simulated results are computed using the average error over all subcarriers (and the subcarriers at the edge undergo lower distortion). From Figure 5, for Eb/N0 = 30 dB, the theoretical and simulated results overlap at zero frequency offset for a −30 dB r.m.s. error.

Fig 4. Error magnitude with frequency offset at Eb/N0 = 20 dB (theory and simulation).

Fig 5. Error magnitude with frequency offset at Eb/N0 = 30 dB (theory and simulation).

Fig 6. SNR degradation due to frequency offset for different Eb/N0 values (17 dB, 15 dB, 10 dB, 5 dB).
3
Figure 6 shows the calculated degradation of the SNR due to the frequency offset. For smaller SNR values, the degradation is less than for bigger SNR values, as shown in Figure 6. In order to study the SNR degradation in OFDM systems we examined the received signal with no frequency offset. In this case, the data were sent by two of the carriers: we generated 512 random QPSK signals as data and sent them using only two of the subcarriers, while the other subcarriers carried no data. Figure 7 shows that for no frequency offset and no noise variance (the ideal condition), there is no ICI and no interference between the data and the other, zero-valued subcarriers.

Fig 7. Received signal constellation with 0% frequency offset

When a 0.3% frequency offset and a 0.002 noise variance are introduced on the carrier, their effects are observed in terms of ICI. The result with a 0.3% frequency offset is shown in Figure 8. In particular, we can see that the signal from neighbouring carriers causes interference and we obtain a distorted signal constellation at the receiver.

Fig 8. Received signal constellation with 0.3% frequency offset

Comparing Figure 9 with Figure 8, it can be seen that the received signal with a 0.5% frequency offset, for the same 0.002 noise variance, is more distorted than the received signal with a 0.3% frequency offset. The simulation results reveal that the distortion in the received signal increases. The effects of the frequency offset can also be observed when data are sent on every subcarrier except one, which is set to zero, as shown in figures 9 and 10.

Fig 9. Received signal constellation with 0.5% frequency offset

Fig 10. Received signal at the zero subcarrier with no frequency offset

If there is a frequency offset in the channel, we cannot receive a zero (no data) at the subcarrier that was set to zero. Figure 10 shows the ICI on the zero-data subcarrier from all the other subcarriers when there is no frequency offset. In the ideal case of no frequency offset, the demodulated value should be zero for the whole time. When frequency offset is present, the effect is like random noise which increases with the frequency
offset. As shown in Figure 11, the effect of ICI increases considerably when the frequency offset is on the order of 0.4% – 0.6%.

When we compare the results in Figures 10 and 11, we can see that as the frequency offset value increases the received signal is distorted more, and for frequency offset values bigger than 0.6% the received data are unreadable.

Fig 11. Received signal at the zero subcarrier with 0.4% and 0.6% frequency offset

VI. CONCLUSION

Simulation results demonstrated the distortive effects of frequency offset on OFDM signals; frequency offset affects symbol groups equally. Additionally, it was seen that an increase in frequency offset resulted in a corresponding increase in these distortive effects and caused degradation in the SNR of individual OFDM symbols.

VII. FUTURE WORK

For the system developed above we can implement three methods for frequency offset estimation: data-driven, blind and semi-blind. The data-driven and semi-blind methods rely on the repetition of data, while the blind technique determines the frequency offset from the QPSK data. The use of preambles and the cyclic prefix in frequency offset estimation can also be implemented.

VIII. REFERENCES

[1] Md. Amir Ali Hasan, Faiza Nabita, Imtiaz Ahmed and Amith Khandakar, "Analytical evaluation of timing offset error in OFDM system," in 2010 Second International Conference on Communication Software and Networks.
[2] R. van Nee and R. Prasad, OFDM for Wireless Multimedia Communications, Artech House Universal Personal Communications, Norwood, MA, 2000.
[3] A. Y. Erdogan, "Analysis of the Effects of Frequency Offset in OFDM Systems," Master's Thesis, Naval Postgraduate School, Monterey, California, 2004.
[4] B. McNair, L. J. Cimini and N. Sollenberger, "A Robust Timing and Frequency Offset Estimation Scheme for OFDM Systems," AT&T Labs-Research, New Jersey, 2000.
[5] J.-J. van de Beek, M. Sandell and P. O. Börjesson, "ML estimation of time and frequency offset in OFDM systems," IEEE Transactions on Signal Processing, vol. 45, no. 7, pp. 1800–1805, July 1997.
[6] P. H. Moose, "A technique for orthogonal frequency division multiplexing frequency-offset correction," IEEE Trans. on Commun., vol. 42, no. 10, pp. 2908–2914, Oct. 1994.
[7] Ersoy Oz, "A Comparison of Timing Methods in Orthogonal Frequency Division Multiplexing (OFDM) Systems," Master's thesis, Naval Postgraduate School, Monterey, California, 2004.

IX. AUTHOR'S BIOGRAPHY

1. Shivaji Sinha has been an Asst. Prof. at J.S.S. Academy of Technical Education, Noida, since Oct. 2003. He is a member of IETE. He did his B.Tech at G.B. Pant Engg. College, Pauri Garhwal, in Electronics & Communication Engineering and his M.Tech in VLSI Design at U.P. Technical University.

2. Rachna Bhati is a final-year B.Tech student at JSS Academy of Technical Education.

3. Dinesh Chandra has been Head & Professor in the Department of Electronics & Communication Engineering, J.S.S. Academy of Technical Education, Noida, since April 2001. He is a Fellow Member of IETE and a Member of IEEE. He did his B.Tech at the University of Roorkee (I.I.T. Roorkee) in Electrical Engineering and his M.Tech at I.I.T. Kharagpur in Microwave & Optical Communication Engineering in 1987. He is also Coordinator of the M.Tech Program of G.B. Technical University and a Member of the Board of Studies (BOS) of G.B. Technical University for revision of the syllabus for Electronics & Communication and Instrumentation & Control Engineering.

A Comparative Analysis of ECG Data Compression Techniques


Sugandha Agarwal
Amity School of Engineering and Technology
Amity University Uttar Pradesh,
Lucknow
Sugandhaa7@gmail.com

Abstract

Computerized electrocardiogram (ECG), electroencephalogram (EEG), and magnetoencephalogram (MEG) processing systems have been widely used in clinical practice, and they are capable of recording and processing long records of biomedical signals. The need to send electrocardiogram records over telephone lines for remote analysis is increasing, and so the need for effective electrocardiogram compression techniques is great. The aim of any biomedical signal compression scheme is to minimize the storage space without losing any clinically significant information, which can be achieved by eliminating redundancies in the signal in a reasonable manner. Algorithms that produce better compression ratios with less loss of data are needed. Various data compression techniques have been proposed for reducing the digital ECG volume for storage and transmission. Due to the diverse procedures that have been employed, comparison of ECG compression methods is a major problem. The main purpose of this paper is to address various ECG compression algorithms and determine which would be more efficient. ECG data compression techniques are broadly divided into two major groups: direct data compression and transformation methods. Direct data reduction techniques are: turning point, AZTEC, CORTES, DPCM and entropy coding, fan and SAPA, peak-picking and cycle-to-cycle compression. The transformation methods include the Fourier, cosine and K-L transforms. The paper concludes with a comparison of some important data compression techniques. Comparing various ECG compression techniques like TURNING POINT, AZTEC, CORTES, FFT and DCT, it was found that DCT is the most suitable compression technique, with a compression ratio of about 100:1.

Keywords: ECG Compression

I. INTRODUCTION

An electrocardiogram (ECG or EKG) is a graphic representation of the heart's electrical activity, formed as the cardiac cells depolarize and repolarise. Electrical impulses in the heart originate in the sinoatrial node and travel through the heart muscle, where they impart the electrical initiation of systole, or contraction of the heart. The electrical waves can be measured at selectively placed electrodes (electrical contacts) on the skin. Electrodes on different sides of the heart measure the activity of different parts of the heart muscle. An ECG displays the voltage between pairs of these electrodes, and the muscle activity that they measure, from different directions.

A typical ECG cycle is defined by the various features (P, Q, R, S, and T) of the electrical wave, as shown in figure 1. The P wave marks the activation of the atria, which are the chambers of the heart that receive blood from the body. Next in the ECG cycle comes the QRS complex. The QRS complex represents the activation of the left ventricle, which sends oxygen-rich blood to the body, and the right ventricle, which sends oxygen-deficient blood to the lungs. During the QRS complex, which lasts about 80 msec, the atria prepare for the next beat, and the ventricles relax in the long T wave [1,2]. It is these features of the ECG signal that a cardiologist uses to analyze the health of the heart and note various disorders.

Figure 1. A typical representation of the ECG waves.

Digital analysis of the electrocardiogram (ECG) signal imposes a practical requirement that digitized data be selectively compressed to minimize analysis effort and data storage space. Therefore, it is desirable to carry out data reduction or data compression. The main goal of any compression technique is to achieve maximum data volume reduction while preserving the significant signal features upon reconstruction. Conceptually, data compression is the process of detecting and eliminating redundancies in a given data set. Shannon defined redundancy as "that fraction of a message or datum which is unnecessary and hence repetitive in the sense that if it were missing the message would still be essentially complete, or at least could be completed." ECG data compression is broadly classified into two major groups: direct data compression and transformation methods. The direct data compression methods base their detection of redundancies on direct analysis of the actual signal samples, whereas transformation methods utilize spectral and energy distribution analysis for detecting redundancies [2,7]. Data compression is achieved by discarding digitized samples that are not important for subsequent pattern analysis and rhythm interpretation. Examples of such data compression algorithms are AZTEC and the turning point (TP) algorithm. AZTEC retains only the samples for which there is a sufficient amplitude change; TP retains points where the signal curves
(such as at the QRS peak) and discards every alternate sample [6]. The data reduction algorithms are often empirically designed to achieve good reduction without causing significant distortion error.

II. SYSTEM DESCRIPTION

The acquired signal is fed to an instrumentation amplifier that amplifies it. The amplifier is used to set the gain, and it also brings the very low amplitude ECG signal up to a perceptible level. The acquisition of a pure ECG signal is of high importance: the ECG signal is in the range of millivolts, which is difficult to analyze, so the prior requirement is to amplify the acquired signal. The amplified output is then fed to an analog-to-digital converter for digitizing the ECG data using an ADC and a microcontroller. In this process the micro-controller is used to set the clocks for picking up the summation of the signals generated by the heart; the heart generates different signals at various nodes [3]. The summation of the signals generated by the heart is taken and then sent for the filtering process, and the digital output of the ECG is displayed on an LCD.

Figure 2. Basic block diagram of the ECG module.

After the filtering process, the signal is ready for transmission, but it is important to compress it so that it can be transmitted at a faster rate, as shown in figure 2.

III. COMPRESSION TECHNIQUES

Data compression techniques are categorized as those in which the compressed data are reconstructed to form the original signal and those in which higher compression ratios can be achieved by introducing some error in the reconstructed signal. The effectiveness of an ECG compression technique is described in terms of the compression ratio (CR), a ratio of the size of the compressed data to the original data; the execution time, the computer processing time required for compression and reconstruction of the ECG data; and a measure of error loss, measured as the percent root-mean-square difference (PRD) [5]. The PRD is calculated as follows:

PRD = sqrt( Σ (ORG(n) − REC(n))² / Σ ORG(n)² ) × 100

where ORG is the original signal and REC is the reconstructed signal. The lower the PRD, the closer the reconstructed signal is to the original ECG data [10,11]. The various compression techniques — the AZTEC, TP, CORTES, DFT and FFT algorithms — are compared in terms of PRD and compression ratio, and the most suitable one is selected.

The amplitude zone time epoch coding algorithm (AZTEC) converts the original ECG data into horizontal lines (plateaus) and slopes [4]. Slopes are formed when the length of a plateau is less than three. The information saved for a slope is the length of the slope and its final amplitude. The turning point technique (TP) always produces a 2:1 compression ratio; it accomplishes this by replacing every three data points with the two that best represent the slope of the original three points. The coordinate reduction time encoding system (CORTES) combines the high compression ratios of the AZTEC system with the high accuracy of the TP algorithm.

A. Direct Data Compression Methods

1. Turning Point Algorithm
1) Acquire the ECG signal.
2) Take the first three samples x0, x1, x2 and evaluate the sign of (x1 − x0)·(x2 − x1).
3) If (x1 − x0)·(x2 − x1) < 0 (a turning point lies between the samples), x1 is stored; otherwise x2 is stored.
4) Reconstruct the compressed signal.

The compression ratio of the Turning Point algorithm is 2:1; if higher compression is required, the same algorithm can be applied to the already compressed signal so that it is further compressed to a ratio of 4:1. But after the second compression the required data in the signal may be lost, since the samples are overlapped on one another. Therefore, the TP algorithm is usually limited to a compression ratio of 2:1, although it can be applied to the already compressed data to increase the compression ratio to 4:1 [7, 13]. As shown in figure 3, the Turning Point method is basically an adaptive down sampling method developed especially for ECGs: it reduces the sampling frequency of an ECG signal by a factor of two.
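A compact Python sketch of the Turning Point rule described above (an illustration of the published algorithm, not code from this paper) is:

```python
import numpy as np

def turning_point(x):
    """Turning Point (TP) compression: keep one of every two samples after x[0],
    preferring the sample at which the slope changes sign (about a 2:1 ratio)."""
    x = np.asarray(x, dtype=float)
    out = [x[0]]
    x0 = x[0]                       # last retained sample
    i = 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        if (x1 - x0) * (x2 - x1) < 0:   # turning point between x1 and x2: keep x1
            x0 = x1
        else:                            # monotone run: keep x2
            x0 = x2
        out.append(x0)
        i += 2
    return np.array(out)

t = np.linspace(0, 1, 200)
toy = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * (t - 0.5) ** 2)   # toy waveform, not real ECG
compressed = turning_point(toy)
print(len(toy), len(compressed))        # roughly a 2:1 reduction
```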
Figure 3. Turning point compression analysis.

2. AZTEC Algorithm

Another commonly used technique is known as AZTEC (Amplitude Zone Time Epoch Coding). This converts the ECG waveform into plateaus (flat line segments) and sloping lines. As there may be two consecutive plateaus at different heights, the reconstructed waveform shows discontinuities. Even though AZTEC provides a high data reduction ratio, the fidelity of the reconstructed signal is not acceptable to the cardiologist because of the discontinuity (step-like quantization) that occurs in the reconstructed ECG waveform [12,13], as shown in Figure 4. The AZTEC algorithm is implemented in two phases:

2.1. Horizontal Mode
1) Acquire the ECG signal.
2) Assign the first sample to Xmax and Xmin, which represent the highest and lowest elevations of the current line.
3) Check the following conditions and update the plateau: if X1 > Xmax then Xmax = X1, and if X1 < Xmin then Xmin = X1, and so on up to sample Xn. Repeat this until one of the following two conditions is satisfied: the difference between VMAX and VMIN exceeds a predetermined threshold, or the line length exceeds 50.
4) The stored values are the length L = S − 1, where S is the number of samples, and the average amplitude of the plateau, (VMAX + VMIN)/2.
5) The algorithm then starts assigning the next samples to Xmax and Xmin.

2.2. Slope Mode
1) If the number of samples is ≤ 3, the line parameters are not saved; instead the algorithm begins to produce slopes.
2) The direction of the slope is determined by checking the following conditions:
a) If (X2 − X1)·(X1 − X0) is positive then the slope is positive.
b) If (X2 − X1)·(X1 − X0) is negative then the slope is negative.
3) The slope is terminated if the number of samples is ≥ 3 and the direction of the slope changes.

Figure 4. AZTEC compression analysis.

3. CORTES Algorithm

An enhanced method known as CORTES (Coordinate Reduction Time Encoding System) applies TP to some portions of the waveform and AZTEC to other portions, and does not suffer from discontinuities. If an AZTEC line is longer than the AZTEC line-length threshold Lth, CORTES saves the AZTEC line; otherwise it saves the TP data, as shown in Figure 5.
1) Acquire the ECG signal.
2) Define Vth and Lth.
3) Find the current maximum and minimum.
4) If the sample is greater than the threshold, compare the length with Lth.
5) If (len > Lth) apply AZTEC, else apply TP.
6) Plot the compressed signal.

Figure 5. CORTES compression analysis.
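For illustration, here is a minimal Python sketch of the AZTEC horizontal (plateau) mode described in section 2.1; slope handling is omitted and the threshold values are arbitrary, so this is not the implementation used for the results below.

```python
import numpy as np

def aztec_plateaus(x, vth, max_len=50):
    """AZTEC horizontal mode only: emit (length, amplitude) plateaus.

    A plateau grows while its peak-to-peak excursion stays within vth and its
    length stays within max_len; slope handling (section 2.2) is omitted here.
    """
    x = np.asarray(x, dtype=float)
    plateaus, start = [], 0
    while start < len(x):
        vmax = vmin = x[start]
        end = start + 1
        while end < len(x) and (end - start) < max_len:
            new_max, new_min = max(vmax, x[end]), min(vmin, x[end])
            if new_max - new_min > vth:
                break
            vmax, vmin, end = new_max, new_min, end + 1
        plateaus.append((end - start, 0.5 * (vmax + vmin)))
        start = end
    return plateaus

t = np.linspace(0, 1, 400)
sig = np.sin(2 * np.pi * 2 * t)                          # toy waveform, not real ECG
print(len(sig), len(aztec_plateaus(sig, vth=0.1)))       # samples vs. stored plateaus
```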
B. Transformation Methods

1. FFT Compression
1) Separate the ECG record into its three components x, y, z.
2) Find the frequency and the time between two samples.
3) Find the FFT of the ECG signal and check for FFT coefficients (before compression) equal to 0; increment counter A if a coefficient lies between +25 and −25 and set it to Index = 0.
4) Check for FFT coefficients (after compression) equal to 0 and increment counter B.
5) Calculate the inverse FFT and plot the decompressed signal and the error.
6) Calculate the compression ratio and PRD, as shown in Figure 6.

Figure 6. FFT compression analysis.

2. DCT Compression
1) Separate the ECG record into its three components x, y, z.
2) Find the frequency and the time between two samples.
3) Find the DCT of the ECG signal and check for DCT coefficients (before compression) equal to 0; increment counter A if a coefficient lies between +0.22 and −0.22 and set it to Index = 0.
4) Check for DCT coefficients (after compression) equal to 0 and increment counter B.
5) Calculate the inverse DCT and plot the decompressed signal and the error.
6) Calculate the compression ratio and PRD, as shown in Figure 7.

Figure 7. DCT compression analysis.
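The DCT route sketched in the steps above can be illustrated with a few lines of Python (a hedged sketch using scipy's DCT, with an arbitrary threshold standing in for the ±0.22 band mentioned above; it is not the code used to produce the results of this paper): small DCT coefficients are zeroed, the signal is rebuilt with the inverse DCT, and CR and PRD are reported.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, threshold):
    """Zero out small DCT coefficients, reconstruct, and report CR and PRD."""
    c = dct(x, norm="ortho")
    kept = np.abs(c) > threshold            # coefficients retained after thresholding
    c_comp = np.where(kept, c, 0.0)
    rec = idct(c_comp, norm="ortho")
    cr = x.size / max(int(kept.sum()), 1)   # original samples per retained coefficient
    prd = 100 * np.sqrt(np.sum((x - rec) ** 2) / np.sum(x ** 2))
    return rec, cr, prd

t = np.linspace(0, 1, 512)
toy = np.sin(2 * np.pi * 3 * t) + 0.4 * np.sin(2 * np.pi * 17 * t)   # toy signal, not real ECG
rec, cr, prd = dct_compress(toy, threshold=0.22)
print(cr, prd)
```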

IV. SUMMARY

The comparison shown in Table 1 summarizes the results of the ECG data compression schemes and gives the basis for selecting the most suitable compression method. From the table we conclude that the DCT, with a compression ratio of 90.43 and a PRD of 0.93, is the most efficient algorithm for ECG data compression.

Table 1. Comparison of compression techniques.

METHOD | COMPRESSION RATIO | PRD
CORTES | 4.8 | 3.75
TURNING POINT | 5 | 3.20
AZTEC | 10.37 | 2.42
FFT | 89.57 | 1.16
DCT | 90.43 | 0.93

(Graph showing compression ratio and PRD.)

CONCLUSION

Compression techniques have been around for many years. However, there is still a continual need for the advancement of algorithms adapted to ECG data compression. The necessity of better ECG data compression methods is even greater today than just a few years ago, for several reasons. The quantity of ECG records is increasing by the millions each year, and previous records cannot be deleted, since one of the most important uses of ECG data is in the comparison of records obtained over a long period of time. ECG data compression techniques are limited by the amount of time required for compression and reconstruction, the noise embedded in the raw ECG signal, and the need for accurate reconstruction of the P, Q, R, S, and T waves.

In this paper the author has tried to unify the various data compression techniques used for ECG data compression [8,9]. The results of this research will likely provide an improvement on existing compression techniques.

REFERENCES
[1] Held, Gilbert, 1987, Data Compression: Techniques and Applications, Hardware and Software Considerations, John Wiley & Sons Ltd.
[2] Lynch, Thomas J., 1985, Data Compression: Techniques and Applications, Van Nostrand Reinhold Company.
[3] D. C. Reddy, (2007) Biomedical Signal Processing: Principles and Techniques, 254–300, Tata McGraw-Hill, third reprint.
[4] P. Abenstein and W. J. Tompkins, "New Data Reduction Algorithm for Real-Time ECG Analysis."
[5] Al-Nashash, H. A. M., 1994, "ECG data compression using adaptive Fourier coefficients estimation," Med. Eng. Phys., Vol. 16, pp. 62–67.
[6] B. R. S. Reddy and I. S. N. Murthy, (1986) ECG data compression using Fourier descriptors, IEEE Trans. Biomed. Eng., BME-33, 428–433.
[7] V. Kumar, S. C. Saxena, and V. K. Giri, (2006) Direct data compression of ECG signal for telemedicine, ICSS, 10, 45–63.
[8] Jalaleddine, C. Hutchens, R. Stratan, and W. A. Coberly, (1990) ECG data compression techniques — a unified approach, IEEE Trans. Biomed. Eng., 37, 329–343.
[9] IEEE Trans. Biomed. Eng., 15, 128–129, 1968.
[10] Hamilton, Patrick S., 1991, "Compression of the Ambulatory ECG by Average Beat Subtraction and Residual Differencing," IEEE Transactions on Biomedical Engineering, Vol. 38, No. 3, pp. 253–259.
[11] Grauer, Ken, 1992, A Practical Guide to ECG Interpretation, Mosby-Year Book, Inc.
[12] J. R. Cox, F. M. Nolle, H. A. Fozzard, and G. C. Oliver, (1968) AZTEC, a pre-processing program for real-time ECG rhythm analysis, IEEE Trans. Biomed. Eng., BME-15, 128–129.
[13] J. L. Semmlow, Biosignal and Biomedical Image Processing: MATLAB-Based Applications, 4–29.
[14] N. S. Jayant, P. Noll, Digital Coding of Waveforms, Englewood Cliffs, NJ, Prentice-Hall, 1984.

Biologically inspired Cryptanalysis- A Review


Ashutosh Mishra*, Dr. Harsh Vikram Singh**, S.P. Gangwar**
*Student (M.Tech), KNIT SULTANPUR, ** Astt. Prof. Dept. of Electronics Engineering, KNIT SULTANPUR

Abstract:- Data security to ensure authorized access of information and fast delivery to a variety of end users with guaranteed Quality of Service (QoS) are important topics of current relevance. In data security, cryptology is introduced to guarantee the safety of data; it is divided into cryptography and cryptanalysis. Cryptography is a technique to conceal information by means of encryption and decryption, while cryptanalysis is used to break the encrypted information using some methods. Biologically Inspired Techniques (BIT) are methods that take ideas from biology to be used in cryptography. BIT is a field that has been widely used in many computer applications such as pattern recognition, computer and network security, and optimization. Some examples of BIT approaches are the genetic algorithm (GA), ant colony optimization and the artificial neural network (ANN). GA and ant colony optimization have been successfully applied in the cryptanalysis of classical ciphers. Therefore, this paper reviews these techniques and explores the potential of using BIT in cryptanalysis.

Keywords: Cryptanalysis, Genetic Algorithm, Artificial Neural Network, Ant Colony.

1 Introduction
There are many cryptographic algorithms (ciphers) that have been developed for information security purposes, such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA). These are examples of modern ciphers. The foundation of these algorithms, especially block ciphers, is mainly based on the concepts of classical ciphers such as substitution and transposition. For instance, DES uses only three simple operators, namely substitution, permutation (transposition) and bit-wise exclusive-OR (XOR) [2]. BIT is a field that has caught the interest of many researchers, and the ability to use BIT approaches in various fields has been proven. Clark [6] encourages those who do research in BIT, especially related to ants, swarms and artificial neural networks, to examine the application of those techniques in cryptology. He also states that a good place to start is classical cipher cryptanalysis or Boolean function design. This paper is organized as follows: first, we review the simple substitution cipher, the columnar transposition cipher and the permutation cipher, which are types of classical cipher, in Section 2. In Section 3, some biologically inspired techniques are explained, and the use of these approaches in cryptanalysis is reviewed in Section 4. Finally, conclusions are given in Section 5.

2 Classical Ciphers
Classical ciphers are often divided into substitution ciphers and transposition ciphers. There are many types of these ciphers. In this paper, we focus on the simple substitution cipher and two types of transposition cipher, namely the columnar transposition cipher and the permutation cipher. These ciphers are vulnerable to ciphertext-only attacks using frequency analysis.
Basically, a simple substitution cipher is a technique of replacing each character with another character. The mapping function used to replace the characters is represented by the key. For the purpose of this study, white spaces are ignored, while other special characters such as commas and apostrophes are removed. Example 1 shows a simple substitution cipher:
Alphabet: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

Key: M N F Q Y A J G R Z K B H S L C I V U D O W T E P X
Example 1
Plain text: KAMLA NEHRU INSTITUTE OF TECHNOLOGY
Cipher text: KMHBM SYGVO RSUDRDODY LA DYFGSLBLJP

The idea of a transposition cipher is to move a character from one position to another. In the columnar transposition cipher, the plaintext is written into a table with a fixed number of columns. The number of columns depends on the length of the key, and the key represents the order of the columns that will become the cipher text. We only consider the 26 characters of the alphabet, so all special characters are removed. For example, the plaintext "KAMLA NEHRU INSTITUTE OF TECHNOLOGY" with the key "4726135" is transformed to cipher text by inserting it into a table, as shown in Example 2.

4 7 2 6 1 3 5
K A M L A N E
H R U I N S T
I T U T E O F
T E C H N O L
O G Y P Q R S
Example 2

Four dummy characters (here, P, Q, R and S) are added to complete the rectangle, and the cipher text can be written in groups of five characters [4]. So the cipher text of this cipher is "KHITO ARTEG MUUCY LITHP ANENQ NSOOR ETFLS".

The permutation cipher operates by rearranging the characters of the plaintext block by block, based on a key. The size of the block is the same as the length of the key, and the cipher text can also be written in groups of five characters. Using the same plaintext and key as in the previous example, the cipher text of the permutation cipher is produced as depicted in Example 3:

Key: plain text order: 1 2 3 4 5 6 7; cipher text order: 4 7 2 6 1 3 5
Order: 1234567 1234567 1234567 1234567 1234567
Example 3
Plain text: KAMLANE HRUINST ITUTEOF TECHNOL GYPQRSX
Cipher text: LEANK MAITR SHUNT FTOIU EHLEO TCNQX YSGPR (P, Q, R, S and X are dummy characters)

In both the simple substitution cipher and the transposition cipher there is the same disadvantage with regard to the frequency of characters. Based on Example 1, the character K is replaced with K, A with M, and so forth. Therefore, the frequency of each character in the plaintext will be exactly the same as the frequency of its corresponding cipher text character. Hence, the encryption algorithm preserves the frequency of characters of the plaintext in the cipher text, because it merely replaces one character with another. Still, the frequency of characters depends on the length of the text and, probably, some characters are not even used in the plaintext. As shown in the above example, the characters P, Q and R do not exist in the plaintext. Therefore, many researchers use frequency analysis for cryptanalysis of the simple substitution cipher. Analyses are done using the frequency of single characters (unigrams), double characters (bigrams), triple characters (trigrams) and so on (n-grams). The technique used to compare candidate keys for the simple substitution cipher is to compare the frequency of n-grams of the cipher text with that of the language of the text. In the effort of attacking the transposition cipher, the multiple anagramming attack can be used.
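The two ciphers of Examples 1 and 2 are easy to reproduce in software. The Python sketch below is our illustration (it is not part of the original paper): it encrypts the same plaintext with the substitution key of Example 1 and with the columnar transposition of Example 2; the helper names and the pool of dummy padding characters are our own choices.

```python
import string

ALPHABET = string.ascii_uppercase
SUB_KEY = "MNFQYAJGRZKBHSLCIVUDOWTEPX"   # key of Example 1: A->M, B->N, C->F, ...

def substitute(plaintext: str, key: str = SUB_KEY) -> str:
    """Simple substitution: replace each plaintext letter by the key letter at the same index."""
    table = {p: c for p, c in zip(ALPHABET, key)}
    return "".join(table[ch] for ch in plaintext.upper() if ch in table)

def columnar_transposition(plaintext: str, ncols: int = 7) -> str:
    """Write the plaintext row by row under the key labels (key '4726135' labels the 7 columns),
    pad the incomplete last row with dummy characters, then, as in Example 2,
    read the columns out from left to right in groups of five characters."""
    text = "".join(ch for ch in plaintext.upper() if ch.isalpha())
    needed = (-len(text)) % ncols
    text += "PQRSTUVWXYZ"[:needed]        # dummy characters; the paper uses P, Q, R, S
    rows = [text[i:i + ncols] for i in range(0, len(text), ncols)]
    return " ".join("".join(row[c] for row in rows) for c in range(ncols))

if __name__ == "__main__":
    pt = "KAMLA NEHRU INSTITUTE OF TECHNOLOGY"
    print(substitute(pt))               # KMHBMSYGVORSUDRDODYLADYFGSLBLJP
    print(columnar_transposition(pt))   # KHITO ARTEG MUUCY LITHP ANENQ NSOOR ETFLS
```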


The cipher text is written into a table in which the number of columns represents the length of the key. For the columnar cipher, the cipher text is written into the table column by column from left to right, while in the permutation cipher the cipher text is written row by row from top to bottom. After that, the columns are rearranged to form readable plaintext in every row.

3 Biologically Inspired Techniques
BIT is a method that takes ideas from biology to be used in computing. It relies heavily on the fields of biology, computer science and mathematics. Some of the BIT approaches are GA, artificial neural network (ANN), DNA, cellular automata, ant colony, particle swarm optimization and membrane computing. Four of these techniques, namely GA, ant colony, ANN and cellular automata, are described later in this section.

3.1 Genetic Algorithm
The Genetic Algorithm (GA) is a technique used to optimize a searching process and was introduced by Holland in 1975 [5]. The algorithm is based on natural selection in the biological sciences [7]. There are several processes in GA, namely selection, mating and mutation. At the beginning of the cycle, a random population is created as the first generation. The elements that make up the population are potential solutions to the problem, and the population is represented by strings. Then, pairs of strings are selected based on a certain criterion called a fitness function. These pairs are known as parents and will be mated to produce children. The children are then mutated based on a mutation rate, because not all children are mutated. After the mutation process, a new population is formed (the next generation). The cycle continues until some stopping condition is met, such as a maximum number of generations. This algorithm has been successfully applied in the cryptanalysis of classical and modern ciphers such as simple substitution, polyalphabetic, transposition, knapsack, rotor machine, RSA and TEA. We will further explore the usage of this algorithm in cryptanalysis in Section 4.

3.2 Ant Colony Optimization
Ant colony optimization is inspired by the pheromone-trail laying and following behavior of real ants, which use pheromones as a communication medium. This approach was proposed for solving hard combinatorial optimization problems [9]. An important aspect of ant colonies is that the collective action of many ants results in the location of the shortest path between a food source and a nest. The standard ant colony optimization (ACO) algorithm contains a probabilistic transition rule, goodness evaluation and pheromone updating [6]. In cryptanalysis, the ACO algorithm has been applied in breaking the transposition cipher and block ciphers. The cryptanalysis of the transposition cipher published in [6] is reviewed in Section 4 of this paper.
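The three ACO ingredients named above (probabilistic transition rule, goodness evaluation and pheromone updating) can be written down in a few lines. The following Python sketch is a generic illustration on a toy shortest-tour instance; the parameter values (ALPHA, BETA, RHO), the distance matrix and all function names are our own assumptions, not the ACS variant used in the cryptanalysis work reviewed in Section 4.

```python
import random

ALPHA, BETA, RHO = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation rate

def choose_next(current, unvisited, pheromone, heuristic):
    """Probabilistic transition rule: p(j) ~ tau[current][j]**ALPHA * eta[current][j]**BETA."""
    weights = [(pheromone[current][j] ** ALPHA) * (heuristic[current][j] ** BETA)
               for j in unvisited]
    r, acc = random.random() * sum(weights), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def update_pheromone(pheromone, tours, costs):
    """Pheromone updating: evaporate, then deposit an amount inversely proportional to tour cost."""
    n = len(pheromone)
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1.0 - RHO)
    for tour, cost in zip(tours, costs):
        for a, b in zip(tour, tour[1:]):
            pheromone[a][b] += 1.0 / cost

if __name__ == "__main__":
    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]   # toy 4-node instance
    n = len(dist)
    eta = [[0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]
    for _ in range(50):                                   # a few colony iterations
        tours, costs = [], []
        for _ant in range(5):
            tour, unvisited = [0], list(range(1, n))
            while unvisited:                              # build one tour node by node
                nxt = choose_next(tour[-1], unvisited, tau, eta)
                unvisited.remove(nxt)
                tour.append(nxt)
            tours.append(tour)
            costs.append(sum(dist[a][b] for a, b in zip(tour, tour[1:])))  # goodness evaluation
        update_pheromone(tau, tours, costs)
    print(min(costs), tours[costs.index(min(costs))])
```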
3.3 Artificial Neural Network
Artificial Neural Networks (ANN) can be defined as computational systems inspired by theoretical immunology, observed immune functions, principles and mechanisms, in order to solve problems [8]. ANN can be divided into population-based algorithms, such as negative selection and the clonal selection algorithm, and network-based algorithms, such as continuous and discrete immune networks. ANN has been applied to a wide variety of application areas, such as pattern recognition and classification, optimization, data analysis, computer security and robotics [8]. Hart and Timmis et al. categorized these application areas and some others into three major categories, namely learning, anomaly detection and optimization. In optimization, most of the published papers are based on the application of the clonal selection principle, using algorithms such as Clonalg, opt-AINET and the B-cell algorithm. De Castro & Von Zuben [8] proposed a computational implementation of the clonal selection algorithm (now called Clonalg).


The authors compared their algorithm's performance with GA for multi-modal optimization and argue that their algorithm was capable of detecting a high number of sub-optimal solutions, including the global optimum of the function being optimized. Castro [8] extended this work by using the immune network metaphor for multi-modal optimization. Clonal selection has also been used in the optimization of dynamic functions, and the result has been compared with the evolution strategies (ES) algorithm. The comparison is based on time and performance and shows that clonal selection is better than ES in small-dimension problems; however, in higher dimensions ES outperformed clonal selection in time and performance. Other than that, some authors applied Clonalg to a scheduling problem, under the name clonal selection algorithm for examination timetabling (CSAET). The research shows that CSAET is successful in solving problems related to scheduling, and in the comparison performed between CSAET, GA and a memetic algorithm, CSAET produced output of quality as good as those algorithms. Therefore, the literature shows that ANN is capable of producing good results in various fields, especially regarding optimization. It is hoped that ANN will also find its way into cryptanalysis.

3.4 Cellular Automata
A cellular automaton is a decentralized computing model providing an excellent platform for performing complex computation with the help of only local information. Nandi et al. presented an elegant low-cost scheme for CA based cipher system design. Both block ciphering and stream ciphering strategies designed with programmable cellular automata (PCA) have been reported. Recently, an improved version of the cipher system has been proposed.

4 BIT in cryptanalysis
Classical ciphers have been successfully attacked using various metaheuristic techniques. A metaheuristic is a heuristic method for solving a very general class of computational problems, and such techniques are therefore commonly used in combinatorial optimization problems. Some of the metaheuristic techniques that have been successfully applied in the cryptanalysis of classical ciphers are the genetic algorithm, simulated annealing, tabu search, ant colony optimization and hill climbing. In this paper, we review BIT techniques that have been successfully applied in the cryptanalysis of classical ciphers (simple substitution and transposition ciphers). Spillman et al. published their paper on the cryptanalysis of the simple substitution cipher using a genetic algorithm in 1993. The paper is an early work using GA in cryptanalysis and is a good choice for re-implementation and comparison [4]. In [4], the authors review some ideas about the genetic algorithm before they show the steps of how the algorithm is applied in the cryptanalysis. The aim of the attack is to find possible key values based on the frequency of characters in the cipher text. The key is sorted from the most frequent to the least frequent characters in the English language. In the selection process, pairs of keys (parents) are randomly selected, based on a fitness function, from the population (which contains a set of keys randomly generated for the first generation). The fitness function compares the unigram and bigram frequencies of characters in the known language with the corresponding frequencies in the cipher text. Keys with a higher fitness value have a greater chance of being selected. Mating is done by combining each pair of parents to produce a pair of children; the children are formed by comparing every element (character) in each pair of parents. After that, one character in the key can be changed to a randomly selected character, based on a mutation rate, in the mutation process. The selection, mating and mutation processes continue until a stopping criterion is met.
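A compact Python sketch of this GA cycle is given below, assuming the key is represented as a 26-letter permutation as in Example 1. It is our illustration, not the algorithm of [4] or [17]: the reference letter frequencies, population size and rates are illustrative values, the fitness uses unigram frequencies only (the papers above also use bigrams), and the mating step is replaced by a simpler mutation-only refill for brevity.

```python
import random
import string
from collections import Counter

ALPHABET = string.ascii_uppercase
# Approximate English letter frequencies in percent (illustrative values only).
ENGLISH = {'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5, 'I': 7.0, 'N': 6.7,
           'S': 6.3, 'H': 6.1, 'R': 6.0, 'D': 4.3, 'L': 4.0, 'U': 2.8}

def decrypt(ciphertext, key):
    """key[i] is the ciphertext letter that plaintext ALPHABET[i] maps to."""
    inverse = {c: p for p, c in zip(ALPHABET, key)}
    return "".join(inverse.get(ch, "") for ch in ciphertext)

def fitness(ciphertext, key):
    """Higher is better: negative distance between decrypted unigram
    frequencies and the reference English frequencies."""
    text = decrypt(ciphertext, key)
    counts, total = Counter(text), max(len(text), 1)
    return -sum(abs(100.0 * counts.get(ch, 0) / total - ENGLISH.get(ch, 0.5))
                for ch in ALPHABET)

def mutate(key, rate=0.2):
    """With probability `rate`, swap two positions (keeps the key a permutation)."""
    key = list(key)
    if random.random() < rate:
        i, j = random.sample(range(26), 2)
        key[i], key[j] = key[j], key[i]
    return "".join(key)

def next_generation(ciphertext, population, keep=10):
    """One GA cycle: rank by fitness, keep the best keys as parents and refill the
    population with mutated copies (a crossover-free simplification of the attack)."""
    ranked = sorted(population, key=lambda k: fitness(ciphertext, k), reverse=True)
    parents = ranked[:keep]
    children = [mutate(random.choice(parents)) for _ in range(len(population) - keep)]
    return parents + children

if __name__ == "__main__":
    cipher = "KMHBMSYGVORSUDRDODYLADYFGSLBLJP"      # cipher text of Example 1
    population = ["".join(random.sample(ALPHABET, 26)) for _ in range(40)]
    for _ in range(200):                            # stopping condition: max generations
        population = next_generation(cipher, population)
    print(fitness(cipher, population[0]), decrypt(cipher, population[0]))
```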


Another paper published in 1993 utilizing a genetic algorithm in cryptanalysis was by Matthews; however, that paper focuses on the transposition cipher. The attack is known as GENALYST. The attack finds the correct key length and the correct permutation of the key of a transposition cipher. Matthews uses a list containing ten bigrams and trigrams that have been given weight values to calculate the fitness. For instance, the trigrams 'THE' and 'AND' are given a score of '+5' while 'HE' and 'IN' are given a score of '+1'. Matthews also gives a '-5' score to the trigram 'EEE'. This is because, although 'E' is very common in English, a word containing a sequence of three 'E's is very uncommon in normal English text. Higher fitness values have a greater chance of being selected. After the selection process has been done, mating is performed using a position-based crossover method. Then, the mutation process is applied. There are two possible mutation types that can be applied: first, randomly swap two elements, and second, shift all elements forward by a random number of places. The experiment was done using a population size of 20, 25 generations, and a crossover rate decreasing from 8.0 to 0.5. The results show that GENALYST is successful in breaking the cipher with key lengths of 7 and 9.

Ant colony optimization has also been successfully implemented in the cryptanalysis of the transposition cipher published in [8]. The paper uses a specific ant algorithm, named the Ant Colony System (ACS), with known success on the Traveling Salesman Problem (TSP), to break the cipher. The authors used the bigram adjacency score, Adj(I, J), to define the average probability of the bigrams created by juxtaposing columns I and J; the score will be higher for two correctly aligned columns. Other than that, they also used a dictionary heuristic, Dict(M), for the recognition of plaintext. The authors also made a comparison between the results produced by ACS and the results of previous metaheuristic techniques on the transposition cipher, which involve differing heuristics, processing times and success criteria. The comparison shows that the ACS algorithm can decrypt cryptograms which are significantly shorter than other methods can, due to the use of dictionary heuristics in addition to bigrams.

5 Conclusion
This paper reviews works on the cryptanalysis of classical ciphers using BIT approaches. The types of classical ciphers involved are the simple substitution and transposition ciphers, while GA and ant colony optimization are the techniques used. GA has been applied to both ciphers, but only the transposition cipher was found to have been attacked using ant colony optimization. ANN is also found to be a promising approach to be employed in cryptanalysis, based on its ability to solve optimization problems. Therefore, the application of ANN in cryptanalysis should be further studied.

References
[1] RSA, from Wikipedia. http://en.wikipedia.org/wiki/RSA.
[2] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, New York, NY, 1997.
[3] S. Nandi, B. K. Kar, and P. Pal Chaudhuri. Theory and applications of cellular automata in cryptography. IEEE Transactions on Computers, 43(12):1346-1357, 1994.
[4] Lin, Feng-Tse, & Kao, Cheng-Yan. (1995). A genetic algorithm for ciphertext-only attack in cryptanalysis. In IEEE International Conference on Systems, Man and Cybernetics, 1995, pp. 650-654, vol. 1.
[5] Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press.
[6] Clark, J. A. (2003). Invited Paper. Nature-Inspired Cryptography: Past, Present and Future. IEEE Conference on Evolutionary Computation 2003. Special Session on Evolutionary Computation and Computer Security. Canberra.


[7] Goldberg, D. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
[8] de Castro, L. N. (2002). Immune, Swarm and Evolutionary Algorithms Part I: Basic Models. International Conference on Neural Information Processing, Vol. 3, pp. 1464-1468.
[9] S. N. Sivanandam, S. N. Deepa. Introduction to Genetic Algorithms. Springer-Verlag Berlin Heidelberg, 2008.
[10] Xu Xiangyang. The block cipher for construction of S-boxes based on particle swarm optimization. 2nd International Conference on Networking and Digital Society (ICNDS), 2010, pp. 612-615.
[11] Uddin, M.F.; Youssef, A.M. Cryptanalysis of Simple Substitution Ciphers Using Particle Swarm Optimization. IEEE Congress on Evolutionary Computation, 2006, pp. 677-680.
[12] Mohammad Faisal Uddin; Amr M. Youssef. An Artificial Life Technique for the Cryptanalysis of Simple Substitution Ciphers. Canadian Conference on Electrical and Computer Engineering, 2006, pp. 1582-1585.
[13] Khan, S.; Shahzad, W.; Khan, F.A. Cryptanalysis of Four-Rounded DES Using Ant Colony Optimization. International Conference on Information Science and Applications (ICISA), 2010, pp. 1-7.
[14] Ghnaim, W.A.-E.; Ghali, N.I.; Hassanien, A.E. Known-ciphertext cryptanalysis approach for the Data Encryption Standard technique. International Conference on Computer Information Systems and Industrial Management Applications (CISIM), 2010, pp. 600-603.
[14] AbdulHalim, M.F.; Attea, B.A.; Hameed, S.M. A binary Particle Swarm Optimization for attacking knapsacks Cipher Algorithm. International Conference on Computer and Communication Engineering, 2008, pp. 77-81.
[15] Schmidt, T.; Rahnama, H.; Sadeghian, A. A review of applications of artificial neural networks in cryptosystems. World Automation Congress (WAC), 2008, pp. 1-6.
[16] Godhavari, T.; Alamelu, N.R.; Soundararajan, R. Cryptography Using Neural Network. INDICON, 2005 Annual IEEE, pp. 258-261.
[17] R. Spillman, M. Janssen, B. Nelson, and M. Kepner. Use of a genetic algorithm in the cryptanalysis of simple substitution ciphers. Cryptologia, 1993, 17(1):31-44.
[18] Diffie, W. and Hellman, M. (1976). New Directions in Cryptography. IEEE Transactions on Information Theory, 22(6): 644-654.
[19] Tarek Tadros, Abd El Fatah Hegazy, and Amr Badr. Genetic Algorithm for DES Cryptanalysis. IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 5, May 2010.
[20] Forrest, S., Perelson, A. S., Allen, L. and Cherukuri, R. (1994). Self-nonself Discrimination in a Computer. Proceedings of IEEE Symposium on Research in Security and Privacy, Los Alamos, CA. IEEE Computer Society Press.
[21] Stallings, W. (2003). Cryptography and Network Security: Principles and Practices, 3rd Edition. Upper Saddle River, New Jersey: Prentice Hall.
[22] Spillman, R. (1993). Cryptanalysis of Knapsack Ciphers Using Genetic Algorithms. Cryptologia, XVII(4):367-377.
[23] Clark, J.A. (2003). Nature-Inspired Cryptography: Past, Present and Future. In Proceedings of Conference on Evolutionary Computation, 8-12 December. Canberra, Australia.
[24] Clark, A. (1998). Optimization Heuristics for Cryptology. Ph.D. Dissertation, Faculty of Information Technology, Queensland University of Technology, Australia.


[25] Bagnall, A.J. (1996). The Applications of Genetic Algorithms in Cryptanalysis. M.Sc. Thesis. School of Information Systems, University of East Anglia.
[26] Dimovski, A., Gligoroski, D. (2003). Attack on the Polyalphabetic Substitution Cipher Using a Parallel Genetic Algorithm. Technical Report, Swiss-Macedonian Scientific Cooperation through SCOPES Project, March 2003, Ohrid, Macedonia.
[27] Dimovski, A., Gligoroski, D. (2003). Attacks on Transposition Cipher Using Optimization Heuristics. In Proceedings of ICEST 2003, October, Sofia, Bulgaria.
[28] Morelli, R.A. and Walde, R.E. (2003). A Word-Based Genetic Algorithm for Cryptanalysis of Short Cryptograms. Proceedings of the 2003 Florida Artificial Intelligence Research Symposium (FLAIRS-2003), pp. 229-233.
[29] Morelli, R.A., Walde, R.E., Servos, W. (2004). A Study of Heuristic Search Algorithms for Breaking Short Cryptograms. International Journal of Artificial Intelligence Tools (IJAIT), Vol. 13, No. 1, pp. 45-64, World Scientific Publishing Company.
[30] Servos, W. (2004). Using Genetic Algorithm to Break Alberti Cipher. Journal of Computing Science in Colleges, Vol. 19(5): 294-295.
[31] Hernandez, J.C., Sierra, J.M., Isasi, P., Ribagorda, A. (2002). Genetic Cryptanalysis of Two Rounds TEA. ICCS 2002, LNCS 2331, 1024-1031, Springer-Verlag Berlin Heidelberg.
[32] Ali, H. and Al-Salami, M. (2004). Timing Attack Prospect for RSA Cryptanalysis Using Genetic Algorithm Technique. The International Arab Journal of Information Technology, 1(1).
[33] Millan, W., Clark, A. and Dawson, E. (1997). Smart Hill Climbing Finds Better Boolean Functions. Proceedings of the 4th Annual Workshop on Selected Areas in Cryptography, Aug. 11-12, SAC 1997.
[34] Millan, W., Clark, A. and Dawson, E. (1998). Heuristic Design of Cryptographically Strong Balanced Boolean Functions. Advances in Cryptology - EUROCRYPT '98, LNCS 1403, 489-499, Springer-Verlag, Berlin Heidelberg.
[35] Dimovski, A., Gligoroski, D. (2003). Generating Highly NonLinear Boolean Functions Using a Genetic Algorithm. In Proceedings of the 1st Balcan Conference on Informatics, November, Thessaloniki, Greece.


EYE BASED CURSOR MOVEMENT USING EEG IN BRAIN COMPUTER INTERFACE
Tariq S Khan#, Mudassir Ali#, Omar Farooq#, Yusuf U Khan*,

#Department of Electronics Engineering, Zakir Husain College of Engineering & Technology


*Department of Electrical Engineering, Zakir Husain College of Engineering & Technology
Aligarh Muslim University, Aligarh

Abstract— The aim of this study is to detect eye movement (left to right) from the Electroencephalograph (EEG) signal. Four EEG electrodes in the frontal area were used. Statistical features were extracted from the four frontal channels. These features were then fed into a classifier based on the linear discriminant function. The most prominent features for the classification of left and right movements were identified. These features were then interfaced with a computer so that cursor movement can be controlled. Electrodes are placed along the scalp following the 10-20 International System of Electrode Placement. The recorded data was filtered, windowed and analysed in order to extract features. Four different classifiers were used. The best results were found with the support vector machine (SVM) and linear classifiers, each of which gave an average accuracy of 90%.

Keywords: BCI, Eye movement, EEG.

I. INTRODUCTION
A brain-computer interface (BCI) provides an alternative communication channel between the human brain and a computer by using pattern recognition methods to convert brain waves into control signals. Patients who suffer from severe motor impairments (severe cerebral palsy, head trauma and spinal injuries) may use such a BCI system as an alternative form of communication by mental activity [1]. Using improved measurement devices, computer power and software, multidisciplinary research teams in medicine, psychophysiology, medical engineering, and information technology are investigating and realizing new noninvasive methods to monitor and even control human physical functions.

In a bigger picture, there can be devices that would allow severely disabled people to function independently. For a quadriplegic, something as basic as controlling a computer cursor via mental commands would represent a revolutionary improvement in quality of life. With an EEG or implant in place, the subject would visualize closing his or her eyes or moving the eyes from left to right and vice versa [2]. The software can learn eye movement through training, using repeated trials. Subsequently, the classifier may be used to instruct the closure/opening of the eyes. A similar method is used to manipulate a computer cursor, with the subject thinking about forward, left, right and back movements of the cursor [3]. With enough practice, users can gain enough control over a cursor to draw a circle, access computer programs and control a television. It could theoretically be expanded to allow users to "type" with their thoughts.


This can be achieved by controlling cursor movement on a computer screen through EEG signals from the brain, specifically those generated due to eye movement. The signals can be analysed by different methods.

Traditional analysis methods, such as the Fourier Transform and autoregressive modelling, are not suitable for non-stationary signals. Recently, wavelets have been used in numerous applications for a variety of purposes in various fields. They are a logical way to represent and analyse a non-stationary signal with variable-sized region windows and to provide local information. In the Fourier Transform (FT) the time information is lost, and in the Short Term Fourier Transform (STFT) there is limited time-frequency resolution. Even though basic filters can be used for decomposition of the desired bands, ideal filters are never realised in practice, which results in aliasing effects. However, wavelet analysis enables perfect decomposition of the desired bands, which helps us to obtain better features [4].

In this paper different features are used for training the classifier for eye movement in the left and right directions. A time-frequency analysis was applied to the EEG signals from different channels to determine the combination of features and channels that yielded the best classification performance.

II. BACKGROUND RESEARCH
EEG waves are created by the firing of neurons in the brain and were first measured by Vladimir Pravdich-Neminsky, who measured the electrical activity in the brains of dogs in 1912, although the term he used was "electrocerebrogram." Ten years later Hans Berger became the first to measure EEG waves in humans and, in addition to giving them their modern name, began what would become intense research in utilizing these electrical measurements in the fields of neuroscience and psychology. The term "Brain-Computer Interface" first appeared in the scientific literature in the 1970s, though the idea of hooking up the mind to computers was nothing new [5]. Currently, the systems are "open loop" and respond to the user's thoughts only. "Closed loop" systems, which can also give feedback to the user, are aimed to be developed.

In order to meet the requirements of the growing technology expansion, some kind of standardization was required, not only for the guidance of future researchers but also for the validation and checking of new developments against other systems. Thus a general purpose system called BCI2000 was developed, which made the analysis of brain signal recordings easy by defining the output formats and operating protocols, to facilitate researchers in developing any type of application. This made it easier to extract specific features of brain activity and translate them into device control signals [7].

III. OUR METHODOLOGY
The procedure in this study was to initially acquire EEG data. The stored data was then pre-processed to remove artifacts. Subsequently, features were extracted from the clean EEG and used for classification. The methodology is shown in Fig. 1.

[Fig. 1: Block diagram for feature extraction and device control of eye movement — data acquisition, data processing, feature extraction, classification, device/application control]


A. Experimental Setup and Data Acquisition
The subject was seated on a wooden armchair and the legs rested on a wooden footrest (wooden items should be used so as to reduce interference), with eyes closed. The subject was instructed to avoid speaking and to avoid body movement in order to ensure a relaxed body. EEG data were recorded using a Brain Tech Clarity(TM) system [9] with the electrodes positioned according to the standard 10-20 system, in the Biomedical Signal Processing Lab, AMU Aligarh.

To ensure the same rate of eye movement in both directions, a ball was shown on the screen and the subject was asked to visually follow it. The movement of the ball was set to 60 pixels per second. A series of trials was recorded. The subject was instructed to open the eyes slowly and then to follow the movement of the ball in the program on a prompt from the experimenter. Movement of the eyes was recorded for two different directions, i.e. left to right and right to left. A block diagram of the experimental procedure is shown in Fig. 2.

[Fig. 2: Sequence followed during the experimental recording — right-to-left movement, relax, left-to-right movement, relax]

B. Data Processing
26 channels of EEG were recorded. Since only the frontal lobe is mainly involved in eye movement, only those channels associated with the frontal lobe, i.e. FP1-F3, FP1-F7, FP2-F4 and FP2-F8, were analysed. The signal values associated with these channels were extracted in ASCII form using the BrainTech software. The EEG of the frontal lobe channels for subject 1 is illustrated in Fig. 3.

[Fig. 3: Plot of the channels associated with the frontal lobe (fp1f3, fp1f7, fp2f4, fp2f8); amplitude versus number of samples]

The 50 Hz power supply often causes interference in the EEG recording. Fig. 4 shows a plot of the PSD of the EEG record of the FP1-F3 channel. To eliminate these spikes, the signal was passed through an Infinite Impulse Response (IIR) notch filter before analysis.

[Fig. 4: Power Spectral Density of FP1-F3 before and after passing through the notch filter; power (dB) versus frequency (Hz)]

An IIR second-order notch filter with a quality factor (Q factor) of 3.91 was used to remove the undesired frequency components. The signal of the four channels after removing the artifacts, stacked over one another, is shown in Fig. 5.
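As a rough illustration of this preprocessing step, the second-order IIR notch at 50 Hz with Q = 3.91 can be realized with SciPy as sketched below. The 256 Hz sampling rate is our assumption (taken from the 256-sample, one-second frames described in the next section), and the synthetic input stands in for one recorded channel.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 256.0     # assumed sampling rate (Hz): 256 samples per 1 s frame
F0 = 50.0      # powerline interference frequency (Hz)
Q = 3.91       # quality factor used in the paper

def remove_powerline(eeg_channel: np.ndarray) -> np.ndarray:
    """Second-order IIR notch at 50 Hz; filtfilt gives zero-phase filtering."""
    b, a = iirnotch(w0=F0 / (FS / 2.0), Q=Q)   # w0 normalized to the Nyquist frequency
    return filtfilt(b, a, eeg_channel)

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    clean = np.sin(2 * np.pi * 8 * t)                 # stand-in for EEG activity
    noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)  # added 50 Hz interference
    filtered = remove_powerline(noisy)
    print(np.abs(filtered - clean).max())             # residual interference is small
```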


[Fig. 5: Signal plot of the filtered frontal-lobe channels (fp1f3, fp1f7, fp2f4, fp2f8) after passing through the notch filter]

EEG is by nature a non-stationary signal, so it was fragmented into frames so that it can be assumed stationary over a small segment. The EEG data is divided into frames of 1 s duration, i.e. a frame size of 256 samples.

C. Feature Extraction
Feature extraction is the process of discarding the irrelevant information to the extent possible and representing the relevant data in a compact and meaningful form. Two eye movements were recorded: right to left (RTL) and left to right (LTR). Standard statistical parameters such as mean, variance, skewness and cross-correlation were calculated for all the channels in each movement type.

D. Classification
The following classifiers were used to classify the two eye movements:
SVM: a non-probabilistic binary linear classifier.
Linear: fits a multivariate normal density to each group, with a pooled estimate of covariance.
Diaglinear: similar to 'linear', but with a diagonal covariance matrix estimate (naive Bayes classifier).
Quadratic: fits multivariate normal densities with covariance estimates stratified by group.

E. Cursor Control
A program was written which controls the cursor movement according to the instruction given. This program will be calibrated according to the instructions given, i.e. the cursor movement will be invoked instead of the mouse movement, with the instruction being the same as that of the mouse movement. This instruction will then be interfaced with the eye movement, which will then control the movement of the cursor [9].

IV. RESULTS AND DISCUSSIONS
For each frame of EEG, four features were calculated, namely variance, mean, skewness and cross-correlation. The separability provided by each feature was individually tested. The best three features were subsequently used as input to the classifier. Four classifiers were used in this work; the classifier results are illustrated in Table 1. For each movement (LTR and RTL), 20 seconds (20 frames) of data were collected. Of these 20 frames, 15 were used for training and the remaining 5 for testing, for both movements.

Table 1: Percentage accuracy of classification for eye movements

Classifier    RTL    LTR
SVM           80     100
Linear        80     100
Quad          60     40
Diaglinear    80     60

From the observations in Table 1 it can be seen that the linear and SVM classifiers give the best results, with high classification percentage accuracy for both eye movements.
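A minimal sketch of the framing, feature extraction and classification steps described in sections C, D and IV is given below. It is our illustration only: scikit-learn's SVC and linear discriminant analysis stand in for the SVM and 'linear' classifiers used in the paper, random arrays stand in for the recorded RTL/LTR channels, and the cross-correlation feature is computed at zero lag against a second reference channel.

```python
import numpy as np
from scipy.stats import skew
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FRAME = 256   # 1 s frames at 256 samples/s

def frame_signal(channel, frame=FRAME):
    """Split one channel into non-overlapping 1 s frames."""
    n = len(channel) // frame
    return channel[:n * frame].reshape(n, frame)

def frame_features(frames, reference):
    """Per-frame variance, mean, skewness and zero-lag cross-correlation with a reference channel."""
    feats = []
    for x, r in zip(frames, reference):
        xc = (x - x.mean()) / (x.std() + 1e-12)
        rc = (r - r.mean()) / (r.std() + 1e-12)
        feats.append([x.var(), x.mean(), skew(x), float(np.mean(xc * rc))])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholders for 20 s of two channels per movement class (e.g. FP2-F4 and FP1-F3).
    rtl, rtl_ref = rng.normal(size=20 * FRAME), rng.normal(size=20 * FRAME)
    ltr, ltr_ref = rng.normal(size=20 * FRAME) * 2, rng.normal(size=20 * FRAME)

    X = np.vstack([frame_features(frame_signal(rtl), frame_signal(rtl_ref)),
                   frame_features(frame_signal(ltr), frame_signal(ltr_ref))])
    y = np.array([0] * 20 + [1] * 20)          # 0 = RTL, 1 = LTR

    train = np.r_[0:15, 20:35]                 # 15 training frames per class
    test = np.r_[15:20, 35:40]                 # 5 test frames per class
    for name, clf in [("SVM", SVC(kernel="linear")),
                      ("Linear", LinearDiscriminantAnalysis())]:
        clf.fit(X[train], y[train])
        print(name, clf.score(X[test], y[test]))
```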


[Fig. 6: Plot of the classifier in signal space, showing the RTL and LTR samples, the support vectors and the classifier boundary]

A linear classifier classifying both eye movements is shown in Fig. 6.

[Fig. 7: Variance plot of FP2-F4 over time (s) for the RTL and LTR movements]

Fig. 7, which shows the variance for the channel FP2-F4, clearly indicates that the variance of LTR is greater than that of RTL most of the time. Variance basically shows the concentration of the probability density function about the mean.

V. CONCLUSIONS
EEG data was investigated for two eye movements using a 4-channel setup on three subjects. Features were extracted from the variance for both movements. A linear classifier was used to classify between the two eye movements. These algorithms can provide high classification accuracy only after training for a few sessions. In this work 90% accuracy has been achieved in classifying the two movements (RTL & LTR).

ACKNOWLEDGEMENT
The authors are indebted to the UGC. This work is part of the funded major research project C.F. No 32-14/2006(SR).

REFERENCES
[1] The "10-20 System of Electrode Placement", http://faculty.washington.edu/chudler/1020.html
[2] Y. U. Khan (2010). 'Imagined wrist movement classification in single trial EEG for brain computer interface using wavelet packet', Int. J. Biomedical Engineering and Technology, Vol. 4, No. 2, pp. 169-180.
[3] Daniel, J. Szafir (2009-10). 'Non-Invasive BCI through EEG: An Exploration of the Utilization of Electroencephalography to Create Thought-Based Brain-Computer Interfaces'.
[4] Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M. (2002). Brain-computer interfaces for communication and control. Clinical Neurophys., pp. 767-791.
[5] Y. U. Khan and O. Farooq (2009). "Autoregressive features based classification for seizure detection using neural network in scalp Electroencephalogram", International Journal of Biomedical Engineering and Technology, Vol. 2, No. 4, pp. 370-381.
[6] J. Vidal (1973). "Toward Direct Brain-Computer Communication." Annual Review of Biophysics and Bioengineering, Vol. 2, pp. 157-180.
[7] Syed M. Siddique, Laraib Hassan Siddique (2009). EEG based Brain Computer Interface. Journal of Software, Vol. 4, No. 6, pp. 550-555.
[8] EEG Channels in Detecting Wrist Movement Direction Intention. Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems.
[9] Fabiani, Georg E. et al. Conversion of EEG activity into cursor movement by a brain-computer interface. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.128.5914>. 2004.
[10] Clarity Braintech system, Standard edition, Software version 3.4, Hardware version 1.4, Clarity Medical Private Limited.

AN INTERNET BASED INTELLIGENT TELEDIAGNOSIS SYSTEM FOR ARRHYTHMIA
K.A. Sunitha(1), N. Senthil Kumar(2), K. Prema(3), Sandeep Kotikalapudi(4)
(1, 3) Assistant Professor, Instrumentation and Control Engineering Department, SRM University,
(2) Professor, Mepco Schlenk Engineering College, Sivakasi,
(4) Student, Instrumentation and Control Engineering Department, SRM University
(1) sunithasrm@yahoo.in, (2) nsk_vnr@yahoo.com, (3) premsb4u@gmail.com, (4) sandeep_yours18@yahoo.com

Abstract—Due to changing trends, there is an increasing risk of people having cardiac disorders. This is the impetus for developing a system which can diagnose the cardiac disorder and also the risk level of the patient, so that effective medication can be taken in the initial stages. This paper helps in comprehensive diagnosis of the patient without the doctor being in the same geographical location. This will prove to be advantageous for implementation in villages where doctors are not easily accessible. In this paper, the atrial rate, ventricular rate, QRS width and PR interval are extracted from the ECG signal, so that the arrhythmia disorders sinus tachycardia (ST), supra-ventricular tachycardia (SVT), ventricular tachycardia (VT), junctional tachycardia (JT), and ventricular and atrial fibrillation (VF & AF) are diagnosed with their respective risk levels. The system thus acts as a risk analyzer, which tells how far the subject is prone to arrhythmia. LabVIEW Signal Express is used to read the ECG, and for analysis this information is passed to the Fuzzy Module. In the Fuzzy Module various "If-then" rules have been framed to identify the risk level of the patient. The extracted information is then published to the client from the server by using an online publishing tool. After passing the report developed by the system to the doctor, he or she can pass the medical advice to the server, i.e. generally the system where the patient ECG is extracted and analyzed.

Index Terms–LabVIEW, Arrhythmia: sinus tachycardia (ST), supra-ventricular tachycardia (SVT), ventricular tachycardia (VT), junctional tachycardia (JT), ventricular and atrial fibrillation (VF & AF), online publishing tool, QRS width, atrial rate, ventricular rate.

I. INTRODUCTION
According to the World Health Organization (WHO), heart disease and stroke kill around 17 million people a year, which is almost one-third of all deaths globally. By 2020, heart disease and stroke will become the leading cause of both death and disability worldwide. So, it is very clear that proper diagnosis of heart disease is important for patients to survive. The electrocardiogram (ECG) is an important tool for the diagnosis of heart diseases, but it has some drawbacks, such as:
1) Special skill is required to administer and interpret the results of ECG.
2) The cost of ECG equipment is high.


3) Limited availability of ECG equipment.
Due to these drawbacks, telemedicine contacts were mostly used in the past for consultations between special telemedicine centres in hospitals and clinics. More recently, however, providers have begun to experiment with telemedicine contacts between health care providers and patients at home, to monitor conditions such as chronic diseases [1]. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming environment suited for high-level or system-level design. It has been proven that a LabVIEW based telemedicine system has the following features:
1) It replaces multiple stand-alone devices at the cost of a single instrument using virtual instrumentation, and its functionality is expandable [2].
2) It facilitates the extraction of valuable diagnostic information using embedded advanced biomedical signal processing algorithms [2].
3) It can be connected to the internet to create an internet-based telemedicine infrastructure, which provides a comfortable way for physicians to communicate with friends, family and colleagues [3].
Several systems have been developed for the acquisition and analysis of ECG using LabVIEW [4]-[8]. Some systems [5], [7], [8] also dealt with identifying the cardiac disorder, but they lack identification of the patient's risk level for the cardiac disorder and an online publishing system. In this paper, we developed a program not only to access the patient's data but also to diagnose the heart abnormalities, which can be a reference to the doctor or physician for further procedure. This can be taken up from anywhere if an internet connection is available. A fuzzy system is developed to identify the risk level of the patient. A fuzzy system is more accurate than a normal controller because, instead of a condition being either true or false, a partially true case can also be declared. The risk scores can be accurately and exactly calculated for specific records of a person.

II. PROPOSED SYSTEM
Figure 1 shows the proposed fuzzy analyser with the online system.

[Fig. 1: Proposed system]

The ECG waveforms are obtained from the MIT-BIH Database. LabVIEW Signal Express is used to read and analyse the ECG and pass the information to the Fuzzy Module. In the Fuzzy Module various "If-then" rules have been written to identify the risk level of the patient. The extracted information is then published to the client from the server by using different online publishing tools. After passing the information, i.e. the atrial rate, ventricular rate, QRS width and PR interval extracted from the ECG signal, from the patient's system to the doctor's system, the doctor can pass the medical advice to the server, i.e. generally the system where the patient ECG is extracted and analyzed.

A. Internet based System
The internet is used as a to-and-fro vehicle to deliver the virtual medical instruments, the medical data and the prescription from the doctor in real time. An internet-based telemedicine system is shown in Fig. 2. This work involves an internet-based telemonitoring system, which has been developed as an instance of the general client-server architecture presented in Fig. 2. The client-server architecture is defined as follows: the client application provides visualization, archiving, transmission, and contact facilities to the remote user (i.e., the patient). The server, which is located at the physician's end, takes care of the incoming data and organizes patient sessions.


[Fig. 2: Internet based system]

B. LabVIEW
LabVIEW is a graphical programming language developed by National Instruments. Programming with LabVIEW gives a vivid picture of the data flow through the graphical representation in blocks. LabVIEW is used here for acquiring the ECG waveform and also for analyzing parameters like the PR interval, QRS width and heart rates, which are later passed to the fuzzy system. LabVIEW offers a modular approach and parallel computing, which makes it easier to develop complex systems. Debugging tools like probes and Highlight Execution are handy in analyzing where an error actually occurred.

C. Fuzzy system
Fuzzy controllers are widely employed as they are efficient controllers when working with vague values. A fuzzy controller has a rule base in "IF-THEN" fashion, which is used for identification of the risk level of the disease using the weights. A fuzzy system is generally given by Fig. 3.

[Fig. 3: Fuzzy system]

A. Fuzzification
In this system we consider the atrial and ventricular heart rates, the QRS complex width and the PR interval values as the input linguistic variables, which are passed to the inference engine. Based on the rule base and linguistic variables, the fuzzy system output is obtained.

B. Defuzzification
The defuzzified values are the risk levels (high risk, medium risk, low risk), which are obtained according to the weights of the fuzzy variables.

C. Relation between input and output variables
The relationship between input and output is shown by the 3-dimensional Figure 4 below.

[Fig. 4: Relation between input and output]

D. Fuzzy Rules
In this fuzzy system we use the centre of area method as the defuzzification method. The rule base of the fuzzy system consists of rules in the form of "If-Then". The risk levels depend on the number of conditions that are met by the input variables for the respective cardiac disorder. As there is no single rule for identifying the arrhythmia based on heart rate alone, since it can differ from patient to patient, this system is more accurate in determining the arrhythmia since it is not based only on heart rate.
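To make the fuzzification and centre-of-area defuzzification steps above concrete, the small Python sketch below is our illustration only (the actual system uses the LabVIEW fuzzy module): it fuzzifies a ventricular heart rate with triangular membership functions whose breakpoints are invented for the example, and defuzzifies a 0-1 "risk" output by the centroid (centre of area) of the aggregated output sets.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Illustrative membership functions for the ventricular heart rate input (bpm).
VHR_SETS = {"slow": (20, 35, 50), "normal": (50, 75, 100), "fast": (140, 180, 220)}

# Output universe: a 0..1 risk axis with three illustrative output sets.
RISK_AXIS = np.linspace(0.0, 1.0, 201)
RISK_SETS = {"low": (0.0, 0.0, 0.4), "medium": (0.2, 0.5, 0.8), "high": (0.6, 1.0, 1.0)}

def fuzzify(vhr):
    """Degree of membership of a crisp vHR value in each input set."""
    return {name: float(tri(np.array(vhr), *abc)) for name, abc in VHR_SETS.items()}

def defuzzify(firing):
    """Centre of area: clip each output set at its firing strength, aggregate by max,
    then take the centroid of the aggregated shape."""
    agg = np.zeros_like(RISK_AXIS)
    for name, strength in firing.items():
        agg = np.maximum(agg, np.minimum(strength, tri(RISK_AXIS, *RISK_SETS[name])))
    return float(np.sum(RISK_AXIS * agg) / (np.sum(agg) + 1e-12))

if __name__ == "__main__":
    memberships = fuzzify(170)                       # e.g. a vHR of 170 bpm
    # A made-up rule mapping: "fast" drives high risk, "normal" drives low risk.
    firing = {"high": memberships["fast"], "low": memberships["normal"]}
    print(memberships, defuzzify(firing))
```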

The fuzzy rule base acts like a database of rules for selecting the output based on the input quantities. Some of the rules are:
1. IF 'PR interval' IS 'Normal' AND 'vHR' IS '30,40' AND 'aHR' IS '60,75' THEN 'First Degree Block' IS 'No' ALSO 'Third Degree Block' IS 'Medium Risk'
2. IF 'PR interval' IS 'Normal' AND 'vHR' IS '30,40' AND 'aHR' IS '75,90' THEN 'First Degree Block' IS 'No' ALSO 'Third Degree Block' IS 'Medium Risk'
3. IF 'PR interval' IS 'Normal' AND 'vHR' IS '30,40' AND 'aHR' IS '90,100' THEN 'First Degree Block' IS 'No' ALSO 'Third Degree Block' IS 'High Risk'
4. IF 'vHR' IS '150,180' AND 'QRS Width' IS 'Narrow QRS' THEN 'Ventricular Tachycardia at' IS 'Low Risk' ALSO 'Junctional Tachycardia at' IS 'Low Risk' ALSO 'Supra Ventricular Tachy at' IS 'High Risk'
5. IF 'vHR' IS '180,210' AND 'QRS Width' IS 'Normal QRS' THEN 'Ventricular Tachycardia at' IS 'Low Risk' ALSO 'Junctional Tachycardia at' IS 'High Risk' ALSO 'Supra Ventricular Tachy at' IS 'Low Risk'
In this manner, based upon the PR interval, QRS width, and atrial and ventricular heart rates, a fuzzy system is developed to identify the cardiac disorder as well as its level of risk, as sketched below.
system is developed to identify the Cardio disorder signal express to the fuzzy system .
as well as its level of risk.

III. ONLINE PUBLISHING


One of the Unique feature of this system is its
ability to publish or pass the extracted information
to the Client, usually to a doctor`s computer. This
helps in implementing a telediagnosis system. The
doctor will be able to see the diagnosis result along
with risk levels and then pass the information to the
doctor for further advice. Since internet issued for
passing the values to the doctor ,This becomes
immensely help for immediate action to be taken.
This will cater to the need of public health care
centres rural areas where it is difficult to have
cardiologists. And also this system can be used to
assist the doctor in monitoring the patient’s heart
during surgery. Fig 6. Block diagram for calling fuzzy system in
labVIEW


Figure 6 above shows the block diagram of the risk level detection; it shows how the fuzzy system is called from the main panel for diagnosing and indicating the risk level. Fig. 7 shows the front panel which is developed from the fuzzy system and is sent to the doctor, using the web publishing tool, for the second advice. The system also has a database to save the details of the patient, like name, age, sex and symptoms, which can be used the next time.

[Fig. 7: Front panel]

V. CONCLUSION
In this way we have developed a fuzzy system with good accuracy in determining the cardiac disorders with their risk levels, when compared to a normal system, considering the atrial and ventricular heart rates, QRS complex width and PR interval values as the input linguistic variables, using LabVIEW. The report is successfully sent to the doctor's system, using the web publishing tool, for the second advice.

REFERENCES:-
[1] N. Noury and P. Pilichowski, "A telematic system tool for home health care," in Proc. IEEE 14th Annu. Int. Conf. EMBS, Paris, Oct. 1992, pp. 1175-1177.
[2] Zhenyu Guo and John C. Moulder, "An internet based Telemedicine system," IEEE Transactions, 2000.
[3] Volodymyr Hrusha, Olexandr Osolinskiy, Pasquale Daponte, Domenico Grimaldi, "Distributed Web-based Measurement System," IEEE Workshop on Intelligent Data and Advanced Computing System Technology and Applications, 5-7, 2005.
[4] Lina Zhang, Xinhua Jiang, "Acquisition and Analysis System of the ECG Signal Based on LabVIEW."
[5] Kevin P. Cohen, Willis J. Tompkins, Adrianus Djohan, John G. Webster and Yu H. Hu, "QRS Detection Using a Fuzzy Neural Network."
[6] "Classification of ECG Arrhythmias using Type-2 Fuzzy Clustering Neural Network."
[7] "Robust techniques for remote real time arrhythmias classification system."
[8] S. Zarei Mahmoodabadi, A. Ahmadian, M. D. Abolhassani, J. Alireazie, P. Babyn, "ECG Arrhythmia Detection Using Fuzzy Classifiers."
[9] E. Chowdhury, L. C. Ludeman, "Discrimination of Cardiac Arrhythmias Using a Fuzzy Rule-Based Method."
[10] W. Zong, D. Jiang, "Automated ECG Rhythm Analysis Using Fuzzy Reasoning."
[11] Jodie Usher, Duncan Campbell, Jitu Vohra, Jim Cameron, "Fuzzy Classification of Intra-Cardiac Arrhythmias."


Projected View & Novel Application of Context Based Image Retrieval Techniques
Shivam Agrawal#, Rajeev Singh Chauhan*, Vivek Vyas**
#B.Tech Student, CS Department, *B.Tech Student, CS Department, **M.Tech Student, ECE Department,
Arya College of Engineering and I.T., Kukas, Jaipur,
Rajasthan Technical University, Kota
#shivam.agrawal@live.com, *rajeev0507@hotmail.com and **101.vivek@gmail.com

Abstract— Image searching has been one of the fascinating topics for advanced research since the 1990s. The advancement in computer and network technologies, coupled with relatively cheap high-volume data storage devices, has brought tremendous growth in the amount of digital images; hence the development of pattern recognition has also increased exponentially. Pattern recognition is the act of taking in raw data and classifying it into predefined categories using statistical and empirical methods. Content based image retrieval (CBIR) is one of the widely used applications of pattern recognition for finding images in vast and un-annotated image databases. In CBIR, images are indexed on the basis of low-level features, such as color, texture, and shape, which can automatically be derived from the visual content of the images. The paper discusses the techniques and algorithms that are used to extract these image features from the visual content of the images and the advancement which can be achieved using CBIR. Various similarity measures are used to identify closely associated patterns: these methods compute the distance between the features generated for different patterns, identify the closely related patterns, and return those patterns as the result. This paper also unfolds a novel application using context based image retrieval for searching the detailed description of an image without knowing a single word about it, and proposes algorithms to create such a utility.

Keywords: Context Based Image Retrieval, Image Searching.

INTRODUCTION
The initial techniques used are based on the textual annotation of the images. Using text descriptions, images can be organized by topical or semantic hierarchies to facilitate easy navigation and browsing based on standard Boolean queries. Content Based Image Retrieval is one of the major approaches to image retrieval that has drawn significant attention in the past decade; it uses visual content to search images from a large-scale image database according to the user's interests. Low-level image features such as color, texture, shape and structure are extracted from the images, and relevant images are retrieved based on the similarity of their image features. Examples of some of the prominent systems are QBIC, Photobook, and NETRA. In this paper we discuss the different algorithms used to extract the different features of an image. We also discuss the future advancement of the Context Based Image Retrieval techniques, how it can be beneficial in different fields, and the futuristic approaches to attaining this technique in a more advanced way.

1. Image Retrieval
A recent study of the literature on image indexing and retrieval was conducted based on 100 papers from the Web of Science. Two major research approaches, text-based (description-based) and content-based, were identified. It appears that researchers in the information science community focus on the text-based approach while researchers in computer science focus on the content-based approach. Text-based image retrieval (TBIR) makes use of text descriptors to retrieve relevant images. Some recent studies found that text descriptors such as time, location, events, objects, formats, aboutness of image content, and topical terms are most helpful to users.


The advantage of this approach was that it enabled widely approved text information retrieval systems to be used for visual retrieval systems.

1.1. Content-based image retrieval
In CBIR, the images are indexed by features that are derived directly from the images. The features are always consistent with the image, and they are extracted and analyzed automatically by means of computer processing, instead of manual annotation. Due to the difficulty of automatic object recognition, the information extracted from images in CBIR is rather low level, such as colors, textures, shapes, structure and combinations of the above. A number of representative generic CBIR systems have been developed in the last ten years. These systems have been implemented in different environments; some are Web based while others are GUI-based applications. QBIC, Photobook and NETRA are the most prominent examples.

QBIC was developed at the IBM Almaden Research Centre [1, 2, 3]. It is the first commercial CBIR application and plays an important role in the evolution of CBIR systems. The QBIC system supports the low-level image features of average color, color histogram, color layout, texture and shape. Additionally, users can provide pictures or draw sketches as example images in a query. The visual queries can also be combined with textual keyword predicates.

Photobook [4] was developed at the MIT Media Lab. It is a tool for performing queries on image databases based on image content. It works by comparing features associated with images, not the images themselves. These features are in turn the parameter values of particular models fitted to each image. These models are commonly color, texture, and shape, though Photobook will work with features from any model. Features are compared using one out of a library of matching algorithms that Photobook provides. It is a set of interactive tools for searching and querying images, divided into three specialized systems, namely Appearance Photobook (face images), Texture Photobook, and Shape Photobook, which can also be used in combination. The features are compared by using one of the matching algorithms; these include Euclidean, Mahalanobis, divergence, vector space angle, histogram, Fourier peak, and wavelet tree distances, as well as any linear combination of those previously discussed.

NETRA is a prototype image retrieval system that has been developed at the University of California, Santa Barbara (UCSB) [5, 6]. NETRA supports the features of color, texture, shape, and spatial information of segmented image regions for region-based search. Images are segmented into homogeneous regions. Using the region as the basic unit, users can submit queries based on features that combine regions of multiple images. For example, a user may compose queries such as: retrieve all images that contain regions having the color of a region of image A, the texture of a region of image B, and the shape of a region of image C.

1.1.1 Image features
One of the main foci in CBIR is the means for extracting the features of the images and evaluating the similarity measure between the features. Image features refer to the characteristics which describe the contents of an image. In this paper, image features are confined to visual features that are derived from an image directly. There have been extensive studies of various sorts of visual features. The simplest form of visual feature is based directly on the pixel values of the image. However, these types of visual features are very sensitive to noise and to brightness, hue and saturation changes, and they are not invariant to spatial transformations such as translation and rotation. As a result, CBIR systems that are based on pixel values do not generally have satisfactory results. Much of the research in this area has placed the emphasis on computing useful characteristics from images using image processing and computer vision techniques. Usually, general purpose features in CBIR have included text, color, texture, shape and layout.

Color representations
The color histogram is the standard representation of the color feature in a CBIR system, initially investigated by Swain and Ballard. Histograms of intensity values are used to represent the color distribution. This captures the global chromatic information of an image and is invariant under translation and rotation about the view axis. Despite changes in view, changes in scale, and occlusion, the histogram changes only slightly. A color histogram H(M) of image M is a 1-D discrete function representing the probabilities of occurrence of colors in the image, which is typically defined as:

$H(M) = [h_1, h_2, \ldots, h_n]$, where $h_k = n_k / N$, $k = 1, 2, 3, \ldots, n$   [Equation 1]

where N is the number of pixels in image M and $n_k$ is the number of pixels with color value k. The division normalizes the histogram such that:

$\sum_{k=1}^{n} h_k = 1.0$   [Equation 2]

SIP0401-2
CONFERENCE ON “SIGNAL PROCESSING AND REAL TIME OPERATING SYSTEM (SPRTOS)” MARCH 26-27 2011

= k= 1, 2, 3 …. , n [Equation 1]
Where N is the number of pixels in image M and is the
number of pixels with image value k. The division
G (x, y) = ( ) × exp [- ( ) + 2 jWx]
normalizes the histogram such that:

= 1.0 [Equation 2] [Equation 3]
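For concreteness, Equations 1 and 2 can be sketched in Python/NumPy as follows; the 256-bin quantisation, the helper names and the histogram-intersection comparison are illustrative assumptions, not part of the original formulation:

import numpy as np

def color_histogram(image, n_bins=256):
    """Normalised histogram H(M) of Equation 1: h_k = n_k / N."""
    image = np.asarray(image).astype(int).ravel()      # quantised values in [0, n_bins-1]
    counts = np.bincount(image, minlength=n_bins).astype(float)
    h = counts / image.size                            # divide by N, the number of pixels
    assert abs(h.sum() - 1.0) < 1e-9                   # Equation 2: the bins sum to 1.0
    return h

def histogram_intersection(h1, h2):
    # One common way to compare two images through their histograms.
    return np.minimum(h1, h2).sum()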
Texture representations

Many texture features have been investigated in the past, including the conventional pyramid-structured wavelet transform (PWT) features, tree-structured wavelet transform (TWT) features, the multi-resolution simultaneous autoregressive model (MR-SAR) features and the Gabor wavelet features. Experiments have found that the Gabor features [7, 8] produce the best performance. The computation of the Gabor features is as follows. A two-dimensional Gabor function can be written as

G(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\left[ -\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right) + 2\pi jWx \right]   [Equation 3]

A self-similar filter dictionary can be obtained from the mother Gabor wavelet G(x, y) by appropriate dilations and rotations of Eq. (3):

G_{mn}(x, y) = a^{-m} G(\tilde{x}, \tilde{y}),  a > 1;  m, n integers,

\tilde{x} = (x - hside)\cos(n\pi/K) + (y - wside)\sin(n\pi/K),
\tilde{y} = -(x - hside)\sin(n\pi/K) + (y - wside)\cos(n\pi/K),

where h is the height of the image, w its width, hside = (h-1)/2, wside = (w-1)/2, and K is the number of orientations.

Given an image with luminance I(x, y), the Gabor decomposition is obtained by filtering the luminance with each wavelet and taking the magnitude of the response:

|W_{mn}(x, y)| = |I(x, y) * G_{mn}(x, y)|   [Equation 4]

The mean and standard deviation of the magnitude of the transform coefficients are used to represent the texture feature for classification and retrieval purposes:
\mu_{mn} = \frac{1}{hw}\sum_{x}\sum_{y} |W_{mn}(x, y)|   [Equation 5]

\sigma_{mn} = \sqrt{\frac{1}{hw}\sum_{x}\sum_{y}\left(|W_{mn}(x, y)| - \mu_{mn}\right)^2}   [Equation 6]

The Gabor feature vector is constructed by using \mu_{mn} and \sigma_{mn} as feature components:

f = [\mu_{00}, \sigma_{00}, \mu_{01}, \sigma_{01}, ..., \mu_{(S-1)(K-1)}, \sigma_{(S-1)(K-1)}]

where S is the number of scales and K is the number of orientations.
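A rough sketch of the texture descriptor of Equations 3-6 is given below; it assumes scikit-image's gabor_kernel and a dyadic spacing of the scales, neither of which is specified in the text:

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel

def gabor_features(image, scales=4, orientations=6):
    """Mean and std of |W_mn| for each scale/orientation (Equations 5 and 6)."""
    image = image.astype(float)
    feats = []
    for m in range(scales):
        frequency = 0.25 / (2 ** m)               # assumed dyadic spacing of scales
        for n in range(orientations):
            theta = n * np.pi / orientations      # rotation angle n*pi/K
            kernel = gabor_kernel(frequency, theta=theta)
            real = ndi.convolve(image, np.real(kernel), mode='wrap')
            imag = ndi.convolve(image, np.imag(kernel), mode='wrap')
            magnitude = np.sqrt(real ** 2 + imag ** 2)          # |W_mn(x, y)|
            feats.extend([magnitude.mean(), magnitude.std()])   # mu_mn, sigma_mn
    return np.array(feats)    # feature vector f = [mu_00, sigma_00, mu_01, ...]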
Shape representations

APPLICATION BASED ON CONTEXT BASED IMAGE RETRIEVAL AND WORKING PROCEDURE

One future advancement of CBIR is to develop a platform on which a user uploads an image, a query processor calculates the distance between that image and the images in the database, and the related results are shown according to the closeness (distance) of the images. Suppose, for example, that I am new to Egypt and walking through the streets of Cairo. I see a monument and am eager to know about it, so I capture an image of it and upload it through an application on my mobile phone. The application processes the query image and shows the output in the form of detailed information about that monument.

We can create desktop and mobile applications for this purpose. There are many GPL and closed-licence projects on image retrieval; TinEye and GazoPa are the most famous and effective image-search websites. These projects use different feature-extraction algorithms for content-based image retrieval, but the search results they provide are limited to other, similar images. If we upload an image of a celebrity, we get other similar images of that celebrity, but nothing about the person. Here we give the concept of an application that works as a combination of TinEye and Wikipedia.

To achieve this goal we design our web crawlers so that, whenever they index an image into the database, they also index the data related to that image using the meta tags and keywords obtained by applying several algorithms to the page. A page may contain many words alongside a single image, so we must identify which word best describes that image. To do this we follow the procedure described below:

(A) First, filter out all the non-informative words, such as prepositions and adjectives, from the whole text, and then apply the following rules to assign priorities to the remaining words.
(I) Words in the metadata have higher priority than other words on the page.
(II) Words in the top 3 or 4 lines have higher priority after the filtering.
(III) Words repeated frequently on the page have higher priority.
(IV) Words in bold letters have higher priority.
(B) We now have an image and, from each page, the words that carry the top priority.
(C) A user uploads an image to search for related images and their description.
(D) Context-based image searching is performed to find the related images.
(E) After searching, the keywords are collected along with the related images of the query image.
(F) One more filtering step is applied to find the exact keyword related to the image: the frequency of each word is calculated across the different results.
(G) The top priority is assigned to the word with the highest frequency.
(H) The top-priority word is then looked up on Wikipedia, and the resulting description is shown along with the query image.
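The priority rules (I)-(IV) and the frequency filtering of steps (A)-(H) could be prototyped roughly as below; the stop-word list, the weights and the function names are hypothetical choices for illustration only:

from collections import Counter

STOP_WORDS = {"a", "an", "the", "of", "in", "on", "for", "and", "or", "to"}  # illustrative

def keyword_priorities(page_text, meta_keywords, bold_words, top_lines=4):
    """Assign priority scores to page words using rules (I)-(IV)."""
    words = [w.lower() for w in page_text.split() if w.lower() not in STOP_WORDS]
    head = {w.lower() for line in page_text.splitlines()[:top_lines] for w in line.split()}
    freq = Counter(words)
    scores = {}
    for w in set(words):
        score = freq[w]                                   # rule (III): repetition
        if w in {m.lower() for m in meta_keywords}:
            score += 4                                    # rule (I): metadata words
        if w in head:
            score += 2                                    # rule (II): first 3-4 lines
        if w in {b.lower() for b in bold_words}:
            score += 1                                    # rule (IV): bold words
        scores[w] = score
    return scores

def best_keyword(pages):
    """Steps (F)-(G): pool the top word of each retrieved page, keep the most frequent."""
    top_words = []
    for text, meta, bold in pages:
        scores = keyword_priorities(text, meta, bold)
        top_words.append(max(scores, key=scores.get))
    return Counter(top_words).most_common(1)[0][0]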

CONCLUSION & FUTURE WORK

There are lots of methods for the features extraction in


the Context Based Image Retrieval. We can perform as
many comparison algorithms for more exact search result.
Here we discuss the color, texture and shape
representations in the context based image retrieval. We
also discuss to generate a application using the CBIR
which can play the vital role in the current generation.
There is a lot of space in the future advancement of the
Context Based Image Retrieval. There are lots of
application can be generated which can play a vital role in
different fields. There are some visual abilities which is
absent from the current CBIR & there is a scope to work
on like perceptual organization, similarity between
semantic concepts etc.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the ARYA Development and Research Center, ACEIT, Jaipur.

REFERENCES

[1] M. Flickner, H. Sawhney and W. Niblack, Query by image and video content: the QBIC system, IEEE Computer, September 1995.
[2] J. Hafner, H.S. Sawhney, W. Equitz, M. Flicker and W. Niblack,
Efficient color histogram indexing for quadratic form distance functions,
IEEE Transactions on Pattern Analysis and Machine Intelligence 17(7)
(1995) 729–36.
[3] W. Niblack, R. Barber, W. Equitz, M. Flickner, E. Glasman, D.
Petkovic, P. Yanker, C. Faloutsos and G. Taubin, The QBIC project:
querying images by content using colors, texture and shape. In: W.
Niblack (ed.), SPIE Proceedings Vol. 1908, Storage and Retrieval for
Image and Video Databases, 2–3 February 1993, San Jose, California
(SPIE, San Jose, 1993) 3173–87.
[4] A. Pentland, R. Picard and S. Sclaroff, Photobook: content-based
manipulation of image databases. Storage and Retrieval for Image and
Video Databases II, number 2185, San Jose, CA., February 1994.
[5] W.Y. Ma and B.S. Manjunath, NeTra: a toolbox for navigating large
image databases, Multimedia Systems 7(3) (1999) 184–198.
[6] B.S. Manjunath and W.Y. Ma, Texture features for browsing and
retrieval of image data, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 8(18) (1996) 837–42.
[7] C.C. Chen and C.C. Chen, Filtering methods for texture
discrimination, Pattern Recognition Letters 20(8) (1999) 783–90.
[8] B.S. Manjunath and W.Y. Ma, Texture features for browsing and
retrieval of image data, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 8(18) (1996) 837–42.

Recursive Algorithm and Systolic Architecture for the Discrete Sine Transform

M.N. Murty, Department of Physics, NIST, Berhampur-761008, Orissa, India (mnarayanamurty@rediffmail.com)
Satyabrata Das, Department of Electronics & Communication, NIST, Berhampur-761008, Orissa, India (satyabratadas.m@gmail.com)
S.S. Nayak, Department of Physics, JITM, Paralakhemundi, Orissa, India (sudhansusekharnayak@yahoo.com)
B. Padhy, Department of Physics, Khallikote (Auto) College, Berhampur-760001, Orissa, India (binayak_padhy2@rediffmail.com)
S.N. Panda, Department of Physics, Gunupur College, Gunupur, Orissa, India

Abstract - In this paper, a novel recursive algorithm and a systolic architecture for realising the discrete sine transform (DST) are presented. By using some mathematical techniques, a DST of any general length can be converted into a recursive equation. The recursive algorithm applies to arbitrary transform lengths and is appropriate for VLSI implementation.

Keywords - discrete sine transform; discrete cosine transform; recursive; systolic architecture

I. INTRODUCTION

The discrete sine transform (DST) was first introduced to signal processing by Jain [1], and several versions of this original DST were later developed by Kekre et al. [2], Jain [3] and Wang et al. [4]. There exist four even DSTs and four odd DSTs, according to whether they are an even or an odd transform [5]. Ever since the introduction of the first version of the DST, the different DSTs have found wide application in several areas of digital signal processing (DSP), such as image processing [1,6,7], adaptive digital filtering [8] and interpolation [9]. The performance of the DST is comparable to that of the discrete cosine transform (DCT), and it may therefore be considered a viable alternative to the DCT. Yip and Rao [10] have proven that for large sequence lengths (N >= 32) and low correlation coefficients (rho < 0.6), the DST performs even better than the DCT.

In this paper, a novel algorithm to convert the DST into a recursive form and a systolic architecture for parallel computation of the DST are presented. The advantage of this algorithm is its regular structure and parallelism, which make it suitable for implementation using VLSI techniques.

The rest of the paper is organised as follows. The recursive algorithm for the DST is presented in Section II. The comparison of our results with other research works is presented in Section III. The systolic architecture for computation of the DST is presented in Section IV. Finally, we conclude the paper in Section V.
II. THE PROPOSED RECURSIVE ALGORITHM FOR DST

The DST of a sequence {x(n), n = 1, 2, 3, ..., N} can be written as

X(k) = \sum_{n=1}^{N} x(n) \sin\left[\frac{\pi(2n-1)k}{2N}\right],  k = 1, 2, 3, ..., N.   (1)

Let z = k\pi/N. Then

X(k) = \sum_{n=1}^{N} x(n)\left[\sin(nz)\cos\frac{z}{2} - \cos(nz)\sin\frac{z}{2}\right].   (2)

A time-recursive kernel V_m for the DST is introduced as

V_m \sin z = \sum_{n=m}^{N} x(n)\sin[(n-m+1)z].   (3)

Splitting off the n = m term and using \sin[(n-m+1)z] = 2\cos z\,\sin[(n-m)z] - \sin[(n-m-1)z] gives

V_m \sin z = x(m)\sin z + 2\cos z\, V_{m+1}\sin z - V_{m+2}\sin z,

and hence

V_m = x(m) + 2\cos z\, V_{m+1} - V_{m+2},  for m = 1, 2, ..., N, with V_m = 0 for m > N.   (4)

The time-recursive transfer function for X(k) is obtained by multiplying (2) by \sin z and using \sin z = 2\sin(z/2)\cos(z/2):

X(k)\sin z = \sin\frac{z}{2}\sum_{n=1}^{N} x(n)\left[\sin(nz) + \sin((n-1)z)\right] = \sin\frac{z}{2}\,(V_1 + V_2)\sin z,

so that, using (3),

X(k) = \sin\frac{k\pi}{2N}\,(V_1 + V_2).   (5)

Equations (4) and (5) show that no complex multiplication is required during the recursive computation. Equation (5) is a discrete-time recursive transfer function of the finite-duration input sequence x(n), n = N, N-1, ..., 2, 1. As a consequence, X(k) is obtained as the output of a finite impulse response system. Fig. 1 shows the recursive structure, with the input sequence in reverse order, for the realisation of X(k).
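A minimal NumPy sketch of the recursion, with function names of our own choosing, is given below; it evaluates Eq. (4) with the input applied in reverse order, applies Eq. (5), and checks the result against a direct evaluation of Eq. (1):

import numpy as np

def dst_recursive(x):
    """X(k) via V_m = x(m) + 2*cos(z)*V_{m+1} - V_{m+2} (Eq. 4) and Eq. (5)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.zeros(N)
    for k in range(1, N + 1):
        z = k * np.pi / N
        v1, v2 = 0.0, 0.0              # V_{m+1}, V_{m+2}; V_m = 0 for m > N
        for m in range(N, 0, -1):      # input processed in reverse order
            v0 = x[m - 1] + 2.0 * np.cos(z) * v1 - v2
            v1, v2 = v0, v1
        X[k - 1] = np.sin(k * np.pi / (2 * N)) * (v1 + v2)   # Eq. (5)
    return X

def dst_direct(x):
    """Reference evaluation of Eq. (1)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(1, N + 1)
    k = np.arange(1, N + 1)[:, None]
    return (np.sin(np.pi * (2 * n - 1) * k / (2 * N)) * x).sum(axis=1)

x = np.random.rand(8)
print(np.allclose(dst_recursive(x), dst_direct(x)))   # True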

Figure 1. Recursive structure for computing the DST. The input sequence x(n), x(n-1), ..., x(1) is applied in reverse order; the structure uses two unit delays Z^-1, a feedback coefficient 2cos(k*pi/N), a tap of -1, and an output multiplier sin(k*pi/2N) that produces X(k).
III. COMPARISONS WITH RELATED WORKS

The proposed approach requires N multiplications per point and (2N-2) additions per point for the realisation of an N-length DST.

In Tables I and II, the numbers of multipliers and adders in the proposed algorithm are compared with the corresponding parameters of the other methods. Table III compares the computational complexity of the proposed algorithm with that of other algorithms reported in the related research works.
TABLE I
COMPARISON OF THE NUMBER OF MULTIPLIERS REQUIRED BY DIFFERENT ALGORITHMS

N [11] [13] [17] [19,20,23] [21] [12] [26] [22] Proposed


4 6 5 5 4 11 2 5 4 4
8 16 13 13 12 19 8 13 8 8
16 44 35 33 32 36 30 29 16 16
32 116 91 81 80 68 54 61 32 32
64 292 227 193 192 132 130 125 64 64

TABLE II
COMPARISON OF THE NUMBER OF ADDERS REQUIRED BY DIFFERENT ALGORITHMS

N [17] [13] [19,20,23] [11] [12] [21] [26] [22] Proposed


4 9 9 9 8 4 11 14 7 6
8 35 29 29 26 22 26 26 15 14
16 95 83 81 74 62 58 50 31 30
32 251 219 209 194 166 122 98 63 62
64 615 547 513 482 422 250 194 127 126

TABLE III
COMPUTATION COMPLEXITIES

Algorithm                     No. of multiplications            No. of additions
Proposed algorithm N 2N-2
[13] (3/4) N log2N - N + 3 (7/4) N log2N - 2N + 3
[14,16,20,23] (1/2) N log2N (3/2) N log2N - N + 1
[15,24,25] N log2N /2 + 1 3 N log2 N / 2 -N +1
[18] (1/2) N log2N + (1/4) N-1 (3/2) N log2N + (1/2) N-2
[21] 2(N+3)(N-1) / N 2(2N-1)(N-1) / N
[22] (N+1)(N-1) / N (2N+1)(N-1) / N
[26] if N is even 2N-3 3N+2
[26] if N is odd 2(N-1) 3N+4
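As a worked reading of Table III, the closed-form counts can be evaluated directly; the short script below (our own illustration, not part of the paper) tabulates a few of the listed expressions for N = 64:

import math

N = 64
log2N = math.log2(N)
rows = {
    "Proposed":      (N,                         2 * N - 2),
    "[13]":          (0.75 * N * log2N - N + 3,  1.75 * N * log2N - 2 * N + 3),
    "[14,16,20,23]": (0.5 * N * log2N,           1.5 * N * log2N - N + 1),
    "[22]":          ((N + 1) * (N - 1) / N,     (2 * N + 1) * (N - 1) / N),
    "[26] (N even)": (2 * N - 3,                 3 * N + 2),
}
for name, (mults, adds) in rows.items():
    print(f"{name:>14}: {mults:8.1f} multiplications, {adds:8.1f} additions")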

IV. SYSTOLIC ARCHITECTURE

The structure of the proposed linear systolic array for computation of the N-point DST is shown in Fig. 2. It consists of (N+1) locally connected processing elements (PEs), of which the first N PEs are identical. The recurrence relation given by (4) is implemented in the first N PEs, while the last PE computes the DST components. The function of each of the first N PEs is shown in Fig. 3 and that of the last PE in Fig. 4. One sample of the input data is fed to each PE, staggered by one time-step with respect to the input of the previous PE, in reverse order; i.e., the i-th input sample is fed to the (N+1-i)-th PE in the (N+1-i)-th time-step. The first output is obtained after (N+1) time-steps and the remaining (N-1) outputs are obtained in the subsequent (N-1) time-steps. Successive sets of N-point DSTs are thus obtained every N time-steps. Each PE of the linear array comprises one multiplier and two adders, while the last PE contains one adder and one multiplier. The duration of the cycle period is T = T_M + 2T_A, where T_M and T_A are, respectively, the times taken to perform one multiplication and one addition in a PE. This architecture requires N multiplications per point and (2N-2) additions per point for the realisation of an N-point DST. The hardware and time complexities of the proposed systolic realisation, along with those of the existing structures [27]-[31], are listed in Table IV.
Figure 2. The linear systolic array for the N-point DST: (N+1) locally connected PEs. The reversed input samples x(n), x(n-1), ..., x(1) enter the first PE together with zero-initialised V1 and V2 values; the (N+1)-th PE produces the output X(k). The common coefficient is 2cos z = 2cos(k*pi/N), with k -> k+1 in each time-step.
Figure 3. Function of each of the first N PEs of the linear array. With input sample x_in and incoming values (a_in, b_in, c_in), each PE computes a_out = a_in, b_out = x_in + a_in*b_in - c_in, c_out = b_in.
Figure 4. Function of the (N+1)-th PE of the linear array: V_out = (V1_in + V2_in)*S, where S = sin(k*pi/2N); k = 1 for the first (N+1) time-steps, then k -> k+1 in each time-step.
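The per-PE behaviour of Figs. 3 and 4 can be mimicked in software as a check of the dataflow; the sketch below is purely behavioural (no timing or staggering is modelled) and uses names of our own choosing:

import numpy as np

def pe_first(xin, ain, bin_, cin):
    """One of the first N PEs (Fig. 3)."""
    return ain, xin + ain * bin_ - cin, bin_      # (a_out, b_out, c_out)

def pe_last(v1_in, v2_in, s):
    """The (N+1)-th PE (Fig. 4)."""
    return (v1_in + v2_in) * s

def systolic_dst_single_k(x, k):
    """Pass the reversed input through the PE chain for a single index k."""
    N = len(x)
    a = 2.0 * np.cos(k * np.pi / N)               # broadcast coefficient 2*cos z
    b, c = 0.0, 0.0                               # V-values entering the chain are zero
    for m in range(N, 0, -1):                     # i-th sample reaches the (N+1-i)-th PE
        a, b, c = pe_first(x[m - 1], a, b, c)
    return pe_last(b, c, np.sin(k * np.pi / (2 * N)))   # equals X(k)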

TABLE IV
HARDWARE - AND TIME-COMPLEXITIES OF PROPOSED STRUCTURE AND THE EXISTING SYSTOLIC STRUCTURES
FOR THE DST / DCT

Structures                  Multipliers        Adders        Cycle-Time (T)        Average Computation-Time
Pan and Park [27] N 2N TM + TA NT/2

Fang and Wu [28] N/2 + 3 N+3 TM + 2TA NT

Chiper et al. [29] N-1 N+1 TM + TA (N-1) T/2

Meher [30] N/2 - 1 N/2 + 9 2(TM + TA) (N/4-1) T

Meher [31] N/2 + 3 N/2 +5 TM + TA (N/2-1) T

Proposed N 2N - 2 TM + 2TA (N+1) T

V. CONCLUSION Eng., vol. BME-32, pp 185-192, March


1985.
[7] K. Rose, A. Heiman and I. Dinstein,
In this paper, we proposed a “DCT/DST alternate transform image
recursive algorithm, which is most suitable coding,” Proc. BLOBE COM 87, vol. I,
for parallel computation of the DST. It pp. 426-430, November 1987.
involves significantly less number of
multipliers and adders compared with that of [8] J.L. Wang and Z.Q. Ding, “Discrete
the existing structures. The proposed systolic sine transform domain LMS adaptive
architecture is parallel, simple and regular, filtering,” Proc. Int. Conf. Acoust.,
which is suitable for VLSI implementation. Speech, Signal Processing, pp 260-263,
1985.
[9] Z. Wang and L. Wang, “Interpolation
REFERENCES using the fast discrete sine transform,”
Signal Processing, vol. 26, pp 131-137,
1992.
[1] A.K. Jain, “A fast Karhunen-Loeve
transform for a class of random [10] P. Yip and K.R. Rao, “On the
processes,” IEEE Trans. Commun., vol. computation and the effectiveness of
COM-24, pp 1023-1029, September discrete sine transform”, Comput.
1976. Electron., vol. 7, pp. 45-55, 1980.

[2] H.B. Kekre and J.K. Solanka, [11] W.H. Chen, C.H. Smith and S.C.
“Comparative performance of various Fralick, “A fast computational
trigonometric unitary transforms for algorithm for the discrete cosine
transform image coding,” Int. J. transform”, IEEE Trans.
Electron., vol. 44, pp 305-315, 1978. Communicat., vol. COM-25, no. 9,
pp. 1004-1009, Sep. 1977.
[3] A.K. Jain, “A sinusoidal family of
unitary transforms,” IEEE Trans. Patt. [12] P. Yip and K.R. Rao, “A fast
Anal. Machine Intell., vol. PAMI-I, pp computational algorithm for the
356-365, September 1979. discrete sine transform”, IEEE Trans.
Commun., vol. COM-28, pp. 304-
[4] Z. Wang and B. Hunt, “The discrete W 307, Feb. 1980.
transform,” Applied Math Computat.,
vol. 16, pp 19-48, January 1985. [13] Z. Wang, “Fast algorithms for the
discrete W transform and for the
[5] S. Poornachandra, V. Ravichandran and discrete Fourier transform”, IEEE
N.Kumarvel, “Mapping of discrete Trans. Acoust., Speech, Signal
cosine transform (DCT) and discrete Processing, vol. ASSP-32, pp. 803-
sine transform (DST) based on 816, Aug. 1984.
symmetries” IETE Journal of Research,
Vol. 49, no. 1, pp 35-42, January- [14] P. Yip and K.R. Rao, “Fast
February 2003. decimation-in-time algorithms for a
family of discrete sine and cosine
[6] S. Cheng, “Application of the sine transforms”, Circuits, Syst., Signal
transform method in time of flight Processing, vol. 3, pp. 387-408,
positron emission image reconstruction 1984.
algorithms,” IEEE Trans. BIOMED.

[15] H.S. Hou, “A fast recursive Electronics Letters, vol. 30, no. 3,
algorithm for computing the discrete Feb. 1994.
cosine transform”, IEEE Trans.
Acoust., Speech, Signal Processing, [23] Peizong Lee and Fang-Yu Huang,
vol. ASSP-35, no. 10, pp. 1455- “Restructured recursive DCT and
1461, Oct. 1987. DST algorithms”, IEEE Transactions
on Signal Processing,” vol. 42, no.
[16] O. Ersoy and N.C. Hu, “A unified 7, pp. 1600-1609, July 1994.
approach to the fast computation of
all discrete trigonometric
transforms,” in Proc. IEEE Int. Conf. [24] V. Britanak, “On the discrete cosine
Acoust., Speech, Signal Processing, computation”, Signal Process., vol.
pp. 1843-1846, 1987. 40, no. 2-3, pp. 183-194, 1994.
[17] H.S. Malvar, “Corrections to fast [25] C.W. Kok, “Fast algorithm for
computation of the discrete cosine computing discrete cosine
transform and the discrete hartley transform”, IEEE Trans. Signal
transform,” IEEE Trans. Acoust., Process., vol. 45, pp. 757-760, Mar.
Speech, Signal Processing, vol. 36, 1997.
no. 4, pp. 610-612, Apr. 1988.
[26] V. Kober, “Fast recursive algorithm
[18] P. Yip and K.R. Rao, “The for sliding discrete sine transform”,
decimation-in-frequency algorithms Electronics Letters, vol. 38, no. 25,
for a family of discrete sine and pp. 1747-1748, Dec. 2002.
cosine transforms”, Circuits, Syst.,
Signal Processing, vol. 7, no. 1, pp. [27] S.B. Pan and R.H. Park, “Unified
3-19, 1988. systolic array for computation of
DCT / DST / DHT”, IEEE Trans.
[19] A. Gupta and K.R. Rao, “A fast Circuits Syst. Video Technol., vol. 7,
recursive algorithm for the discrete no. 2, pp.413-419, April 1997.
sine transform” IEEE Transactions
on Acoustics, Speech and Signal [28] W.H. Fang and M.L. Wu, “Unified
Processing, vol. 38, no. 3, pp. 553- fully-pipelined implementations of
557, March, 1990. one- and two-dimensional real
discrete trigonometric trnasforms”,
[20] Z. Cvetković and M.V. Popović, IEICE Trans. Fund. Electron.
“New fast recursive algorithms for Commun. Comput. Sci., vol. E82-A,
the computation of discrete cosine no. 10, pp. 2219-2230, Oct. 1999.
and sine transforms”, IEEE Trans.
Signal Processing, vol. 40, no. 8, pp. [29] D.F. Chiper, M.N.S. Swamy, M.O.
2083-2086, Aug. 1992. Ahmad, and T. Stouraitis, “A systolic
array architecture for the discrete
[21] J. Caranis, “A VLSI architecture for sine transform”, IEEE trans. Signal
the real time computation of discrete Process., vol. 50, no. 9, pp. 2347 -
trigonometric transform”, J. VLSI 2354, Sept. 2002.
Signal Process., no. 5, pp. 95-104,
1993. [30] P.K. Meher, “A new convolutional
formulation of the DFT and efficient
[22] L.P. Chau and W.C. Siu, “Recursive systolic implementation”, in Proc.
algorithm for the discrete cosine IEEE Int. Region 10 Conf.
transform with general lengths”, (TENCON’05), pp. 1462-1466, Nov.
2005.

[31] P.K. Meher, “Systolic designs for


DCT using a low-complexity
concurrent convolutional
formulation”, IEEE Trans. Circuits &
Systems for Video Technology, vol
16, no. 9, pp. 1041-1050, Sept. 2006.

Multiscale Edge Detection Based on Wavelet Transform


Divesh Kumar, Dr. Yaduvir Singh
Department of Electrical and Instrumentation Engineering
Thapar University, Patiala, Punjab
kambojdivesh@gmail.com, dryaduvirsingh@gmail.com

Abstract: This paper presents a new approach to edge detection using wavelet transforms. First, we briefly introduce the development of wavelet analysis. Then, some major classical edge detectors are reviewed and interpreted in terms of continuous wavelet transforms. The classical edge detectors work well with high-quality pictures, but are often not good enough for noisy pictures because they cannot distinguish edges of different significance. The proposed wavelet-based edge detection algorithm combines the coefficients of the wavelet transform over a series of scales and significantly improves the results. Finally, a cascade algorithm is developed to implement the wavelet-based edge detector.

Keywords: wavelet transform, Canny edge detector, Sobel edge detector, noise.

INTRODUCTION

An edge in an image is a contour across which the brightness of the image changes abruptly. In image processing, an edge is often interpreted as one class of singularities. In a function, singularities can be characterized easily as discontinuities where the gradient approaches infinity. However, image data is discrete, so edges in an image are often defined as the local maxima of the gradient. This is the definition we will use here.

Edge detection is an important task in image processing. It is a main tool in pattern recognition, image segmentation, and scene analysis. An edge detector is basically a high-pass filter that can be applied to extract the edge points in an image. This topic has attracted many researchers and many achievements have been made [14]-[20]. In this paper, we explain the mechanism of edge detectors from the point of view of wavelets and develop a way to construct edge detection filters using wavelet transforms.

Many classical edge detectors have been developed over time. They are based on the principle of matching local image segments with specific edge patterns. The edge detection is realized by convolution with a set of directional derivative masks [21]. The popular edge detection operators are the Roberts, Sobel, Prewitt, Frei-Chen, and Laplacian operators ([17], [18], [21], [22]). They are all defined on a 3-by-3 pattern grid, so they are efficient and easy to apply. In certain situations where the edges are highly directional, a particular edge detector may work especially well because its patterns fit the edges better.

Noise and its influence on edge detection

However, classical edge detectors usually fail to handle images with strong noise,
as shown in Fig. 1.1. Noise is an unpredictable contamination of the original image; it is usually introduced by the transmission or compression of the image.

Fig. 1.1: Impact of noise on edge detection. (a) Lena image; (b) edges using Canny; (c) image with noise; (d) edges from the image with noise.

There are various kinds of noise, but the two most widely studied kinds are white noise and "salt and pepper" noise. Fig. 1.1 shows the dramatic difference between the results of edge detection on two similar images, the latter affected by some white noise.

Review of Classical Edge Detectors

Classical edge detectors use a pre-defined group of edge patterns to match image segments of a fixed size. 2-D discrete convolutions are used to find the correlations between the pre-defined edge patterns and the sampled image segment:

(f * m)(x, y) = \sum_{i}\sum_{j} f(i, j)\, m(x-i, y-j),   (1.1)

where f is the image and m is the edge pattern, with m(i, j) = 0 if (i, j) is not in the grid. These patterns are represented as filters, which are vectors (1-D) or matrices (2-D). For fast performance, the dimensions of these filters are usually 1x3 (1-D) or 3x3 (2-D). From the point of view of functions, the filters are discrete operators of directional derivatives. Instead of finding the local maxima of the gradient, we set a threshold and consider those points with gradient above the threshold as edge points. Given the source image f(x, y), the edge image E(x, y) is given by

E(x, y) = \sqrt{[(f * s)(x, y)]^2 + [(f * t)(x, y)]^2},   (1.2)

thresholded as described above, where s and t are two filters of different directions.

Roberts edge detector

The edge patterns are shown in Fig. 1.2.

Fig. 1.2: Edge patterns for the Roberts edge detector: (a) s; (b) t.

These filters have the shortest support, so the position of the edges is more accurate. On the other hand, the short support of the filters makes them very vulnerable to noise.
The edge pattern of this edge detector makes it especially sensitive to edges with a slope around π/4. Some computer vision programs use the Roberts edge detector to recognize the edges of roads.

Prewitt edge detector

The edge patterns are shown in Fig. 1.3.

Fig. 1.3: Edge patterns for the Prewitt and Sobel edge detectors: (a) s; (b) t.

These filters have longer support. They differentiate in one direction and average in the other direction, so the edge detector is less vulnerable to noise. However, the position of the edges might be altered by the averaging operation.

Sobel edge detector

The edge patterns are similar to those of the Prewitt edge detector, as shown in Fig. 1.3. These filters are similar to the Prewitt filters, but the averaging operator is more like a Gaussian, which makes the detector better at removing some white noise.
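As an illustration of Equations (1.1) and (1.2), a gradient-magnitude edge map with the Prewitt or Sobel masks can be sketched as follows; the threshold value is an arbitrary assumption:

import numpy as np
from scipy import ndimage as ndi

# 3x3 directional masks (Prewitt and Sobel); the second direction is the transpose.
PREWITT_S = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
SOBEL_S   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def edge_map(image, s=SOBEL_S, threshold=50.0):
    """Edge image E(x, y) from two directional filters s and t = s.T (Eq. 1.2)."""
    image = image.astype(float)
    gx = ndi.convolve(image, s)            # correlation with the first pattern
    gy = ndi.convolve(image, s.T)          # the orthogonal direction
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold           # thresholded gradient, as in the text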
Frei-Chen edge detector

A 3x3 sub-image b of an image f may be thought of as a vector in R^9. Let V denote the vector space of 3x3 sub-images, and let B_v be an orthogonal basis for V, as used by the Frei-Chen method. The subspace E of V spanned by the sub-images v1, v2, v3 and v4 is called the edge subspace of V. The Frei-Chen edge detection method bases its determination of edge points on the size of the angle between the sub-image b and its projection onto the edge subspace:

\theta_E = \cos^{-1}\sqrt{\frac{\sum_{i=1}^{4}(b \cdot v_i)^2}{\sum_{i=1}^{9}(b \cdot v_i)^2}}.   (1.3)

The edge patterns are shown in Fig. 1.4.
Fig. 1.4: Edge patterns for the Frei-Chen edge detector: (a) v1; (b) v2; (c) v3; (d) v4; (e) v5; (f) v6; (g) v7; (h) v8; (i) v9.

As shown in the above patterns, the sub-images in the edge space are typical edge patterns with different directions; the other sub-images resemble lines and blank space. Therefore, the angle θ_E is small when the sub-image contains edge-like elements, and large otherwise.

Canny edge detection

Canny edge detection [4] is an important step towards mathematically solving edge detection problems. This edge detection method is optimal for step edges corrupted by white noise. Canny used three criteria to design his edge detector. The first requirement is reliable detection of edges, with a low probability of missing true edges and a low probability of detecting false edges. Second, the detected edges should be close to the true location of the edge. Lastly, there should be only one response to a single edge. To quantify these criteria, the following functions are defined:

SNR(f) = \frac{A\left|\int_{-W}^{0} f(x)\,dx\right|}{n_0\sqrt{\int_{-W}^{W} f^2(x)\,dx}},   Loc(f) = \frac{A\,|f'(0)|}{n_0\sqrt{\int_{-W}^{W} f'^2(x)\,dx}},

where A is the amplitude of the signal and n_0^2 is the variance of the noise. SNR(f) defines the signal-to-noise ratio and Loc(f) defines the localization of the filter f(x). Now, by scaling f to f_s(x) = f(x/s), we get the following "uncertainty principle":

SNR(f_s)\,Loc(f_s) = SNR(f)\,Loc(f).

That is, increasing the filter size increases the signal-to-noise ratio but decreases the localization by the same factor. This suggests maximizing the product of the two, so the objective function is defined as

maximize SNR(f)\,Loc(f),   (1.4)

where f(x) is the filter for edge detection. The optimal filter derived from these requirements can be approximated by the first derivative of the Gaussian. The choice of the standard deviation σ of the Gaussian filter depends on the size, or scale, of the objects contained in the image. For images with objects of multiple or unknown sizes, one approach is to use Canny detectors with different σ values; the outputs of the different Canny filters are combined to form the final edge image.
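The multi-scale use of the Canny detector described above can be sketched as follows; combining the per-scale outputs with a logical OR is our assumption, since the text only states that the outputs are combined:

import numpy as np
from skimage.feature import canny

def multiscale_canny(image, sigmas=(1.0, 2.0, 4.0)):
    """Run the Canny detector at several Gaussian scales and merge the edge maps."""
    image = image.astype(float)
    edges = np.zeros(image.shape, dtype=bool)
    for sigma in sigmas:
        edges |= canny(image, sigma=sigma)   # one edge map per value of sigma
    return edges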

Development of wavelet analysis

The concept of wavelet analysis has been developed since the late 1980's. However, its idea can be traced back to the Littlewood-Paley technique and Calderón-Zygmund theory [25] in harmonic analysis. Wavelet analysis is a powerful tool for time-frequency analysis. Fourier analysis is also a good tool for frequency analysis, but it can only provide global frequency information, which is independent of time. Hence, with Fourier analysis it is impossible to describe the local properties of functions in terms of their spectral properties, which can be viewed as an expression of the Heisenberg uncertainty principle [13].
In many applied areas like digital signal processing, time-frequency analysis is critical: we want to know the frequency properties of a function in a local time interval. Engineers and mathematicians developed analytic methods adapted to these problems, thereby avoiding the inherent difficulties of classical Fourier analysis. For this purpose, Dennis Gabor introduced a "sliding-window" technique. He used a Gaussian function g as a "window" function and then calculated the Fourier transform of a function in the sliding window; the analyzing function is g_{t,\omega}(x) = g(x - t)e^{i\omega x}. The Gabor transform is useful for time-frequency analysis. It was later generalized to the windowed Fourier transform, in which g is replaced by a "time-local" function called the window function. However, this analyzing function has the disadvantage that the spatial resolution is limited by the fixed size of the Gaussian envelope [13]. In 1985, Yves Meyer ([23], [24]) discovered that one could obtain orthonormal bases for L2(R) of the type \psi_{j,k}(x) = 2^{j/2}\psi(2^{j}x - k), j, k \in Z, and that the expression f = \sum_{j,k}\langle f, \psi_{j,k}\rangle\psi_{j,k} for decomposing a function into these orthonormal wavelets converges in many function spaces. The most preeminent books on wavelets are those of Meyer ([23], [24]) and Daubechies. Meyer focuses on mathematical applications of wavelet theory in harmonic analysis; Daubechies gives a thorough presentation of techniques for constructing wavelet bases with desired properties, along with a variety of methods for mathematical signal analysis [14]. A particular example of an orthonormal wavelet system was introduced by Alfred Haar. However, the Haar wavelets are discontinuous and therefore poorly localized in frequency. Stéphane Mallat made a decisive step in the theory of wavelets in 1987 when he proposed a fast algorithm for the computation of wavelet coefficients, based on pyramidal schemes that decompose signals into subbands. These techniques can be traced back to the 1970s, when they were developed to reduce quantization noise. The framework that unifies these algorithms and the theory of wavelets is the concept of a multi-resolution analysis (MRA). An MRA is an increasing sequence of closed, nested subspaces {V_j}, j in Z, that tends to L2(R) as j increases; V_j is obtained from V_{j+1} by a dilation of factor 2, and V_0 is spanned by a function φ that satisfies

\varphi(x) = \sum_{k} c_k\,\varphi(2x - k).   (1.6)

Equation (1.6) is called the "two-scale equation", and it plays an essential role in the theory of wavelet bases.

Edge detector using wavelets

Now that we have talked briefly about the development of edge detection techniques and of wavelet theory, we next discuss how they are related. Edges in images can be mathematically defined as local singularities. Until recently, the Fourier transform was the main mathematical tool for analyzing singularities. However, the Fourier transform is global and not well adapted to local singularities; it is hard to find the location and spatial distribution of singularities with Fourier transforms. Wavelet analysis is a local analysis, especially suitable for time-frequency analysis [1], which is essential for singularity detection. This was a major motivation for the study of the wavelet transform in mathematics and in applied domains. With the growth of wavelet theory, wavelet transforms have been found to be remarkable mathematical tools for analyzing singularities, including edges, and further, for detecting them effectively. This idea is similar to that of John Canny [4]: the Canny approach selects a Gaussian as the smoothing function θ, while the wavelet-based approach chooses a wavelet function for θ. Mallat, Hwang, and Zhong ([5], [6]) proved that the maxima of the wavelet transform modulus can detect the location of irregular structures, and a numerical procedure to calculate their Lipschitz exponents has been provided. One- and two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. The wavelet transform characterizes the local regularity of signals by decomposing them into elementary building blocks that are well localized both in space and frequency. This not only explains the underlying mechanism of classical edge detectors, but also indicates a way of constructing optimal edge detectors under specific working conditions.

Results:

Multiscale edge detection
Wavelet filters of large scales are more effective for removing noise, but at the same time they increase the uncertainty in the location of edges. Wavelet filters of small scales preserve the exact location of edges, but cannot distinguish between noise and real edges. We can use the coefficients of the wavelet transform across scales to measure the local Lipschitz regularity: as the scale increases, the coefficients of the wavelet transform are likely to increase where the Lipschitz regularity is positive and to decrease where the Lipschitz regularity is negative. Locations with lower Lipschitz regularity are more likely to be details and noise. As the scale increases, the coefficients of the wavelet transform increase for step edges but decrease for Dirac and fractal edges. So we can use a larger-scale wavelet at positions where the wavelet transform decreases rapidly across scales, to remove the effect of noise, and a smaller-scale wavelet at positions where the wavelet transform decreases slowly across scales, to preserve the precise position of the edges. Using the cascade algorithm, we can observe the change of the wavelet transform coefficients between adjacent scales and distinguish different kinds of edges. We then keep the scales small for locations with positive Lipschitz regularity and increase the scales for locations with negative Lipschitz regularity. Fig. 1.5 shows that for an image without noise the result of our method is similar to that of Canny's edge detection. For images with white noise, in Figs. 1.6-1.10, our method gives more continuous and precise edges. Table 1 shows that the SNR of the edges obtained by the multiscale wavelet transform is significantly higher than that of the others.

Fig. 1.5: Edge detection for the Lena image: (a) the Lena image; (b) edges by the Canny edge detector; (c) edges by the multiscale edge detection using the wavelet transform.

Fig. 1.6: Edge detection for a block image with noise: (a) a block image (SNR = 10 dB); (b) edges by the Sobel edge detector; (c) edges by Canny edge
detection with default variance; (d) edges by Canny edge detection with adjusted variance; (e) edges by the multiscale edge detection using the wavelet transform.
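A loose sketch of this scale-adaptive idea is given below; it approximates the wavelet modulus at scale s by a derivative-of-Gaussian gradient magnitude and keeps only responses that persist across scales. The persistence test and the constants are illustrative choices, not the authors' algorithm:

import numpy as np
from scipy import ndimage as ndi

def multiscale_edges(image, scales=(1, 2, 4), k=2.0):
    """Keep fine-scale gradient maxima whose modulus does not collapse at coarser
    scales; rapidly decaying responses behave like noise/Dirac singularities with
    negative Lipschitz exponents and are discarded."""
    image = image.astype(float)
    moduli = [ndi.gaussian_gradient_magnitude(image, sigma=s) for s in scales]
    fine, coarse = moduli[0], moduli[-1]
    strong = fine > k * fine.mean()          # candidate edges at the finest scale
    persistent = coarse > 0.5 * fine         # modulus survives the increase in scale
    return strong & persistent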

(c) (d)
Fig. 1.8: Edge detection for a bridge image with
noise: (a) Bridge image (SNR=30db); (b) Edges by
the Sobel edge detector; (c) Edges by Canny edge
(a) (b) detection with adjusted variance; (d) Edges by
multi-level edge detection using wavelet

(c) (d)
Fig. 1.7: Edge detection for a Lena image with
noise: (a) Lena image (SNR=30db); (b) Edges by
the Sobel edge detector; (c) Edges by Canny edge (a) (b)
detection with adjusted variance; (d) Edges by
multi-level edge detection using wavelets

Table 1: False rate of the detected edges

(c) (d)
Fig. 1.9: Edge detection for a pepper image with
noise: (a) Pepper image (SNR=10db); (b) Edges by
the Sobel edge detector; (c) Edges by Canny edge
detection with adjusted variance; (d) Edges by
multi-level edge detection using wavelet

(a) (b) (a) (b)

[5] S. Mallat, S. Zhong, 1992, “Characterization of


signals from multiscale edges,” IEEE Trans. Pattern
Anal. Machine Intell., vol.14, no.7, pp. 710-732.
[6]Acharyya, M., Kundu, M.K., 2001. Wavelet-based
texture segmentation of remotely sensed images. IEEE
11th Internat. Conf. Image Anal. Process, 69–74.
[7] Xiao, D., Ohya, J., “CONTRAST ENHANCEMENT
OF COLOR IMAGES BASED ON WAVELET
TRANSFORM AND HUMAN VISUAL SYSTEM”,
international conference GRAOPHICS AND
(c) (d) VISUALIZATIO IN ENGINEERING, Florida, USA,
2007.
Fig. 1.10: Edge detection for a wheel image with [8] Scharcanski, J., Jung, C., R., Clarke, R. T., “Adaptive
noise: (a) Wheel image (SNR=10db); (b) Edges by Image Denoising Using Scale and Space Consistency”,
IEEE TRANSACTIONS ON IMAGE PROCESSING,
the Sobel edge detector; (c) Edges by Canny edge VOL. 11, NO. 9, SEPTEMBER 2002.
detection with adjusted variance; (d) Edges by [9] Mallat S. A.: Theory for Multiresolution Signal
Decomposition: The Wavelet Representation. IEEE
multi-level edge detection using wavelet Transactions on Pattern Analysis and Machine
Intelligence, 11(7), 674–693.
[10] Prieto M. S., Allen A. R.: A Similarity Metric for
Conclusion & Future scope Edge Images. IEEE Transactions on Pattern Analysis and
In this work we have described an Machine Intelligence, 25(10), 1265–1273.
approach for edge detection using wavelet [11] Drori I., Lischinski D.: Fast Multiresolution Image
Operations in the Wavelet Domain. IEEE Transactions
transform. The wavelet edge detector produces
on Visualization and Computer Graphics, 9(3), 395–411.
better edges over classical edge detectors. Classical [12] I. Drori D. Lischinski. Fast multiresolution image
edge detectors are very sensitive to noise. Since operations in the wavelet domain. IEEE Transactions on
wavelet decomposition involves low-pass filter, the Visualization and Computer Graphics., 9(3):395–412,
amount of the noise can be decreased in image 2003.
which in turn could lead to robust edge detection. [13] A. Cohen, R. D. Ryan, 1995, “ Wavelets
We can use the wavelet transformer to produce
andMultiscale Signal Processing,” Chapman & Hall.
initial images, then watershed algorithm can be
used for segmentation of the initial image, then by [14] J. J. Benedetto,M.W. Frazier, 1994, “Wavelets-
Mathematics and Applications,” CRC Press, Inc.
using the inverse wavelet transform, the segmented
[15] R. J. Beattie, 1984, “Edge detection for semantically
image can be projected up to a higher resolution. based early visual processing,” dissertation, Univ.
Edinburgh, Edinburgh, U.K..
[16] B. K. P. Horn, 1971, “The Binford-Horn line-
finder,” Artificial Intell. Lab., Mass. Inst. Technol.,
REFERENCES Cambridge, AI Memo 285.
[1] J. C. Goswami, A. K. Chan, 1999, [17] L. Mero, 1975, “A simplified and fast version of the
Hueckel operator for finding optimal edges in pictures,”
“Fundamentals of wavelets: theory, algorithms, and
Pric. IJCAI, pp. 650-655.
applications,” John Wiley & Sons, Inc. [18] R. Nevatia, 1977, “Evaluation of simplified Hueckel
[2] Y. Y. Tang, L. Yang, J. Liu, 2000, “Characterization edge-line detector,” Comput., Graph., Image Process.,
of Dirac-Structure Edges with Wavelet Transform,” vol. 6, no. 6, pp. 582-588.
IEEE Trans. Sys. Man Cybernetics-Part B: Cybernetics, [19] Y. Y. Tang, L.H. Yang, L. Feng, 1998,
vol.30, no.1, pp. 93-109. “Characterization and detection of edges by Lipschitz
[3] Mallat, S. 1987. “A compact multiresolution exponent and MASW wavelet transform,” Proc. 14th Int.
representation: the wavelet model.” Proc. IEEE Conf. Pattern Recognit., Brisbane, Australia, pp. 1572-
Computer Society Workshop on Computer Vision, IEEE 1574.
Computer Society Press, Washington, D.C., p.2-7. [20] K. A. Stevens, 1980, “Surface perception from local
analysis of texture and contour,” Artificial Intell. Lab.,
[4] J. Canny, 1986, “A computational approach to
Mass. Instr. Technol., Cambridge, Tech. Rep. AI-TR-
edge detection,” IEEE Trans. Pattern Anal. Machine 512.
[21] K. R. Castleman, 1996, “Digital Image Processing,”
Intell., vol. PAMI-8, pp. 679-698.
Englewood Cliffs, NJ: Prentice- Hall.

[22] M. Hueckel, 1971, “An operator which locates


edges in digital pictures,” J. ACM, vol. 18, no. 1, pp.
113-125.
[23] Acharyya, M., Kundu, M.K., 2001. Wavelet-based
texture segmentation of remotely sensed images. IEEE
11th Internat. Conf. Image Anal. Process., 69–74.
[24] Jahromi O. S., Francis B. A., Kwong R. H.:
Algebraic theory of optimal filterbanks. Proceedings of
IEEE International Conference on Acoustics, Speech and
Signal Processing, 1, 113–116.
[25] A. Zygmund, 1968, “Trigonometric Series,” 2nd
ed., Cambridge: Cambridge Univ. Press.
[26] Mallat S.: Multifrequency channel decompositions
of images and wavelet models. IEEE Transaction in
Acoustic Speech and Signal Processing, 37, 2091–2110.
[27] R. M. Haralick, 1984, “Digital step edges from zero
crossing of second directional derivatives,” IEEE Trans.
Pattern Anal. Machine Intell., vol. PAMI-6, no. 1, pp.
58-68.
[28] E. C. Hildreth, 1980, “Implementation of a theory of
edge detection,” M.I.T. Artificial Intell. Lab., Cambridge,
MA, Tech. Rep. AI-TR-579.

Color Image Enhancement by Scaling Luminance and Chromatic


Components
1
Satyabrata Das, 2Sukanti Pal and 3A K Panda
National Institute of Science and Technology, Palur Hills, Berhampur, Odisha, 761008
Email: 1satyabratadas.m@gmail.com, 2sukanti.nist@gmail.com, 3akpanda62@hotmail.com

ABSTRACT compressed format to save memory space and


In this paper, a new technique for color image bandwidth. So it is better if enhancement of the
enhancement using luminance and chromatic image can be achieved in compressed domain
component is presented. In the proposed rather than transforming to spatial domain and
technique luminance and chromatic components applying the enhancement technique and
of color image are extracted separately and transforming back to compressed domain;
converted to frequency domain. Then DC and thereby increasing the computational overhead.
AC coefficients are scaled to preserve contrast, Therefore, color images mostly uses JPEG
brightness and color. While enhancing the compression format for saving bandwidth and
image care is taken to reduce the mathematical memory space which uses popular discrete
computations. Processing the color image in cosine transform (DCT). Extracting the
DCT domain invites unwanted side effect such luminance and chromatic components in DCT
as blocking artifact which is minimized by using domain and processing them to improve the
smaller sub block matrix keeping in view the brightness, contrast and color invites unwanted
complexity of mathematical computation. side effect such as blocking artifact. However,
these side effects can be minimized by using
Keywords
special mathematical computation techniques.
Blocking artifacts, Chromatic, DCT, Luminance
In our work we have represented the color
1. INTRODUCTION image using Y-Cb-Cr color space so that we can
The display of a color image depends mainly on preserve both luminance and color component.
brightness, contrast and colors. Enhancement of Previous works [2-4] have used the DCT
the image is necessary to improve the visibility domain and they have implemented non uniform
of the image subjectively to remove unwanted scaling of DC and AC coefficients which
flickering, to improve contrast and to find more requires more mathematical computation. In our
details. In general there are two major approach we have adopted a uniform scale value
approaches [1]. They are spatial domain, where for both DC and AC components of Y, Cb and
statistics of grey values of the image are Cr which substantially lowers computational
manipulated and the second is frequency domain burden and at the same time enhancing the
approach; where spatial frequency contents of image. DCT-II is presented in section 2. The
the image are manipulated [1]. In spatial domain proposed algorithm is presented in section 3.
histogram equalization, principal component The results obtained are presented in section 4
analysis, rank order filtering, homomorphic and the paper is concluded in section 5.
filtering etc are generally used to enhance the 2. MATHEMATICAL
image. Although these techniques are developed PRELIMINARIES
for gray valued images but few of them are also There are eight different ways to do the even
applied to color image for enhancement extension of DFT and there are as many
purpose. Mostly images are represented in definitions of the DCT [5,6]. We have used type
II DCT, which is widely used in practice for speech and image compression applications as part of various standards [7]. Equation (1) gives the two-dimensional DCT, where C(k, l) are the transformed DCT coefficients of the input image x(m, n), assuming a square image of size N x N:

C(k, l) = \alpha(k)\alpha(l)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} x(m, n)\cos\!\left[\frac{(2m+1)\pi k}{2N}\right]\cos\!\left[\frac{(2n+1)\pi l}{2N}\right],  0 \le k, l \le N-1,   (1)

where \alpha(0) = \sqrt{1/N} and \alpha(k) = \alpha(l) = \sqrt{2/N} for 1 \le k, l \le N-1.

The contrast of an image is defined as the change in luminance with respect to the surrounding luminance. Hence contrast can be thought of as the ratio of the standard deviation (σ) to the mean (µ) of the image: the greater the standard deviation, the higher the contrast.

3. THE PROPOSED ALGORITHM

An image in RGB space is converted into the Y-Cb-Cr color space to obtain the luminance and chromatic components individually. The Y, Cb and Cr components are each split into 8x8 sub-blocks. For each sub-block the DCT-II is computed separately to obtain Y(u,v), Cb(u,v) and Cr(u,v), the block-transformed DCT coefficients; the first element of each transformed block, Y(0,0), Cb(0,0) and Cr(0,0), is the DC component and the rest are AC components. After computing its DCT coefficients, each sub-block is normalized by a factor of 8. The proposed algorithm is implemented in four steps.

In the first step, adjustment of local brightness is achieved. Local brightness is adjusted by mapping the DC coefficient of each sub-block of Y(u,v) using a monotonically increasing function ψ(x) [8], shown in Fig. 1. While mapping the coefficients, the DC coefficient is treated separately from the AC coefficients. The mapping for the DC coefficient is

y^{DC}_{mapped} = \psi\!\left(\frac{Y(0,0)}{8\,Y_{max}}\right) Y_{max},   (2)

where

\psi(x) = n\left[1 - \left(1 - \frac{x}{m}\right)^{p_1}\right],  0 \le x \le m;   \psi(x) = n + (1-n)\left(\frac{x-m}{1-m}\right)^{p_2},  m < x \le 1;   with 0 < m, n < 1 and p_1, p_2 > 0,

and Y_max is the maximum brightness value of the image before the DCT. Various monotonically increasing functions are available in the literature [4], [7]; no single function is best suited to all images for enhancement purposes. We choose ψ(x) because its shape can be modified using the four parameters m, n, p1 and p2. We varied these values and chose m = n = 0.5, p1 = 1.8 and p2 = 0.8 for the best performance. As Y is the luminance component, only this component is mapped to alter the brightness, leaving the Cb and Cr components unaltered.

In the second step, adjustment of local contrast is achieved by scaling the DC and AC coefficients of the normalized Y(u,v), Cb(u,v) and Cr(u,v). The scale factor s is defined as the ratio between the mapped DC coefficient of each
normalized sub block (8×8) of Y(u,v) to the value of standard deviation which is image
original DC coefficient. As DC component dependent and to be decided based on the
gives the information about mean of brightness amount of blocking artifact removal. Each
distribution of each sub block hence it is used to normalized 8×8 sub block of Y(u,v), Cb(u,v)
compute the scale factor„s‟. Assuming 8 bit and Cr(u,v) are subdivided into four 4×4 sub
representation while scaling overflow of gray blocks. and the scale factor „s‟ is recomputed
values may occur beyond 255 which is taken through the earlier mentioned steps of this
care by limiting the scale factor depending upon algorithm. Only those sub blocks will be scaled
the image. In the third step preservation of color where threshold condition is met leaving
is achieved through scaling of normalized behind the remaining sub blocks unaltered.
Cb(u,v) and Cr(u,v) component through the Then corresponding sub blocks of Y, Cb and Cr
same scale factor „s‟ corresponding to each is scaled through the new scale factor in order to
normalized sub block of Y(u,v). Since the remove the artifacts. Finally image is
mapping from RGB to Y-Cb-Cr is non linear reconstructed in spatial domain by combing Y,
and Cb, Cr depends on Y hence while scaling Cb and Cr components.
the color component DC coefficients has to be
treated separately. 4. QUALITY ASSESMENT
Simulation is performed on various images
using MATLAB. As the proposed algorithm is
based on DCT so for assessing quality PSNR
and SNR is not a suitable option as prior
information regarding the type of distortion is
not available with us. We have used no-
Similarly for normalized Cr(u,v) is to be scaled reference perceptual quality assessment for
JPEG compressed images [9] where quality
using the above mentioned procedure. Finally
metric that incorporates human visual system
blocking artifacts are suppressed. As this characteristics which do not require the input
algorithm is developed around type–II DCT image for computing the quality. Based upon
hence blocking artifacts are visible in the this a quality score is obtained which reflects the
processed image because of discontinuities in amount of blocking artifact removal and
gray values. There are several methods available distortion removal due to non linear mapping. If
to minimize the blocking artifacts but they are the quality score is nearer to 10 it reflects the
best quality image and 1 represents worst
computationally exhaustive. We have proposed
quality image. Wang et al. [9] suggested no
a simple method to minimize blocking artifacts reference quality metric for computing the
and at the same time it requires less quality of JPEG image. The computation of this
computation. For this purpose standard metric is described in [9] where they have cited
deviation (σ) is computed for each normalized the website which contains the MATLAB code
sub block of Y(u,v). When (σ) represents a large for computing the quality score. We have used
value then it is concluded that corresponding the same MATLAB code for evaluation of
quality and called as quality score. Quality score
sub block contains a large variation of gray
obtained for different images is tabulated in
values which results in blocking artifacts. If table 1.
threshold where threshold represents threshold
Table 1. Quality Score
Image        Before artifact removal      After artifact removal
Image_1      7.7274                       8.3612
Image_2      6.177                        8.36
Image_3      8.3903                       9.3895
Image_4      8.6128                       8.9288

Fig 2 (a) Image_1 (b) enhanced image by scaling DC coefficient only (c) enhanced image
by scaling both DC and AC coefficient (d)
enhanced image by scaling all components
Figure 2.a represents the original image for including Cb and Cr (e) enhanced image with
Image_1. For Image_1 four stages of output blocking artifacts removal.
obtained; they are (i) after scaling the DC
coefficient of Y (fig 2.b), (ii) after scaling both
DC and AC coefficients of Y (fig 2.c),(iii)
scaling (Y, Cb and Cr) components before
blocking artifact removal (fig 2.d) and (iv) after
blocking artifact removal (fig 2.e) respectively.
For Image_2,3,4 outputs before and after
blocking artifact removal are shown in same
way in figure 3, 4 and 5. Quality factor is
computed for different images and is shown in
table 1. From table, it is observed that quality
factor is improved after removing the blocking
artifact. Table shows the quality factor is nearer
to ten showing the better enhancement of color
image.
Fig 1: Plot of mapping function ψ(x)

(a) (b) (a) (b) (c)


Fig 3.(a) Image_2 (b) enhanced image by
scaling all components including Cb and Cr (c)
enhanced image with blocking artifacts removal.
Fig 4: (a) Image_3; (b) enhanced image by scaling all components including Cb and Cr; (c) enhanced image with blocking artifacts removed.

Fig 5: (a) Image_4; (b) enhanced image by scaling all components including Cb and Cr; (c) enhanced image with blocking artifacts removed.

CONCLUSION
In this paper, we have presented a simple method for enhancing a color image in compressed format by scaling its luminance and chromatic components with little computational overhead. The quality score computed for the enhanced images supports the performance of the proposed method. The proposed algorithm can be implemented on any image processing hardware.

ACKNOWLEDGMENT
The authors acknowledge the DST-TIFAC CORE on "3G/4G Communication Technologies" received by National Institute of Science and Technology from the Department of Science & Technology (DST), Government of India.

REFERENCES
[1] Gonzalez, Rafael C. and Woods, Richard E., Digital Image Processing, Third edition, Pearson Prentice Hall, 2008.
[2] Aghagolzadeh, S. and Ersoy, O. K., "Transform image enhancement," Opt. Eng., vol. 31, pp. 614-626, Mar. 1992.
[3] Tang, J., Peli, E., and Acton, S., "Image enhancement using a contrast measure in the compressed domain," IEEE Signal Process. Lett., vol. 10, pp. 289-292, Oct. 2003.
[4] Lee, S., "An efficient content-based image enhancement in the compressed domain using retinex theory," IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 2, pp. 199-213, Feb. 2007.
[5] Wang, Z., "Fast algorithms for the discrete W transform and for the discrete Fourier transform," IEEE Trans. ASSP, vol. 32, no. 4, pp. 803-816, Aug. 1984.
[6] Martucci, S. A., "Symmetric convolution and the discrete sine and cosine transforms," IEEE Trans. Signal Processing, vol. 42, no. 5, pp. 1038-1051, May 1994.
[7] Rao, K. and Huang, J., Techniques and Standards for Image, Video, and Audio Coding, Prentice Hall, Upper Saddle River, NJ, 1996.
[8] De, T. K., "A simple programmable S-function for digital image processing," in Proc. 4th IEEE Region 10 Int. Conf., Bombay, India, pp. 573-576, Nov. 1989.
[9] Wang, Z., Sheikh, H. R. and Bovik, A. C., "No-reference perceptual quality assessment of JPEG compressed images," in Proc. Int. Conf. Image Processing, Rochester, NY, vol. 1, pp. 477-480, Sep. 2002.
A Tutorial on Image Compression Techniques

1Vedvrat, 2Krishna Raj
1Department of Electronics & Communication Engineering, A.I.T., Kanpur, U.P., India
2Department of Electronics Engineering, H.B.T.I., Kanpur, U.P., India
(Email: 1r.ved.hbti@gmail.com, 2kraj_biet@yahoo.com)

Abstract—Processing of multimedia data acquires large Existing correlation in neighboring pixels causes the
transmission bandwidth and storage capacity. Reduction in redundant information in images. So less correlated
these parameters introduces the concept of data compression.
For achieving the better compression without degrading the representation of image required. Two fundamental
image quality, data compression techniques become the components of compression are redundancy and
challenge for the researchers. Numerous image coding irrelevancy reduction. Redundancy reduction aims at
techniques i.e. subband coding, EZW, SPIHT, EBCOT,
removing duplication from the signal source
wavelet transform coding have been presented. In this paper
performance comparison of these coding techniques is (image/video). Irrelevancy reduction omits parts of the
presented. signal that will not be noticed by the signal receiver,
Keywords—Wavelet transform, EBCOT, SPIHT, EZW, namely the Human Visual System (HVS). In general, three
subband coding, JPEG types of redundancy can be identified. Image compression
research aims at reducing the number of bits needed to
I. INTRODUCTION represent an image by removing the spatial and spectral
redundancies as much as possible.
Uncompressed multimedia (audio and video) data
a. Spatial Redundancy; correlation between
requires considerable storage capacity and transmission
neighboring pixel values.
bandwidth. Despite rapid progress in mass-storage density,
b. Spectral Redundancy; correlation between
processor speeds, and digital communication system
different color planes or spectral bands.
performance, demand for data storage capacity and data-
c. Temporal Redundancy; correlation between
transmission bandwidth continues to outstrip the
adjacent frames in a sequence of images (in video
capabilities of available technologies. The recent growth of
applications).
data intensive multimedia-based web applications have not
In lossless compression schemes, the reconstructed
only sustained the need for more efficient ways to encode
image, after compression, is numerically identical to the
signals and images but have made compression of such
original image. An image reconstructed following lossy
signals central to storage and communication technology.
compression contains degradation relative to the original.
For still image compression, the `Joint Photographic
Often this is because the compression scheme completely
Experts Group' or JPEG standard has been established by
discards redundant information. However, lossy schemes
ISO (International Standards Organization) and IEC
are capable of achieving much higher compression. Under
(International Electro-Technical Commission). The
normal viewing conditions, no visible loss is perceived. In
performance of these coders generally degrades at low bit-
predictive coding, information already sent or available is
rates mainly because of the underlying block-based
used to predict future values, and the difference is coded.
Discrete Cosine Transform (DCT) scheme. More recently,
Since this is done in the image or spatial domain, it is
the wavelet transform has emerged as a cutting edge
relatively simple to implement and is readily adapted to
technology, within the field of image compression.
local image characteristics. Transform coding, on the other
Wavelet-based coding provides substantial improvements
hand, first transforms the image from its spatial domain
in picture quality at higher compression ratios. The large
representation to a different type of representation using
storage space, large transmission bandwidth, and long
some well-known transform and then codes the
transmission time is required for image, audio, and video
transformed values. This method provides greater data
data. At the present state of technology, the only solution
compression compared to predictive methods, although at
is to compress multimedia data before its storage and
the expense of greater computation.
transmission, and decompress it at the receiver for play
back. III. COMPRESSION TECHNIQUES

II. COMPRESSION PRINCIPLE a. Subband Coding

In subband coding [4], an image is decomposed into a set of band-limited components, called subbands, which can be reassembled to reconstruct the original image without error. Each subband is generated by bandpass filtering the input. Since the bandwidth of the resulting subbands is smaller than that of the original image, the subbands can be downsampled without loss of information. Reconstruction of the original image is accomplished by upsampling, filtering, and summing the individual subbands. Fig. 1 shows the principal components of a two-band subband coding and decoding system. The input of the system is a 1-D, band-limited discrete-time signal x(n) for n = 0, 1, 2, ...; the output sequence x'(n) is formed through the decomposition of x(n) into y0(n) and y1(n) via the analysis filters h0(n) and h1(n). Filter h0(n) is a low-pass filter whose output is an approximation of x(n); filter h1(n) is a high-pass filter whose output is the high-frequency or detail part of x(n). All the filters are selected in such a way that the input can be reconstructed perfectly, i.e., x'(n) = x(n).

Fig.1 Components of a two-band subband coding and decoding system (analysis filters h0(n), h1(n) each followed by downsampling by 2; upsampling by 2 followed by synthesis filters g0(n), g1(n) and summation to give x'(n)).

Woods and O'Neil used a separable combination of one-dimensional Quadrature Mirror Filter banks (QMF) to perform 4-band decomposition by the row-column approach, as shown in Fig. 2. The process can be iterated to obtain higher-band decomposition filter trees. At the decoder, the subband signals are decoded, upsampled, passed through a bank of synthesis filters and properly summed up to yield the reconstructed image.

Fig.2 4-band decomposition by the row-column approach (row filtering into H and L branches, each followed by column filtering into Col H and Col L subbands).

b. Short Time Fourier Transform
The Fourier Transform separates the waveform into a sum of sinusoids of different frequencies and identifies their respective amplitudes. Thus it gives us a frequency-amplitude representation of the signal. In the STFT [6], a non-stationary signal is divided into small portions, which are assumed to be stationary. This is done using a window function of chosen width, which is shifted and multiplied with the signal to obtain the small stationary signals. The Fourier Transform is then applied to each of these portions to obtain the STFT of the signal. The problem with the STFT goes back to the Heisenberg uncertainty principle, which states that it is impossible to obtain which frequencies exist at which time instance; one can only obtain the frequency bands existing in a time interval. This gives rise to the resolution issue, where there is a trade-off between the time resolution and the frequency resolution. To assume stationarity, the window is supposed to be narrow, which results in a poor frequency resolution, i.e., it is difficult to know the exact frequency components that exist in the signal; only the band of frequencies that exist is obtained. If the width of the window is increased, frequency resolution improves but time resolution becomes poor, i.e., it is difficult to know what frequencies occur at which time intervals. Once the window function is decided, the frequency and time resolutions are fixed for all frequencies and all times.

c. Wavelet Transform
In contrast to the STFT, which uses a single analysis window, the Wavelet Transform [5] uses short windows at high frequencies and long windows at low frequencies. This results in multi-resolution analysis, by which the signal is analyzed with different resolutions at different frequencies, i.e., both frequency resolution and time resolution vary in the time-frequency plane without violating the Heisenberg inequality. In the Wavelet Transform, as frequency increases, the time resolution increases; likewise, as frequency decreases, the frequency resolution increases. Thus, a certain high frequency component can be located more accurately in time than a low frequency component, and a low frequency component can be located more accurately in frequency compared to a high frequency component. The wavelet transform analyzes non-stationary signals, as both frequency and time information is needed.

Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent any arbitrary function x(t) as a superposition of a set of such wavelets or basis functions. These basis functions are obtained from a single prototype wavelet called the mother wavelet, by dilations or contractions (scaling) and translations (shifts). The Discrete Wavelet Transform of a finite length signal x(n)
having N components, for example, is expressed by an N x N matrix.

The generic form of a 1-D wavelet transform is shown in Fig. 3. Here a signal is passed through a low-pass and a high-pass filter, h and g respectively, then downsampled by a factor of 2, constituting one level of the transform. Multiple levels or scales of the wavelet transform are made by repeating the filtering and decimation on the low-pass branch outputs only. The process is typically carried out for a finite number of levels K, and the resulting coefficients, d_i^1(n) for i in {1, ..., K} and d_K^0(n), are called wavelet coefficients.

Fig.3 Generic form of the 1-D wavelet transform (each level applies h and g followed by downsampling by 2, repeated on the low-pass branch, producing d_1^0(n) ... d_K^0(n) and d_1^1(n) ... d_K^1(n)).

The 1-D wavelet transform can be extended to a 2-D wavelet transform using separable wavelet filters. With separable filters the 2-D transform can be computed by applying a 1-D transform to all the rows of the input, and then repeating on all of the columns. Fig. 4 shows an example of a three-level (k = 3) 2-D wavelet expansion, where k represents the highest level of the decomposition of the wavelet transform.

Fig.4 Three-level 2-D wavelet expansion (subbands LL2, HL2, LH2, HH2, HL1, LH1, HH1).

d. Embedded Zero tree Wavelet (EZW) Compression
In an octave-band wavelet decomposition, each coefficient in the high-pass bands of the wavelet transform has four coefficients corresponding to its spatial position in the octave band above in frequency. Because of this very structure of the decomposition, careful encoding of coefficients is required to achieve better compression results. Lewis and Knowles [5] in 1992 were the first to introduce a tree-like data structure to represent the coefficients of the octave decomposition. Later, in 1993, Shapiro [2] called this structure a zero tree of wavelet coefficients, and presented his elegant algorithm for entropy encoding called the Embedded Zero tree Wavelet (EZW) algorithm. The EZW algorithm contains the following features:
1. A discrete wavelet transform, which provides a compact multiresolution representation of the image.
2. Zero tree coding, which provides a compact multiresolution representation of significance maps, which indicate the position of significant coefficients. Zero trees allow the successful prediction of insignificant coefficients across scales to be efficiently represented as part of a growing tree.
3. Successive approximation, which provides a compact multiprecision representation of the significant coefficients and facilitates the embedding algorithm.
4. Adaptive multilevel arithmetic coding, which provides a fast and efficient method for entropy coding strings of symbols, and requires no pre-stored tables.
5. The algorithm runs sequentially and stops whenever a target bit rate is met.

A significance map is defined as an indication of whether a particular coefficient is zero or nonzero (i.e., significant) relative to a given quantization level. The EZW algorithm [2] determined a very efficient way to code significance maps not by coding the location of the significant coefficients, but rather by coding the location of the zeros. It was found experimentally that zeros could be predicted very accurately across different scales in the wavelet transform. Defining a wavelet coefficient as insignificant with respect to a threshold T if |x| < T, the EZW algorithm hypothesized that "if a wavelet coefficient at a coarse scale is insignificant with respect to a given threshold T, then all wavelet coefficients of the same orientation in the same spatial location at finer scales are likely to be insignificant with respect to T." Recognizing that coefficients of the same spatial location and frequency orientation in the wavelet decomposition can be compactly described using tree structures, the EZW calls the set of insignificant coefficients, or coefficients that are quantized to zero using threshold T, zero-trees.
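To make the significance test concrete, the following Python sketch (illustrative only; the function names are ours, not Shapiro's coder) computes the initial threshold T0 = 2^floor(log2 |x|max) used to start the EZW scanning and the significance map |x| >= T for one pass over an array of coefficients:

    import numpy as np

    def initial_threshold(coeffs):
        """T0 = 2**floor(log2(max |coefficient|)), the starting EZW threshold."""
        xmax = np.max(np.abs(coeffs))
        return 2.0 ** np.floor(np.log2(xmax))

    def significance_map(coeffs, T):
        """Boolean map of coefficients significant (|x| >= T) at this threshold."""
        return np.abs(coeffs) >= T

    # tiny demonstration on random stand-in 'wavelet coefficients'
    rng = np.random.default_rng(0)
    c = rng.normal(scale=50.0, size=(8, 8))
    T0 = initial_threshold(c)
    sig = significance_map(c, T0)
    print("T0 =", T0, "significant coefficients:", int(sig.sum()))

Coefficients found significant at this first pass lie in [T0, 2T0); subsequent passes would halve the threshold and refine the already-significant coefficients.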
Fig.5 Tree structure of wavelet transform algorithm. They also present a scheme for progressive
Consider the tree structures on the wavelet transform transmission of the coefficient values that incorporates the
shown in Fig.5. In the wavelet decomposition, coefficients concepts of ordering the coefficients by magnitude and
that are spatially related across scale can be compactly transmitting the most significant bits first. SPIHT uses a
described using these tree structures. With the exception of uniform scalar quantizer and claim that the ordering
the low resolution approximation (LL1) and the highest information made this simple quantization method more
frequency bands (HL1, LH1, and HH1) each parent efficient than expected. An efficient way to code the
coefficient at level i of the decomposition spatially ordering information is also proposed. Results from the
correlates to 4 (child) coefficients at level i -1of the SPIHT coding algorithm in most cases surpass those
decomposition which are at the same frequency obtained from EZQ algorithm.
orientation. For the LLk band, each parent coefficient f. Scalable Image Compression with EBCOT
spatially correlates with 3 child coefficients, one each in This algorithm is based on independent Embedded
the HLk, LHk, and HHk bands. The standard definitions of Block Coding with Optimized Truncation of the embedded
ancestors and descendants in the tree follow directly from bit-streams (EBCOT). EBCOT algorithm [1] uses a
these parent- child relationships. A coefficient is part of a wavelet transform to generate the subband coefficients
zero-tree if it is zero and if all of its descendants are zero which are then quantized and coded. Although the usual
with respect to the threshold T. It is also a zero-tree root if dyadic wavelet decomposition is typical, other "packet"
is not part of another zero-tree starting at a coarser scale. decompositions are also supported and occasionally
Zero-trees are very efficient for coding since by declaring preferable. Scalable compression refers to the generation
only one coefficient a zero-tree root, a large number of of a bit-stream which contains embedded subsets, each of
descendant coefficients are automatically known to be which represents an efficient compression of the original
zero. The compact representation, coupled with the fact image at a reduced resolution or increased distortion. A
that zero-trees occur frequently, especially at low bit rates, key advantage of scalable compression is that the target
make zero-trees efficient for coding position information. bit-rate or reconstruction resolution need not be known at
EZW implements successive approximation the time of compression. Another advantage of practical
quantization through a multipass scanning of the wavelet significance is that the image need not be compressed
coefficients using successively decreasing thresholdsT0, multiple times in order to achieve a target bit-rate, as is
T1,T2 ,.... . The initial threshold is set to the value of T 0 = common with the existing JPEG compression standard.
2[log2 xmax], where xmax is the largest wavelet coefficient. Rather than focusing on generating a single scalable bit-
Each scan of wavelet coefficients is divided into two stream to represent the entire image, EBCOT partitions
passes: dominant and subordinate. The dominant pass each subband into relatively small blocks of samples and
establishes a significance map of the coefficients relative generates a separate highly scalable bit-stream to represent
to the current threshold Ti. Thus, coefficients which are each so-called code-block. The algorithm exhibits state-of-
significant on the first dominant pass are known to lie in the-art compression performance while producing a bit-
the interval [T0 ,2T0 ) , and can be represented with the stream with an unprecedented feature set, including
reconstruction value of (3T 0/2). The dominant pass resolution and SNR scalability together with a random
essentially establishes the most significant bit of binary access property. The algorithm has modest complexity and
representation of the wavelet coefficient, with the binary is extremely well suited to applications involving remote
weights being relative to the thresholds Ti. browsing of large compressed images.
e. Set Partitioning in Hierarchical Trees (SPIHT) IV. PERFORMANCE COMPARISION
Said and Pearlman [3], offered an alternative
explanation of the principles of operation of the EZW
algorithm to better understand the reasons for its excellent
performance. According to them, partial ordering by
magnitude of the transformed coefficients with a set
partitioning sorting algorithm, ordered bit plane
transmission of refinement bits, and exploitation of self-
similarity of the image wavelet transform across different
scales of an image are the three key concepts in EZW. In
addition, they offer a new and more effective
implementation of the modified EZW algorithm based on
set partitioning in hierarchical trees, and call it the SPIHT

Fig.6 (a) PSNR results for LENA: PSNR (dB) versus bit rate (0.0625, 0.125, 0.25, 0.5 and 1) for the SC, WT, EZW, SPIHT and EBCOT coders.

Fig.6 (b) PSNR results for BARBARA: PSNR (dB) versus bit rate (0.0625, 0.125, 0.25, 0.5 and 1) for the EZW, SPIHT and EBCOT coders.

V. CONCLUSION
A number of coding techniques have been proposed since the introduction of the EZW algorithm. A common characteristic of these techniques is that they use the basic ideas found in the EZW algorithm. The wavelet coders are much closer to the EZW algorithm than to subband coding. SPIHT became very popular since it was able to achieve equal or better performance than EZW without having to use an arithmetic encoder; the reduction in complexity from eliminating the arithmetic encoder is significant. Another technique, the EBCOT algorithm, has been chosen as the basis of the JPEG 2000 standard. The performance comparison of these techniques has been discussed in the previous section. Compared with EZW, subband coding and the other techniques, lossy wavelet image coding has matured significantly because of its multiresolution property and its performance, and it provides a very strong basis for the new JPEG 2000 coding standard.

VI. REFERENCES
[1] Taubman, D., "High Performance Scalable Image Compression with EBCOT," IEEE Trans. Image Processing, Mar. 1999.
[2] Shapiro, J. M., "Embedded Image Coding Using Zerotrees of Wavelet Coefficients," IEEE Trans. Signal Processing, vol. 41, no. 12, pp. 3445-3462, Dec. 1993.
[3] Said, A. and Pearlman, W. A., "A New, Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243-250, June 1996.
[4] Woods, J. W. and O'Neil, S. D., "Subband Coding of Images," IEEE Trans. ASSP, vol. 34, no. 5, pp. 1278-1288, October 1986.
[5] Lewis, A. S. and Knowles, G., "Image Compression Using the 2-D Wavelet Transform," IEEE Trans. Image Processing, vol. 1, no. 2, pp. 244-250, April 1992.
[6] Gonzalez, R. C. and Woods, R. E., Digital Image Processing, 2nd edition, Pearson Education, 2004, pp. 409-510.
Comparative Study of Lifting-Based Discrete Wavelet Transform Architectures

Vidyadhar Gupta, Krishna Raj
Department of Electronics Engineering
Harcourt Butler Technological Institute, Kanpur
Abstract. In this paper, we provide a comparative study of different existing architectures for efficient implementation of the lifting-based Discrete Wavelet Transform (DWT). The basic principle behind the lifting-based scheme is to decompose the finite impulse response filters of the wavelet transform into a finite sequence of simple filtering steps.

Keywords: Architecture, Discrete wavelet transform, lifting

1. Introduction
The Discrete Wavelet Transform (DWT) has become a very versatile signal processing tool over the last decade. It has been effectively used in signal and image processing applications ever since Mallat [4] proposed the multiresolution representation of signals based on wavelet decomposition. In fact, lifting-based DWT is the basis of the new JPEG2000 image compression standard, which has been shown to have superior performance compared to the current JPEG standard [5]. The main feature of the lifting-based DWT scheme is to break up the high-pass and low-pass wavelet filters into a sequence of upper and lower triangular matrices, and to convert the filter implementation into banded matrix multiplications [6]. The popularity of lifting-based DWT has triggered the development of several architectures in recent years. These architectures range from highly parallel architectures to programmable DSP-based architectures to folded architectures. In this paper we present a comparative study of these architectures. We provide a systematic derivation of these architectures and compare them on the basis of hardware utilization and critical path latency. The rest of the paper is organized as follows. In Section 2, we briefly explain the mathematical formulation and principles behind the lifting scheme. In Section 3, we present a number of one-dimensional lifting-based DWT architectures. Specifically, we describe the direct mapping of the data dependency diagram of the lifting scheme into a pipelined architecture, the folded architecture, the MAC-based programmable architecture, and the flipping architecture. In Section 4, we present a comparison of the hardware and critical path latency of all the architectures. We conclude this paper in Section 5.

2. DWT and Lifting Implementation
In the traditional convolution (filtering) based approach for computation of the forward DWT, the input signal s is filtered separately by a low-pass filter h̃ and a high-pass filter g̃. The two output streams are then sub-sampled by simply dropping the alternate output samples in each stream to produce the low-pass (s_L) and high-pass (s_H) sub-band outputs, as shown in Fig. 1. The two filters (h̃, g̃) form the analysis filter bank. The original signal can be reconstructed by a synthesis filter bank (h, g) starting from s_L and s_H, as shown in Fig. 1. Given a discrete signal s(n), the output signals s_L(n) and s_H(n) in Fig. 1 can be computed as

    s_L(n) = \sum_{i=0}^{\tau_L - 1} \tilde{h}(i)\, s(2n - i),  \qquad  s_H(n) = \sum_{i=0}^{\tau_H - 1} \tilde{g}(i)\, s(2n - i)        (1)

where \tau_L and \tau_H are the lengths of the low-pass filter h̃ and the high-pass filter g̃, respectively. During the inverse transform computation, both s_L and s_H are first up-sampled by inserting zeros in between two samples and then filtered by the low-pass (h) and high-pass (g) filters respectively. They are then added together to obtain the reconstructed signal s', as shown in Fig. 1.

Fig1. Signal analysis and reconstruction in 1-D DWT (analysis filters h̃, g̃ each followed by downsampling by 2; upsampling by 2 followed by synthesis filters h, g and summation to give s').
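A minimal NumPy sketch of the analysis and synthesis filter banks of Eq. (1) and Fig. 1, assuming the orthogonal Haar filter pair purely for illustration (the filter choice, indexing convention and function names are ours, not the paper's):

    import numpy as np

    # Analysis filters for the orthogonal Haar wavelet (illustrative choice).
    h_a = np.array([1, 1]) / np.sqrt(2)   # low-pass analysis
    g_a = np.array([1, -1]) / np.sqrt(2)  # high-pass analysis

    def analysis(s):
        """Filter, then keep every second sample (cf. Eq. (1), up to an index offset)."""
        sL = np.convolve(s, h_a)[1::2]
        sH = np.convolve(s, g_a)[1::2]
        return sL, sH

    def synthesis(sL, sH):
        """Upsample by 2, filter with (h, g) and add to reconstruct s'."""
        up = lambda x: np.column_stack([x, np.zeros_like(x)]).ravel()
        h_s, g_s = h_a[::-1], g_a[::-1]          # time-reversed filters for the Haar pair
        return np.convolve(up(sL), h_s) + np.convolve(up(sH), g_s)

    s = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
    sL, sH = analysis(s)
    print(np.round(synthesis(sL, sH)[:len(s)], 6))   # recovers the original signal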
For multiresolution wavelet decomposition, the low-pass sub-band s_L is further decomposed in a similar fashion in order to get the second level of decomposition, and the process is repeated. The inverse process follows similar multi-level synthesis filtering in order to reconstruct the signal. Since two-dimensional wavelet filters are separable functions, the 2-D DWT can be obtained by first applying the 1-D DWT row-wise (to produce L and H sub-bands in each row) and then column-wise. For the filter bank in Fig. 1, the conditions for perfect reconstruction of a signal [3] are given by

    h(z)\,\tilde{h}(z^{-1}) + g(z)\,\tilde{g}(z^{-1}) = 2
    h(z)\,\tilde{h}(-z^{-1}) + g(z)\,\tilde{g}(-z^{-1}) = 0        (2)

where h(z) is the Z-transform of the FIR filter h. It can be expressed as a Laurent polynomial of degree p as

    h(z) = \sum_{i=0}^{p} h_i z^{-i}.

This can also be expressed using a polyphase representation as

    h(z) = h_e(z^2) + z^{-1} h_o(z^2)        (3)

where h_e contains the even coefficients and h_o contains the odd coefficients of the FIR filter h. Similarly,

    g(z) = g_e(z^2) + z^{-1} g_o(z^2),
    \tilde{h}(z) = \tilde{h}_e(z^2) + z^{-1} \tilde{h}_o(z^2),        (4)
    \tilde{g}(z) = \tilde{g}_e(z^2) + z^{-1} \tilde{g}_o(z^2).

Based on the above formulation, we can define the polyphase matrices as

    P(z) = \begin{bmatrix} h_e(z) & g_e(z) \\ h_o(z) & g_o(z) \end{bmatrix},  \qquad  \tilde{P}(z) = \begin{bmatrix} \tilde{h}_e(z) & \tilde{g}_e(z) \\ \tilde{h}_o(z) & \tilde{g}_o(z) \end{bmatrix}        (5)

Often \tilde{P}(z) is called the dual of P(z), and for perfect reconstruction they are related as P(z)\,\tilde{P}(z^{-1})^T = I, where I is the identity matrix. Now the wavelet transform in terms of the polyphase matrix can be expressed as

    \begin{bmatrix} s_L(z) \\ s_H(z) \end{bmatrix} = \tilde{P}(z)^T \begin{bmatrix} s_e(z) \\ z^{-1} s_o(z) \end{bmatrix}  \qquad and \qquad  \begin{bmatrix} s_e(z) \\ z^{-1} s_o(z) \end{bmatrix} = P(z^{-1}) \begin{bmatrix} s_L(z) \\ s_H(z) \end{bmatrix}

for the forward DWT and inverse DWT respectively. If the determinant of P(z) is unity, it can be shown by applying Cramer's rule that

    \tilde{h}_e(z) = g_o(z^{-1}),  \quad  \tilde{h}_o(z) = -g_e(z^{-1}),  and hence  \tilde{g}_e(z) = -h_o(z^{-1}),  \quad  \tilde{g}_o(z) = h_e(z^{-1}).

When the determinant of P(z) is unity, the synthesis filter pair (h, g) and the analysis filter pair (h̃, g̃) are both complementary. When (h̃, g̃) = (h, g), the wavelet transform is called orthogonal, otherwise it is biorthogonal. We can apply the Euclidean algorithm to factorize \tilde{P}(z) into a finite sequence of alternating upper and lower triangular matrices as follows:

    \tilde{P}(z) = \prod_{i=1}^{m} \begin{bmatrix} 1 & s_i(z) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ t_i(z) & 1 \end{bmatrix} \begin{bmatrix} K & 0 \\ 0 & 1/K \end{bmatrix}

where K is a constant that acts as a scaling factor (so K is nonzero), and s_i(z) and t_i(z) (for 1 ≤ i ≤ m) are Laurent polynomials of lower orders. Computation of the upper triangular matrix is known as primal lifting, and this is referred to in the literature as lifting the low-pass sub-band with the help of the high-pass sub-band. Similarly, computation of the lower triangular matrix is called dual lifting, which is lifting the high-pass sub-band with the help of the low-pass sub-band. Often these two basic lifting steps are also called update and predict. The dual polyphase factorization, which also consists of predict and update steps, can be represented in the following form:

    P(z) = \prod_{i=1}^{m} \begin{bmatrix} 1 & 0 \\ -s_i(z^{-1}) & 1 \end{bmatrix} \begin{bmatrix} 1 & -t_i(z^{-1}) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1/K & 0 \\ 0 & K \end{bmatrix}

Hence the lifting-based forward wavelet transform essentially is to first apply the lazy wavelet on the input stream (split it into even and odd samples), then alternately execute primal and dual lifting steps, and finally scale the two output streams by 1/K and K respectively to produce the low-pass and high-pass sub-bands, as shown in Fig. 2(a).

Fig 2. Lifting-based DWT and IDWT. (a) Forward transform: split, lifting steps, scaling by 1/K and K. (b) Inverse transform: inverse scaling, reversed lifting steps, merge.
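As a concrete instance of one predict (dual lifting) step followed by one update (primal lifting) step, the sketch below applies the two integer (5, 3) lifting steps used in JPEG2000 reversible coding to a 1-D signal; it is a minimal illustration written for this comparison, with simplified (clamped) edge handling rather than full symmetric extension.

    import numpy as np

    def lifting_53_forward(x):
        """One level of the integer (5,3) lifting DWT (predict + update)."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()

        # Predict (dual lifting): d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
        even_next = np.append(even[1:], even[-1])
        d = odd - ((even + even_next) >> 1)

        # Update (primal lifting): s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)
        d_prev = np.insert(d[:-1], 0, d[0])
        s = even + ((d_prev + d + 2) >> 2)
        return s, d

    s, d = lifting_53_forward([10, 12, 14, 13, 11, 9, 8, 8])
    print("low-pass :", s)
    print("high-pass:", d)

Because the (5, 3) filter needs only these two lifting factors, no scaling constant K is required, which is why the two-factor case bypasses the last two stages of the architectures discussed in Section 3.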
The inverse DWT can be derived by traversing the above steps in the reverse direction: first scaling the low-pass and high-pass sub-band inputs by K and 1/K respectively, then applying the dual and primal lifting steps after reversing the signs of the coefficients in s_i(z) and t_i(z), and finally applying the inverse lazy transform by up-scaling the outputs before merging them into a single reconstructed stream, as shown in Fig. 2(b).

3. Lifting Architectures for 1-D DWT
The data dependencies in the lifting scheme can be explained with the help of an example of DWT filtering with four factors (or four lifting steps). The four lifting steps correspond to four stages as shown in Fig. 3. The intermediate results generated in the first two stages for the first two lifting steps are subsequently processed to produce the high-pass (HP) outputs in the third stage, followed by the low-pass (LP) outputs in the fourth stage. The (9, 7) filter is an example of a filter that requires four lifting steps. For DWT filters requiring only two factors, such as the (5, 3) filter, the intermediate two stages can simply be bypassed.

3.1 Direct Mapped Architecture
A direct mapping of the data dependency diagram into a pipelined architecture was proposed by Liu et al. in [7] and is described in Fig. 4. The architecture is designed with 8 adders (A1-A8), 4 multipliers (M1-M4), 6 delay elements (D) and 8 pipeline registers (R). There are two input lines to the architecture: one that inputs the even samples (s2i) and one that inputs the odd samples (s2i+1). There are four pipeline stages in the architecture. In the first pipeline stage, adder A1 computes s2i + s2i-2 and adder A2 computes α(s2i + s2i-2) + s2i-1. The output of A2 corresponds to the intermediate results generated in the first stage of Fig. 3. The output of adder A4 in the second pipeline stage corresponds to the intermediate results generated in the second stage of Fig. 3. Continuing in this fashion, adder A6 in the third pipeline stage produces the high-pass output samples, and adder A8 in the fourth pipeline stage produces the low-pass output samples. For lifting schemes that require only 2 lifting steps, such as the (5, 3) filter, the last two pipeline stages need to be bypassed, causing the hardware utilization to be only 50% or less. Also, for a single read-port memory, the odd and even samples are read serially in alternate clock cycles and buffered. This slows down the overall pipelined architecture by 50% as well.

3.2 Folded Architecture
The pipelined architecture in Fig. 4 can be further improved by carefully folding the last two pipeline stages into the first two stages, as shown in Fig. 5. The architecture proposed by Lian et al. in [2] consists of two pipeline stages, with three pipeline registers, R1, R2 and R3. In the (9, 7) type filtering operation, the intermediate data (R3) generated after the first two lifting steps (Phase 1) are folded back to R1 (as shown in Fig. 5) for computation of the last two lifting steps (Phase 2). The architecture can be reconfigured so that the computation of the two phases can be interleaved by selection of the appropriate data by the multiplexors. As a result, two delay registers (D) are needed in each lifting step in order to properly schedule the data in each phase. Based on the phase of the interleaved computation, the coefficient for multiplier M1 is either α or γ, and similarly the coefficient for multiplier M2 is β or δ. The hardware utilization of this architecture is always 100%. Note that for the (5, 3) type filter operation, folding is not required.

3.3 MAC Based Programmable Architecture [3]
A programmable architecture that implements the data dependencies represented in Fig. 3 using four MACs (Multiply and Accumulate units) and nine registers has been proposed by Chang et al. in [3]. The algorithm is executed in two phases as shown in Fig. 6. The data-flow of the proposed architecture can be explained in terms of the register allocation of the nodes. The computation and allocation of the registers in Phase 1 are done in the following order:

    R0 ← s2i-1;   R2 ← s2i;
    R3 ← R0 + α(R1 + R2);
    R4 ← R1 + β(R5 + R3);
    R8 ← R5 + γ(R6 + R4);
    Output LP ← R6 + δ(R7 + R8);
    Output HP ← R8

Similarly, the computation and register allocation in Phase 2 are done in the following order:

    R0 ← s2i+1;   R1 ← s2i+2;
    R5 ← R0 + α(R2 + R1);
    R6 ← R2 + β(R3 + R5);
    Output LP ← R4 + γ(R8 + R7);
    Output HP ← R7

As a result, two samples are input per phase and two samples (LP and HP) are output at the end of every phase. For a 2-D DWT implementation, the output samples are also stored into a temporary buffer for filtering in the vertical dimension.

3.4 Flipping Architecture [1]
While conventional lifting-based architectures require fewer arithmetic operations, they sometimes have long critical paths. For instance, the critical path of the lifting-based architecture for the (9, 7) filter is 4Tm + 8Ta, while that of the convolution implementation is Tm + 4Ta.
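The four-stage data dependency of Fig. 3 can also be written directly as code. The sketch below is only illustrative: it takes the lifting coefficients α, β, γ, δ and the scale factor K as parameters instead of fixing the (9, 7) values, and it processes interior samples only (no boundary extension).

    import numpy as np

    def four_stage_lifting(x, alpha, beta, gamma, delta, K):
        """Four lifting steps (two predict/update pairs) followed by scaling,
        mirroring the data-dependency diagram of Fig. 3.  Interior samples only."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]

        # Stage 1: d1[i] = odd[i] + alpha * (even[i] + even[i+1])
        d1 = odd[:-1] + alpha * (even[:-1] + even[1:])
        # Stage 2: s1[i] = even[i+1] + beta * (d1[i] + d1[i+1])
        s1 = even[1:-1] + beta * (d1[:-1] + d1[1:])
        # Stage 3 (HP before scaling): d2[i] = d1[i+1] + gamma * (s1[i] + s1[i+1])
        d2 = d1[1:-1] + gamma * (s1[:-1] + s1[1:])
        # Stage 4 (LP before scaling): s2[i] = s1[i+1] + delta * (d2[i] + d2[i+1])
        s2 = s1[1:-1] + delta * (d2[:-1] + d2[1:])

        return K * s2, (1.0 / K) * d2      # LP scaled by K, HP by 1/K, as in Fig. 3

Each stage depends on the output of the previous one, so a straightforward implementation chains four multiplications and eight additions on the critical path (the 4Tm + 8Ta figure quoted above), which is exactly what the flipping architecture shortens.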
Fig.3 Data dependency diagram for lifting with filters having four factors (stages of α, β, γ and δ coefficients acting on input samples s0 ... s8, producing the HP output after scaling by 1/K and the LP output after scaling by K).

Fig.4 The direct mapped architecture [7] (adders A1-A8, multipliers M1-M4 with coefficients α, β, γ, δ, delay elements D and pipeline registers R1-R4, operating on the even samples s2i and odd samples s2i-1).

Fig. 5 The folded architecture in [2] (two pipeline stages with registers R1-R3; multipliers M1 and M2 switch between α/γ and β/δ, with output scaling by K and 1/K).
One way of improving this is by pipelining, which results in a significant increase in the number of registers. For instance, to pipeline the lifting-based (9, 7) filter such that the critical path is Tm + 2Ta, 6 additional registers are required. C. T. Huang et al. [1] proposed a very efficient way of solving this timing accumulation problem. The basic idea is to remove the multiplications along the critical path by scaling the remaining paths by the inverse of the multiplier coefficients. Fig. 7(a)-(b) describes how scaling at each level can reduce the multiplications in the critical path. The critical path is then Tm + 5Ta. The minimum critical path of Tm can be achieved with 5 pipelining stages using 11 pipelining registers (not shown in the figure). Detailed hardware analyses of the lossy (9, 7), integer (9, 7) and (6, 10) filters are included in [1]. Furthermore, since the flipping transformation changes the round-off noise considerably, techniques to address precision and noise problems have also been addressed in [1].

Fig. 6 Data-flow and register allocation of the MAC based architecture (inputs and registers R0-R8 across alternating Phase 1 / Phase 2, with the HP output scaled by 1/K and the LP output by K).

Fig 7 A flipping architecture [1]. (a) Original architecture; (b) scaling the coefficients (1/α, 1/β, 1/γ, 1/δ) to reduce the number of multiplications.
3.5 Efficient Folded Architecture [8]
The conventional lifting scheme adopts serial operation to process the intermediate data; thus, the critical path latency is very long. The way the intermediate data are processed determines the hardware scale and critical path latency of the implementing architecture. Since some intermediate data are on different paths, they can be calculated in parallel. With this parallel operation, the critical path latency is reduced and the number of registers is decreased; the architecture is therefore called efficient folded. The critical path latency is reduced to Tm + Ta.

4. Comparison of Performances
We can compare the performances of the different architectures on the basis of hardware requirement and critical path latency. The hardware complexity has been described in terms of data path components. The comparison of the different architectures is shown in Table I.

Table I [8] (for the 9/7 lifting-based DWT)

Architecture                  Multipliers  Adders  Registers  Critical path latency  Control complexity  Throughput rate (per cycle)
Direct                        4            8       6          4Tm + 8Ta              Simple              2 input/output
Direct + full pipeline        4            8       32         Tm                     Simple              2 input/output
Folded                        2            4       12         Tm + 2Ta               Medium              1 input/output
Flipping                      4            8       4          Tm + 5Ta               Complex             2 input/output
Flipping + 5-stage pipeline   4            8       11         Tm                     Complex             2 input/output
Efficient folded              2            4       10         Tm + Ta                Medium              1 input/output

Tm denotes the latency of a multiplier; Ta denotes the latency of an adder.
5. Conclusion
In this paper, we presented a comparison of the existing lifting-based implementations of the one-dimensional Discrete Wavelet Transform. We briefly described the principles behind the lifting scheme in order to better understand the different implementation styles and structures. We provided a systematic derivation of each architecture and evaluated them with respect to their hardware and timing requirements.

References
[1] C.T. Huang, P.C. Tseng, and L.G. Chen, "Flipping Structure: An Efficient VLSI Architecture for Lifting-Based Discrete Wavelet Transform," IEEE Transactions on Signal Processing, 2004, pp. 1080-1089.
[2] C.J. Lian, K.F. Chen, H.H. Chen, and L.G. Chen, "Lifting Based Discrete Wavelet Transform Architecture for JPEG2000," in IEEE International
Symposium on Circuits and Systems, Sydney,


Australia, 2001, pp. 445–448.
[3] W.H. Chang, Y.S. Lee, W.S. Peng, and C.Y.
Lee, ―A Line-Based, Memory Efficient and
Programmable Architecture for 2D DWT Using
Lifting Scheme,‖ in IEEE International Symposium
on Circuits and Systems, Sydney, Australia, 2001,
pp. 330–33
[4] S. Mallat, ―A Theory for Multiresolution Signal
Decomposition: The Wavelet Representation,‖
IEEE Trans. Pattern Analysis and Machine
Intelligence, vol. 11, no. 7, 1989, pp. 674–693.
[5] T. Acharya and P. S. Tsai, JPEG2000 Standard
for Image Compression: Concepts, Algorithms and
VLSI Architectures. John Wiley & Sons, Hoboken,
New Jersey, 2004.
[6] I. Daubechies and W. Sweldens, ―Factoring
Wavelet Transforms into Lifting Schemes,‖ The J.
of Fourier Analysis and Applications, vol. 4, 1998,
pp. 247–269.
[7] C.C. Liu,Y.H. Shiau, and J.M. Jou, ―Design and
Implementation of a Progressive
Image Coding Chip Based on the Lifted Wavelet
Transform,‖ in Proc. of the 11th VLSI Design/CAD
Symposium, Taiwan, 2000
[8] Weifeng Liu, Li Zhang, and Fu Li ―An Efficient
Folded Architecture for Lifting-Based Discrete
Wavelet Transform‖ IEEE TRANSACTIONS ON
CIRCUITS AND SYSTEMS—II: EXPRESS
BRIEFS, VOL. 56, NO. 4, APRIL 2009.
[10] Xiaonan Fan, Zhiyong Pang, Dihu Chen, and H.Z. Tan, "A Pipeline Architecture for 2-D Lifting-based Discrete Wavelet Transform of JPEG2000," IEEE, 2010.
A Novel Approach in Image De-noising for Salt & Pepper Noise

J S Bhat1, B N Jagadale2 and Lakshminarayan H K2
1Dept. of Physics, Karnatak University, Dharwad, India. Email: jsbhat@kud.ac.in
2Dept. of Electronics, Kuvempu University, Shankaragatta, India. Email: basujagadale@gmail.com; lakkysharmahk@yahoo.co.in

Abstract-The de-noising of an image corrupted by tend to blur edges and other fine image
salt and pepper has been a classical problem in details. Therefore nonlinear filters [1, 2] are
image processing. In the last decade, various
modified median filtering schemes have been
most preferred over linear filters due to their
developed, under various signal/noise models, to improved filtering performance in terms of
deliver improved performance over traditional noise suppression and edge preservation.
methods. In this paper a simple method called The standard median (SM) filter [3] is the
Inerpolate Median Filter (IMF) is proposed to one of the most robust nonlinear filters,
restore the images corrupted by salt and pepper
noise. The proposed method works better in
which exploits the rank-order information of
preserving image details by suppressing noise. The pixel intensities within filtering window.
experimental results show that the proposed This filter is very popular due to its edge
algorithm outperforms the conventional Median preserving characteristics and its simplicity
filter and other algorithms like mimum- in implementation. Various modifications of
maximumum exclusive mean filter (MMEM),
Adaptive median filtering(AMF) in terms of signal
the SM filter have been introduced, such as
to noise ratio. the weighted median (WM) [4] filter. By
incorporating noise detection mechanism
into the conventional median filtering
Key words- Image de-noising, Interpolate median approach, the filters like switching median
filter, nonlinear filter, salt & pepper noise
filters [5, 6] had shown significant
I. INTRODUCTION performance improvement. The median
filter, as well as its modifications and
An image is often corrupted by noise generalizations[7] are typically implemented
during its acquisition and transmission. invariably across an image. Examples
Image de-noising is used to reduce the noise include the mimum-maximumum exclusive
while retaining the important features in the mean filter (MMEM)[8], Florencio‟s [9],
image. Always there exists a tradeoff Adaptive median filter(AMF)[10]These
between the removed noise and the blurring filters have demonstrated excellent
in the image. The intensity of impulse noise performance but the main drawbacks of all
has the tendency of being either relatively these filters are, they are prone to edge
high or relatively low, which will degrade jitters in the cases where noise density is
the image quality. Therefore image de- high, large widow size results in blurred
noising is used as preprocessing to edge images and significant computational
detection, image segmentation and object complexity. To solve this problem, a
recognition etc. modified median filter algorithm called
A variety of filtering techniques has been Interpolate Median filter that employs
proposed for enhancing images degraded by Interpolated search in determining the
noise. The classical linear digital image desired central pixel value is proposed.
filters, such as averaging lowpass filters,
The paper is organized as follows: Section considered with the middle pixel value. The
II gives brief review of mean and median median filter, especially with larger window
filtering. The new approach, The Interpolate size destroys the fine image details due to its
Median filter technique is explained in
rank ordering process. Figure1. illustrates an
section III. Experimental results are
presented in section IV. Finally in section V, example calculation.
we give the conclusion.
Neighborhood values: 115, 119, 120, 123,
II MEAN & MEDIAN FILTERING 124, 125, 126, 127, 150
MEAN FILTER Median value: 124
Mean filtering is a simple and easy to
implement method of smoothing images, i.e.
it reduces the amount of intensity variation 110 125 125 130 140
between one pixel and the next. It is often 123 124 126 127 136
used to reduce noise in images.
The idea of mean filtering is simply to 114 120 150 125 134
replace each pixel value in an image with
118 115 119 123 134
the mean (`average') value of its neighbors,
including itself. The drawback of this 111 116 111 120 131
algorithm is, it has the effect of eliminating
pixel values which are unrepresentative of Fig. 1. Calculating the median value of a 3x3 pixel
their surroundings. With salt and pepper neighborhood. The central pixel value of 150 is rather
noise, image gets smoothed with a 3×3 unrepresentative of the surrounding pixels and is
replaced with the median value: 124
mean filter. Since the shot noise pixel values
are often very different from the surrounding
values, they tend to significantly distort the III INTERPOTATE MEDIAN FILTER
pixel average calculated by the mean filter.
The Interpolate Median filter method
MEDIAN FILTER considers each pixel in the image in turn and
The median filter is normally used to looks at its neighbors to decide whether or
reduce noise in an image like the mean not it is representative of its surroundings.
filter; however, it does well in preserving Instead of replacing the pixel value with the
useful details in the image. Unlike the mean median of neighboring pixel values, it
filter, the median filter considers each pixel replaces it with the interpolation of those
in the image and instead of simply replacing values.
the pixel value with the mean of neighboring
The interpolation is calculated by first
pixel values; it is replaced with the median
sorting all pixel values from surrounding
of those values. The median is calculated by
neighborhood into numerical order and then
first sorting all the pixel values from the
replacing the pixel being considered with
surrounding neighborhood into numerical
the interpolation pixel value. The calculation
order and then replacing the pixel being
of interpolation value is derived from the
Interpolation search technique used for searching the elements. We can also call it a non-linear filter or order-statistic filter, because its response is based on the ordering or ranking of the pixels contained within the mask. The advantage of this filter over the mean and median filters is that it gives a more robust average than both methods: for some pixels in the neighborhood it creates new pixel values, like the mean filter, and for some it does not create a new pixel value, like the median filter; it has the characteristics of both filters.

The algorithm uses the following formulae:

    Key = (a[l] + a[h]) / 2        (1)

where Key is the 'key'. Here we make an intelligent guess about the key, which is the mid value of the array a, and a[l], a[h] are the values of the bottom and top elements in the sorted array.

    Mid = l + (h - l) * ((Key - a[l]) / (a[h] - a[l]))        (2)

Here the value Mid gives the optimal mid-point of the array, and a[Mid] gives the interpolated value. This interpolated value is the new value of the pixel.

IV. EXPERIMENTAL RESULTS
To validate the proposed method, experiments are conducted on some natural grayscale test images like Lena, Barbara and Goldhill of size 512x512 at different noise levels. Table 1 illustrates the PSNRs of the six de-noising methods. The peak signal-to-noise ratio (PSNR) in decibels (dB) is defined as

    PSNR = 10 log10( 255² / MSE )  (dB)        (3)

with

    MSE = (1/mn) Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} [ I(i, j) - K(i, j) ]²        (4)

where I and K are the original image and the denoised image, respectively. Figure 2 shows the original test images used for the experiments and Figure 3 shows the Lena image corrupted by 20% salt and pepper noise.

Fig.2 The original test images with 512x512 pixels: (a) Lena; (b) Barbara; (c) Goldhill.

Fig 3. Lena image corrupted by salt & pepper noise (20%)
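For concreteness, the following Python sketch combines a 3x3 interpolate-median-style filter in the spirit of Eqs. (1)-(2) with the PSNR of Eqs. (3)-(4). It is an illustrative reading of the method, not the authors' implementation; the window size, the edge handling and the rounding of the interpolated index are assumptions.

    import numpy as np

    def psnr(original, denoised):
        """PSNR in dB per Eqs. (3)-(4) for 8-bit images."""
        mse = np.mean((original.astype(float) - denoised.astype(float)) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)

    def interpolate_median_filter(img, win=3):
        """Replace each pixel by the interpolated value a[Mid] of its sorted
        neighborhood, following Eqs. (1)-(2).  Edge pixels are left unchanged."""
        out = img.astype(float).copy()
        r = win // 2
        H, W = img.shape
        for y in range(r, H - r):
            for x in range(r, W - r):
                a = np.sort(img[y - r:y + r + 1, x - r:x + r + 1].ravel()).astype(float)
                lo, hi = 0, a.size - 1
                key = (a[lo] + a[hi]) / 2.0                              # Eq. (1)
                if a[hi] == a[lo]:                                       # flat neighborhood
                    out[y, x] = a[lo]
                    continue
                mid = lo + (hi - lo) * (key - a[lo]) / (a[hi] - a[lo])   # Eq. (2)
                out[y, x] = a[int(round(mid))]
        return out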
Table 1. PSNR performance of different algorithms for the Lena image corrupted with salt and pepper noise

Algorithm         PSNR (dB) at noise density
                  10%      20%      30%
MF (3x3)          31.19    28.48    25.45
MF (5x5)          29.45    28.91    28.43
MMEM [8]          30.28    29.63    29.05
Florencio's [9]   33.69    32.20    30.95
AMF (5x5) [10]    30.11    28.72    27.84
IMF (proposed)    33.86    30.59    25.75

V. CONCLUSION
In this paper, the proposed algorithm, called the Interpolate Median filter, employs interpolated search in determining the desired central pixel value. Interpolate median filtering is simple and easy to implement for image de-noising. The simulation results show that the proposed method performs significantly better than many other existing methods.

REFERENCES
[1] R. Boyle and R. Thomas, Computer Vision: A First Course, Blackwell Scientific Publications, 1988, pp. 32-34.
[2] E. Davies, Machine Vision: Theory, Algorithms and Practicalities, Academic Press, 1990, Chap. 3.
[3] I. Pitas and A. N. Venetsanopoulos, "Order statistics in digital image processing," Proc. IEEE, vol. 80, no. 12, pp. 1893-1921, Dec. 1992.
[4] D. R. K. Brownrigg, "The weighted median filter," Commun. ACM, vol. 27, no. 8, pp. 807-818, Aug. 1984.
[5] H. Hwang and R. A. Haddad, "Adaptive median filters: New algorithms and results," IEEE Trans. Image Process., vol. 4, no. 4, pp. 499-502, Apr. 1995.
[6] A. Bovik, Handbook of Image & Video Processing, 1st Ed., New York: Academic, 2000.
[7] http://homepages.inf.ed.ac.uk
[8] W. Y. Han and J. C. Lin, "Minimum-maximum exclusive mean (MMEM) filter to remove impulse noise from highly corrupted images," Electron. Lett., vol. 33, no. 2, pp. 124-125, 1997.
[9] T. Sun and Y. Neuvo, "Detail-preserving median based filters in image processing," Pattern Recognit. Lett., vol. 15, no. 4, pp. 341-347, Apr. 1994.
[10] A. Sawant, H. Zeman, D. Muratore, S. Samant, and F. DiBianka, "An adaptive median filter algorithm to remove impulse noise in X-ray and CT images and speckle in ultrasound images," Proc. SPIE, vol. 3661, pp. 1263-1274, Feb. 1999.
Content Based Image Retrieval System for Medical Images
Prof.K.Narayanan1, Shaista Khan2
Asst.Professor, Fr.Agnel College of Engg., University of Mumbai, India, unniknarayan@gmail.com
Fr.Agnel College of Engg., University of Mumbai, India khan_shaista26@yahoo.co.in

Abstract: The rapid development of technologies and steady using a simple metric and the images are compared with one
growing amounts of digital information highlight the need of another based on those extracted features. These three features
developing an accessing system. Content-based image indexing are integrated into one method to improve the retrieval
and retrieval has been an emerging research area from the last efficiency. Those images which have similar features would
few decades. In this, the project approaches content based image
have similar content as well. Focus of this project is on
retrieval using low level features such as color, shape and texture
to investigate samples of blood cells through the images to aid medical diagnosis in which CBIR can be used to detect the
diagnosing disease by identifying similar cases in a medical disease by identifying similar cases in a medical database.
database. Medical images are classified in terms of diseases and
by using query image the relevant image is retrieved along with II. PROPOSED METHOD
the classification of disease. The histogram of red, green, and Content-based Image Retrieval (CBIR) consists of
blue color components is analyzed. The wavelet decomposition is retrieving the most visually similar images to a given query
also used to analyze texture. In addition, morphological image from a database of images. CBIR from medical image
operations such as opening and closing are applied to analyze databases does not aim to replace the physician by predicting
object shape. Lastly, color, texture, and shape in image retrieval
the disease of a particular case but to assist him/her in
are integrated in order to increase the retrieval accuracy.
diagnosis. The visual characteristics of a disease carry
Keywords: Text Based Image Retrieval (TBIR), Content Based diagnostic information and oftentimes visually similar images
Image Retrieval (CBIR) correspond to the same disease category. By consulting the
output of a CBIR system, the physician can gain more
I. INTRODUCTION confidence in his/her decision or even consider other
In today world the word knowledge has exchanged its possibilities.
meaning with the information and hence to the data. In However, due to the existence of a large number of
addition to it the rapid development of technologies in digital medical image acquisition devices, medical images are
field and computing hardware makes the digital acquisition of distinct and require a specific design of CBIR systems. The
information to be more in demand and popular. goals of medical information systems have been defined to
Consequently many digital images are being captured and deliver the needed information at the right time, the right place
stored such as medical images, architectural and engineering to the right person in order to improve the quality and
images, advertising, design and fashion images, etc., and as a efficiency of care processes. In the medical domain, images
result large image databases are being created and used in from the same disease class as the query image must be
many applications. However, the focus of our study is on retrieved in order to help the doctor in diagnosis. The images
medical images in this work. A large number of medical in the medical database are labeled by a specialist to ensure
images in digital format are generated by hospitals and that they are less subjective than those of the generic CBIR.
medical institutions every day. So, how to make use of this Figure 1 represents the framework of the CBIR system. This
huge amount of images effectively becomes a challenging level of retrieval is based on the primitive features. The
problem. following are some of the primitive features such as
In order to overcome this problem the most common
approach that had been used previously for image retrieval Color
from a database was Text Based Image Retrieval (TBIR). Texture
But later introduced image retrieval based on content Shape or the spatial location of image element.
which is known as Content Based Image Retrieval (CBIR). In
TBIR, all medical images are labeled with text which is A. COLOR ANALYSIS
manmade and may be different for individuals for the similar Color is one of the most important features that make the
images. Another drawback of TBIR is that all images image recognition possible by human. It is a property that
especially medical images are difficult to be described by text. depends on the reflection of light to the eye and the processing
Drawback of TBIR can be overcome by CBIR. of that information in the brain. Color will be used every day
In CBIR, the features from images are extracted using to differentiate objects, places, etc. where colors are defined in
different methods. The features include color, texture and three dimensional color spaces such as RGB (Red, Green, and
shape. Color histogram is the main method to represent the Blue), HSV(Hue, Saturation, and Value) or HSB (Hue,
color information of the image. A method called the pyramid- Saturation, and Brightness). Most image formats use the RGB
structured wavelet transform for texture classification is used. color space to store information. Most image formats such as
The number of oval objects in the query image is calculated
Most image formats, such as JPEG, BMP and GIF, use the RGB color space to store information. The RGB color space is defined as a unit cube with red, green, and blue axes. Thus, a vector with three coordinates represents a color in this space: it represents black when all three coordinates are set to zero and white when all three coordinates are set to 1.

Figure 1: Proposed CBIR system

1) Algorithm for Color Analysis:
i. Color histograms of the query image and of the images in the database are calculated and put into two different vectors.
ii. Use these vectors to calculate the Bhattacharya coefficient of the query image with each image in the database.
iii. The Bhattacharya coefficient is 1 for a completely similar image and 0 indicates that there is no similarity between two images; it ranges from 0 to 1.
In CBIR, the color histogram is the main method to represent the color information of an image. A color histogram is a type of bar graph, where each bar represents a particular color of the color space being used. A histogram is a probability density function: it represents a discrete frequency distribution for a grouped dataset, which includes different discrete values that are grouped into a number of intervals [12]. An image histogram refers to the probability density function of the image intensities. This is extended for color images to capture the intensities of the three color channels.
In this project the color histograms of the query image and of the images in the database are calculated, put into two different vectors and compared using the Bhattacharya coefficient. The Bhattacharya coefficient is an approximate measurement of the amount of overlap between two statistical samples and can be used to determine the relative closeness of the two samples being considered. The distance measure is given by

    BhattacharyaCoeff = Σ (i = 1 to n) √(ai · bi)     (1)

where, considering the samples a and b, n is the number of partitions, and ai, bi are the numbers of members of samples a and b in the ith partition. The Bhattacharya coefficient ranges from 0 to 1, where 1 represents completely similar images and 0 indicates that there is no similarity between the two images [9].

B) TEXTURE ANALYSIS
Texture is a measure of the variation of the intensity of a surface, quantifying properties such as smoothness, coarseness and regularity. The most popular representation of texture is the wavelet transform. A method called the pyramid-structured wavelet transform is used for texture classification. It recursively decomposes the sub-signals in the low frequency channels, and is therefore mostly suitable for signals consisting of components with information concentrated in the lower frequency channels. Since most of the information exists in the lower sub bands of the image, due to natural image properties, the pyramid-structured wavelet transform is highly sufficient. Using the pyramid-structured wavelet transform [6], the texture image is decomposed into four sub images, in the low-low, low-high, high-low and high-high sub bands. At this point, the energy level of each sub band is calculated; this is the first level decomposition. In this study, a fifth level decomposition is obtained by using the low-low sub band for further decomposition. The reason for this is the basic assumption that the energy of an image is concentrated in the low-low band. The wavelet function used is the Daubechies wavelet.
1) Algorithm for Texture Analysis:
i. Decompose the image using the pyramid-structured wavelet transform (up to the fifth level of decomposition).
ii. Build a histogram of the transformed image coefficients in each sub band.
iii. Calculate the signature vector for each image by concatenation of these histograms.
iv. Compute the L1 distance, using equation (2), of the query image with all images in the database.
In order to characterize the image texture at different scales, the distribution of the wavelet coefficients in each sub band of such a decomposition is characterized by an image signature. An image signature is defined by building a histogram of the transformed image coefficients in each sub band. As images are decomposed with a pyramidal scheme on Nl levels, they consist of 3·Nl + 1 sub bands: there are 3 sub bands of details at each scale l <= Nl (lHH, lHL and lLH) plus an approximation (NlLL); 3·Nl + 1 histograms are thus built. The signature is a vector formed by the concatenation of these histograms. The distance used to compare two images Im1 and Im2 is based on the L1 distance between the histograms, or signatures.
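As a concrete illustration of the two matching measures just described, the following minimal Python sketch (not part of the original paper; it assumes 8-bit RGB images and NumPy, and the helper names are hypothetical) computes a normalized color histogram, the Bhattacharya coefficient of equation (1), and an unweighted L1 distance between two concatenated histogram signatures.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Concatenated per-channel histogram of an 8-bit RGB image,
    normalized so that the whole feature vector sums to 1."""
    feats = [np.histogram(image[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    v = np.concatenate(feats).astype(float)
    return v / v.sum()

def bhattacharyya_coeff(a, b):
    """Equation (1): sum over bins of sqrt(a_i * b_i);
    1 means identical distributions, 0 means no overlap."""
    return float(np.sum(np.sqrt(a * b)))

def l1_distance(sig1, sig2):
    """Simplified, unweighted L1 distance between two concatenated
    histogram signatures (the per-sub-band weights are taken as 1)."""
    return float(np.sum(np.abs(np.asarray(sig1) - np.asarray(sig2))))

# Toy usage with two random "images"
rng = np.random.default_rng(0)
img_q  = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # query image
img_db = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # database image
hq, hdb = color_histogram(img_q), color_histogram(img_db)
print(bhattacharyya_coeff(hq, hdb))  # near 1.0: both have near-uniform color content
print(l1_distance(hq, hdb))          # small for similar histograms
```

In the proposed system the same comparison would be repeated between the query image and every database image, and the tunable per-sub-band weights of equation (2) below could replace the unit weights used in this sketch.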
    d(Im1, Im2) = Σ (t = 1 to 3·Nl+1) λt · ||Ht¹ − Ht²||     (2)

    ||Ht¹ − Ht²|| = Σ (j = 1 to NB) |Ht¹(j) − Ht²(j)|

where Htⁿ(j) is the value of the jth bin of the normalized histogram of image n, and (λt), t = 1 … 3·Nl+1, is a set of tunable weights.

C) SHAPE ANALYSIS
Shape may be defined as the characteristic surface configuration of an object; an outline or contour. It permits an object to be distinguished from its surroundings by its outline.
1) Algorithm for cell geometry analysis:
i. Convert the image to black and white in order to prepare for boundary tracing using bwboundaries, and threshold the image.
ii. Remove the noise.
iii. Find the boundaries.
iv. Determine the number of oval objects in the query image and in all the images in the database.
Based on the domain of this project, which is blood cell images, the number of round objects in the image needs to be determined. To achieve this, the image is converted to black and white in order to prepare for boundary tracing using the bwboundaries function in MATLAB. Then a morphological operator such as opening is used to remove the small connected objects which do not belong to the objects of interest. The area and perimeter of each object inside the image are used to form a simple metric indicating the roundness of an object, using the following formula:

    Metric = 4π · area / perimeter²     (3)

This metric is equal to one only for a circle and is less than one for any other shape. The discrimination process can be controlled by setting an appropriate threshold; here the threshold is taken as 0.7.
Shape is an important feature because diseases are classified depending on the shape of the cell; for example, sickle-cell disease, or sickle-cell anaemia, is an autosomal co-dominant genetic blood disorder characterized by red blood cells that assume an abnormal, rigid, sickle shape.
For cell geometry analysis, once the number of oval objects in the query image is calculated, its value is compared with the number of oval objects in all the images in the database. The images closest to the query image are then displayed.
Finally, the results of all three algorithms are combined and sorted to give the best search result along with the disease.

III. RESULT
In our classification system, the ground truth database is made of 25 blood cell images with two different classifications. Classification is based on the type of disease, i.e., sickle cell disease and cancer.
Sickle cell disease is a hereditary blood disease resulting from a single amino acid mutation in the red blood cells, a blood condition of anemia. People with sickle cell disease have red blood cells that contain mostly hemoglobin S, an abnormal type of hemoglobin. Sometimes these red blood cells become crescent shaped ("sickle shaped").
Cancer of the myeloid line of blood cells is characterized by the rapid growth of abnormal white blood cells.
In order to increase the accuracy of the retrieval result in the proposed system, the results of the color, texture and cell geometry features are combined, so that only images which are common to all three feature extractions are shown as the final result. The advantages of this system are high accuracy and precision as well as simplicity of the algorithm.
The query image is a blood cell sample image of a patient for diagnosis of disease. The search result shows the type of disease the patient is suffering from. If the patient is not suffering from either of these two diseases then the result shows that the patient is not suffering.

Figure 2: Result showing whether the patient is suffering from a disease or not.

IV. CONCLUSION
The rapid growth in the sizes of image databases highlights the need for developing an effective and efficient retrieval system. This development started with retrieving images using textual annotation, called TBIR, but later image retrieval based on content, known as CBIR, was introduced. CBIR overcomes the drawbacks of TBIR.
Our focus is on medical diagnosis, in which CBIR can be used to aid diagnosis by identifying similar past cases in a medical database of medical images, mainly blood cell images. These images are classified in terms of diseases, and images from the same disease class as the query image must be retrieved in order to help the doctor in diagnosis.
This work investigates the approaches of CBIR based on low level features such as color, shape and texture analysis. In order to increase the accuracy of retrieval, the results of color, texture and shape are combined and the combined result is shown. To diagnose disease, 25 blood cell images are considered in the database, classified based on type of disease, for example sickle cell disease and cancer. For a given query image, the retrieved image from the database shows which type of disease the patient is suffering from.
REFERENCES
[1] “Old fashion text-based image retrieval uses FCA” by Ahamd, I.;
Taek-Sueng Jang, published in Image Processing, 2003.ICIP
2003.Proceedings.2003 International Conference on Image Processing.
[2] “ Content based medical image retrieval based on pyramid structure
wavelet” by Aliaa.A.A.Youssif*, A.A.Darwish an R.A.Mohamed
published in IJCSNS International Journal of Computer Science and
Network Security, VOL.10 No.3, March 2010
[3] “Content-based image retrieval from large medical databases” by Kak,
A. Pavlopoulou, C. published in 3D Data Processing Visualization and
Transmission, 2002,Proceedings in First International Symposium.
[4] “An Adaptive, Knowledge-Driven Medical Image Search for Interactive
Diffuse Parenchymal Lung Disease Quantification” by Yimo Tao,
Xiang Sean Zhou.
[5] “WEB-BASED MEDICAL IMAGE RETRIEVAL SYSTEM” by Ivica
Dimitrovski, Dejan Gorgevik, Suzana Loskovska.
[6] Paper on “Wavelet Optimization for Content-Based Image Retrieval in
Medical Database “by G. Quellec M. Lamard, G. Cazuguel B.
Cochener, C. Roux.
[7] “Application of Wavelet Transform and its Advantage Compared to
Fourier Transform” by M. Sifuzzaman1, M.R. Islam1 and M.Z Ali
Journal of Physical Sciences, Vol. 13, 2009, 121-134.
[8] “Automatic Detection of Red Blood Cells in Hematological Images Using
Polar Transformation and Run-length Matrix” by S. H. Rezatofighi*, A.
Roodaki, R. A. Zoroofi R. Sharifian H. Soltanian-Zadeh published in
ICSP2008 Proceedings. ( 978-1-4244-2179-4/08/$25.00 ©2008 IEEE)
[9] “Content-based Image Retrieval for Blood Cells” by Mohammad Reza
Zare, Raja Noor Ainon, Woo Chaw Seng, published in 2009 Third Asia
International Conference on Modelling & Simulation.
[10] “Digital Image Search & Retrieval uses FFT Sectors of Color Images”
by H. B. Kekre, Dhirendra Mishra published in International Journal on
Computer Science and Engineering.
[11] “Content Based Image Retrieval using Contourlet Transform” by
Ch.Srinivasa rao ,S. Srinivas kumar , B.N.Chatterji in ICGST-GVIP
Journal, Volume 7, Issue 3, November 2007.
[12] Paper on “Discrete Wavelet Transforms: Theory and Implementation
“by Tim Edwards.
[13] “A Content-Based Retrieval System for Blood Cells Images” by Woo
Chaw Seng and Seyed Hadi Mirisaee in 2009 International Conference
on Future Computer and Communication.
[14] “A CBIR METHOD BASED ON COLOR-SPATIAL FEATURE” by
Zhang Lei, Lin Fuzong, Zhang Bo.
AUDIO +
Abhay Kumar
Research Scholar at Associated Electronics Research Foundation, Phase-II Noida (U.P.)
abhay.2t@gmail.com
Abstract--AUDIO+ is an electronic device that alters how a musical instrument or other audio source sounds and can best be termed a "Digital Effect Processor". Some effects subtly "colour" a sound, while others transform it dramatically. Effects can be used during live performances (typically with keyboard, electric guitar or bass) or in the studio, i.e. a faithful reproduction of the sound signal is heard when AUDIO+ is used in the audio line.
AUDIO+ has a unique ability to modify sound signals and make them soothing to every human ear. The device is provided with a control panel of "Volume", "Bass", "Treble" and "Balance" to make it suitable for ears sensitive to high and low frequency sound. AUDIO+ is an easy to use portable device with a single signal input/output port and an internal battery power supply.

Keywords: Digital audio players, Digital signal processors, Mixed analog digital integrated circuits, Digital filters, Equalizers, Digital controls.

I. INTRODUCTION
AUDIO+ is all about the musical sound box, which can take raw mp3 or mpeg data and process it digitally. What is interesting is that it can sample and play many sound formats, with sampling rates from 8 kHz to 96 kHz, which is more than enough to play any sound format. It improves sound quality with a significant reduction of noise and Dolby sound effects.

II. SYSTEM DESCRIPTION
AUDIO+ is built around a combination of ICs from Texas Instruments and National Semiconductor. The DRV134 and INA2134 from Texas Instruments are used to design a circuit which enhances sound performance. Very low distortion, low noise, and wide bandwidth provide superior performance in high quality audio applications.
The LM1036 from National Semiconductor is a DC controlled tone (bass/treble), volume and balance circuit for stereo applications in car radio, TV and audio systems. An additional control input allows loudness compensation to be simply effected.

III. DRV134
The DRV134 is a differential output amplifier that converts a single-ended input to a balanced output pair. These balanced audio drivers consist of high performance op amps with on-chip precision resistors. They are fully specified for high performance audio applications, including low distortion (0.0005% at 1 kHz). Wide output voltage swing and high output drive capability allow use in a wide variety of demanding applications. They easily drive the large capacitive loads associated with long audio cables. Laser-trimmed matched resistors provide optimum output common-mode rejection (typically 68 dB), especially when compared to circuits implemented with op amps and discrete precision resistors. In addition, the high slew rate (15 V/μs) and fast settling time (2.5 μs to 0.01%) ensure excellent dynamic response. The DRV134 has excellent distortion characteristics. Noise is below 0.003% throughout the audio frequency range under various output conditions. A gain of 6 dB is seen at the output of the differential amplifier.

Fig 1: Gain vs Frequency graph for DRV134

This project is supported by the Associated Electronics Research Foundation. Mr. Abhay Kumar is with the Associated Electronics Research Foundation, C-53, Phase-II, Noida (U.P.) as a Research Scholar (Phone No. +919650109759, Email: abhay.2t@gmail.com).
IV. INA2134
The INA2134 is a differential line receiver consisting of high performance op amps with on-chip precision resistors. It is fully specified for high performance audio applications and has excellent ac specifications, including low distortion (0.0005% at 1 kHz) and high slew rate (14 V/μs), assuring good dynamic response. In addition, wide output voltage swing and high output drive capability allow use in a wide variety of demanding applications. The dual version features completely independent circuitry for lowest crosstalk and freedom from interaction, even when overdriven or overloaded. The INA2134 on-chip resistors are laser trimmed for accurate gain and optimum common-mode rejection. It has unity gain.

Fig 2: Gain vs Frequency graph for INA2134

V. LM1036
The LM1036 has four control inputs that provide control of the bass, treble, balance and volume functions through application of DC voltages from a remote control system or, alternatively, from four potentiometers which may be biased from a zener regulated power supply. The LM1036 has the following features:
• Large volume control range, 75 dB typical
• Tone control, ±15 dB typical
• Channel separation, 75 dB typical
• Low distortion, 0.06% typical for an input level of 0.3 Vrms
• High signal to noise, 80 dB typical for an input level of 0.3 Vrms
The LM1036 gives the user the ability to control each component of the sound with the help of multi-turn potentiometers. The graphs given below illustrate the different control operations.

Fig 3: Volume control of the LM1036
Fig 4: Tone control of the LM1036
Fig 5: Balance control of the LM1036
VI. DRV134 SIMULATION

Fig 6: TINA-TI simulation window for the DRV134

The above result shows how a circuit based on the DRV134 can be built in the TINA-TI software. The input to the circuit has to be in the range of 8 kHz to 96 kHz and the input voltage should be 200 mVrms to 2 Vrms. The result can be judged by taking the voltage at VM1 and VM2. The output is balanced owing to the DRV134 acting as a balance modulator.

Fig 7: Noise analysis of the DRV134 (output noise vs frequency, 1 Hz to 1 MHz)

The above figure shows the noise analysis of the DRV134 circuit. The noise reduces significantly as the frequency increases.

Fig 8: DC analysis of the DRV134 (output vs input voltage)

Fig 8 shows how the input at the DRV134 can be balanced and the input line can be modulated.

VII. SIMULATION OF DRV134 WITH INA137

Fig 9: TINA-TI simulation window for the DRV134 and INA137

The above diagram shows how the balanced output can be amplified and two channels can be made using the INA137 (Gain = 1/2) and INA134 (Gain = 1).
Fig 10: Noise analysis of the DRV134 with INA137 (output noise vs frequency)

The above graph shows how the noise can be significantly reduced after the introduction of the INA137 or INA134. This shows how the input signal can be balanced and amplified to reduce the noise effect on the desired signal.

Fig 11: DC analysis of the DRV134 with INA2137 (output voltage vs input voltage)

Fig 11 shows that the output voltage ranges between 200 mVrms and 2 Vrms for a sampling frequency of 8 kHz to 96 kHz.

VIII. CONCLUSION
AUDIO+ maintains the originality of five major components of sound signals:
a. Pitch: the frequency of sound signals.
   • Low frequencies (Bass): make the sound powerful.
   • Midrange frequencies: give sound its energy. Human beings are more sensitive to midrange frequencies.
   • High frequencies (Treble): give sound its presence and life-like quality and let us feel that we are close to the sound source.
b. Timbre: that unique combination of fundamental frequency, harmonics, and overtones that gives each voice, musical instrument, and sound effect its unique colouring and character.
c. Harmonics: when an object vibrates it propagates sound waves of a certain frequency. This frequency, in turn, sets in motion frequency waves called harmonics.
d. Loudness: the loudness of a sound depends on the intensity of the sound stimulus.
e. Rhythm: a recurring sound that alternates between strong and weak elements.
In combination with all the above components of sound, AUDIO+ concentrates on the high frequencies with 6 dB overall gain and gives presence to the original reproduction of sound; it is therefore most useful for high quality audio systems and long distance telephone calls.

IX. FUTURE WORK
AUDIO+ has a great advantage in audio systems and audio communication, and hence there is an opportunity to use it in digital communication and VOIP phones.

X. REFERENCES
1) Software support and information about the digital speakers from: Texas Instruments (www.TI.com)
2) Audio: www.ti.com/audio
3) Data Converters: dataconverter.ti.com
4) DSP: dsp.ti.com
5) Digital Control: www.ti.com/digitalcontrol
6) Clocks and Timers: www.ti.com/clocks
7) Logic: logic.ti.com
8) Power Mgmt: power.ti.com
9) Microcontrollers: microcontroller.ti.com
10) Hardware support from: Farnell India (http://in.farnell.com/)
11) Audio codec: www.ti.com/tlv320aic3101.pdf
12) Audio digital processor: www.ti.com/tas3103.pdf
13) Audio line driver: www.ti.com/drv134.pdf
14) Input amplifier: www.ti.com/ina2134.pdf
15) Voltage regulator: www.ti.com/tps62007.pdf, www.ti.com/tps74801.pdf, www.ti.com/tps74701.pdf
16) Control IC: www.national.com
Speaker Identification

Prerana & Aditi Choudhary

Abstract- Humans use voice recognition every day to distinguish between speakers and genders. Other animals use voice recognition to differentiate among sound sources. Speaker recognition is the process of automatically recognizing who is speaking on the basis of individual information included in speech waves. This technique makes it possible to use the speaker's voice to verify their identity and control access to services such as voice dialing, banking by telephone, telephone shopping, database access services, information services, voice mail, security control for confidential information areas, and remote access to computers.
Speaker identification has been a wide and attractive area of research, and many works based on speech features have been proposed. In a speaker recognition system there are three important components: the feature extraction component, the speaker models and the matching algorithm.
The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking the utterance. As speech interaction with computers becomes more pervasive in activities such as the telephone, financial transactions and information retrieval from speech databases, the utility of automatically identifying a speaker based solely on vocal characteristics increases.

FEATURES OF SPEECH
One might wonder what information is needed to classify between genders or to classify the speech of multiple speakers. In fact, speech contains a great deal of information that allows a listener to determine both gender and speaker identity. In addition, speech can reveal much about the emotional state and age of the speaker. For example, an Israeli engineer created a signal processing lie detector system that outperforms the traditional polygraph test.

PITCH
Pitch is the most distinctive difference between male and female speakers. A person's pitch originates in the vocal cords/folds, and the rate at which the vocal folds vibrate is the frequency of the pitch. So, when the vocal folds oscillate 300 times per second, they are said to be producing a pitch of 300 Hz. When the air passing through the vocal folds vibrates at the frequency of the pitch, harmonics are also created. The harmonics occur at integer multiples of the pitch and decrease in
amplitude at a rate of 12 dB per octave, the measure between each harmonic.
The reason pitch differs between the sexes is the size, mass, and tension of the laryngeal tract, which includes the vocal folds and the glottis (the spaces between and behind the vocal folds). Just before puberty, the fundamental frequency, or pitch, of the human voice is about 250 Hz, and the vocal fold length is about 10.4 mm. After puberty the human body grows to its full adult size, changing the dimensions of the larynx area. The vocal fold length in males increases to about 15-25 mm, while the female vocal fold length increases to about 13-15 mm. These increases in size correlate with decreased frequencies coming from the vocal folds. In males, the average pitch falls between 60 and 120 Hz, and the range of a female's pitch can be found between 120 and 200 Hz. Females have a higher pitch range than males because the size of their larynx is smaller. However, these are not the only differences between male and female speech patterns.

FORMANT FREQUENCIES
When sound is emitted from the human mouth, it passes through two different systems before it takes its final form. The first system is the pitch generator, and the next system modulates the pitch harmonics created by the first system. Scientists call the first system the laryngeal tract and the second system the supralaryngeal/vocal tract. The supralaryngeal tract consists of structures such as the oral cavity, nasal cavity, velum, epiglottis, tongue, etc.
When air flows through the laryngeal tract, the air vibrates at the pitch frequency formed by the laryngeal tract as mentioned above. Then the air flows through the supralaryngeal tract, which begins to reverberate at particular frequencies determined by the diameter and length of the cavities in the supralaryngeal tract. These reverberations are called "resonances" or "formant frequencies". In speech, resonances are called formants. So, those harmonics of the pitch that are closest to the formant frequencies of the vocal tract will become amplified while the others are attenuated.
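To make the pitch discussion above concrete, the following short Python sketch (an illustration only, not taken from the paper; the function name and parameter values are assumptions) estimates the pitch of a voiced frame from the strongest autocorrelation peak inside the typical human pitch range.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame by finding the
    strongest autocorrelation peak within the expected pitch range."""
    frame = frame - np.mean(frame)                  # remove DC offset
    corr = np.correlate(frame, frame, mode="full")  # autocorrelation
    corr = corr[len(corr) // 2:]                    # keep non-negative lags
    lag_min = int(fs / fmax)                        # shortest allowed period
    lag_max = min(int(fs / fmin), len(corr) - 1)    # longest allowed period
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return fs / lag                                 # pitch in Hz

# Example: a synthetic 150 Hz "voiced" frame sampled at 8 kHz
fs = 8000
t = np.arange(0, 0.04, 1.0 / fs)
frame = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
print(round(estimate_pitch(frame, fs), 1))  # about 151 Hz for this 150 Hz frame
```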
Figure 1(a): Speaker identification — input speech -> feature extraction -> similarity against reference models (Speaker #1 ... Speaker #N) -> maximum selection -> identification result (Speaker ID)

INTRODUCTION - Most signal processing involves processing a signal without concern for the quality or information content of that signal. In speech processing, speech is processed on a frame-by-frame basis, usually only with the concern that the frame is either speech or silence. Usable speech frames can be defined as frames of speech that contain higher information content compared to unusable frames with reference to a particular application. We have been investigating a speaker identification system to identify usable speech frames; we then determine a method for identifying those frames as usable using a different approach. However, knowing how reliable the information in a frame of speech is can be very important and useful. This is where usable speech detection and extraction can play a very important role.

PARADIGMS OF SPEECH RECOGNITION
1. Speaker recognition - Recognize which member of a population of subjects spoke a given utterance.
2. Speaker verification - Verify that a given speaker is the one he claims to be. The system prompts the user who claims to be the speaker to provide an ID. The system verifies the user by comparing the codebook of the given speech utterance with that given by the user. If it matches the set threshold then the identity claim of the user is accepted, otherwise it is rejected.
3. Speaker identification - Detect a particular speaker from a known population. The system prompts the user to provide a speech utterance. The system identifies the user by comparing the codebook of the speech utterance with those stored in the database and lists the most likely speakers that could have given that speech utterance.

At the highest level, all speaker recognition systems contain two main modules (refer to Figure 1): feature extraction and feature matching. Feature extraction is the process that extracts a small amount of data from the voice signal that can later be used to represent each speaker. Feature matching involves the actual procedure to identify the unknown speaker by comparing extracted features from his/her voice input with the ones from a set of known speakers.
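As a rough sketch of the "maximum selection" step of Figure 1(a), the code below (illustrative only; the paper does not specify a particular distance measure or codebook format, so both are assumptions here) scores an unknown utterance against each stored reference model and returns the closest speaker.

```python
import numpy as np

def identify_speaker(test_features, reference_models):
    """Compare the feature vectors of an unknown utterance with each
    speaker's stored codebook and return the most likely speaker."""
    best_speaker, best_score = None, np.inf
    for speaker_id, codebook in reference_models.items():
        # average distance from each test vector to its nearest codeword
        dists = [np.min(np.linalg.norm(codebook - v, axis=1)) for v in test_features]
        score = float(np.mean(dists))
        if score < best_score:
            best_speaker, best_score = speaker_id, score
    return best_speaker

# Toy usage with random 12-dimensional feature vectors
rng = np.random.default_rng(1)
models = {"speaker_1": rng.normal(0, 1, (32, 12)),
          "speaker_2": rng.normal(3, 1, (32, 12))}
test = rng.normal(3, 1, (20, 12))      # utterance resembling speaker_2
print(identify_speaker(test, models))  # -> "speaker_2"
```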
Figure 1(b): Speaker verification — input speech -> feature extraction -> similarity against the reference model (Speaker #M) -> threshold decision -> verification result (Accept/Reject)

Figure 1. Basic structures of speaker recognition systems

Figure 1 shows the basic structures of speaker identification and verification systems. The system that we will describe is classified as a text-independent speaker identification system, since its task is to identify the person who speaks regardless of what is being said.

Concepts of speaker identification systems:
Speaker identification systems may be classified into two categories based on their principle of operation.
Text-dependent systems make use of a fixed utterance for test and training and rely on specific features of the test utterance in order to effect a match.
Text-independent systems make use of different utterances for test and training and rely on long term statistical characteristics of speech for making a successful identification.
Text-dependent systems require less training than text-independent systems and are capable of producing good results with a fraction of the test speech sample required by a text-independent system. The pitch period, or fundamental frequency, of speech varies from one individual to another; pitch frequency is high for female voices and low for male voices. This suggests that pitch might be a suitable parameter to distinguish one speaker from another, or at least to narrow down the set of probable matches. The analysis of the frequency spectrum of the test utterance provides valuable information about speaker identification. The spectrum contains both pitch harmonics and vocal-tract resonant peaks, making it possible to identify the speaker with a high probability of being correct. The vocal-tract filter parameters (filter coefficients) can also be used to good effect for speaker identification. This is due to the fact that different speakers have different vocal-tract configurations for the same utterance, depending on their physical and emotional conditions, as well as whether the speaker is a native or non-native speaker.
In any text-dependent speaker identification system, an important decision is the choice of test utterance. The source-filter model is most accurate at representing voiced sounds, such as the vowels. Vowels have a definite, consistent pitch period. The vocal-tract configuration for vowel utterances exhibits a clear formant (resonant) structure. The frequency spectrum corresponding to vowel utterances therefore contains a wealth of information that can be used for speaker
identification. In general, it is difficult to guarantee one hundred percent recognition even with the best speaker identification approaches.
Generally speaking, two parameters may be used to describe the overall performance of a speaker identification system.
A false acceptance occurs when the system incorrectly identifies an unregistered individual as an enrolled one, or when one registered individual is mistaken for another. The FAR (False Acceptance Ratio) is the ratio of the number of false acceptances to the total number of trials. The value of FAR can be reduced by setting a strict, low threshold.
A false rejection occurs when the system incorrectly refuses to identify an individual who is registered with the system. The FRR (False Rejection Ratio) is the ratio of the number of false rejections to the total number of trials. Setting the threshold to a liberal, high value can minimize the value of FRR. The requirements for low FAR and low FRR are conflicting, and both parameters cannot be simultaneously lowered. However, a low FAR is vital for good speaker identification systems, and most systems are biased for good FAR performance at the expense of FRR.

APPROACHES TO SPEECH RECOGNITION
1. The Acoustic Phonetic approach
2. The Pattern Recognition approach
3. The Artificial Intelligence approach

A. The Acoustic Phonetic Approach
The acoustic phonetic approach is based upon the theory of acoustic phonetics, which postulates that there exists a set of finite, distinctive phonetic units in spoken language and that the phonetic units are broadly characterized by a set of properties that can be seen in the speech signal, or its spectrum, over time. Even though the acoustic properties of phonetic units are highly variable, both with the speaker and with the neighboring phonetic units, it is assumed that the rules governing the variability are straightforward and can readily be learned and applied in practical situations. Hence the first step in this approach is called the segmentation and labeling phase. It involves segmenting the speech signal into discrete (in time) regions where the acoustic properties of the signal are representative of one of several phonetic units or classes, and then attaching one or more phonetic labels to each segmented region according to its acoustic properties.
For speech recognition, a second step is required. This second step attempts to determine a valid word (or string of words) from the sequence of phonetic labels produced in the first step which is consistent with the constraints of the speech recognition task.
B. The Pattern Recognition Approach
The pattern recognition approach to speech is basically one in which the speech patterns are used directly without explicit feature determination (in the acoustic-phonetic sense) and segmentation. As in most pattern recognition approaches, the method has two steps: training of speech patterns, and recognition of patterns via pattern comparison. Speech is brought into the system via a training procedure. The concept is that if enough versions of a pattern to be recognized (be it a sound, a word, a phrase, etc.) are included in the training set provided to the algorithm, the training procedure should be able to adequately characterize the acoustic properties of the pattern (with no regard for, or knowledge of, any other pattern presented to the training procedure). This type of characterization of speech via training is called pattern classification. Here the machine learns which acoustic properties of the speech class are reliable and repeatable across all training tokens of the pattern. The utility of this method lies in the pattern comparison stage, in which the unknown speech is compared with each possible pattern learned in the training phase and classified according to the accuracy of the match of the patterns.
Advantages of the Pattern Recognition Approach:
• Simplicity of use. The method is relatively easy to understand. It is rich in mathematical and communication theory justification for the individual procedures used in training and decoding. It is widely used and best understood.
• Robustness and invariance to different speech vocabularies, users, feature sets, pattern comparison algorithms and decision rules. This property makes the algorithm appropriate for a wide range of speech units, word vocabularies, speaker populations, background environments, transmission conditions, etc.
• Proven high performance. The pattern recognition approach to speech recognition consistently provides high performance on any task that is reasonable for the technology and provides a clear path for extending the technology in a wide range of directions.

C. The Artificial Intelligence Approach
The artificial intelligence approach to speech is a hybrid of the acoustic phonetic approach and the pattern recognition approach, in that it exploits ideas and concepts of both methods. The artificial intelligence approach attempts to mechanize the recognition procedure according to the way a person applies intelligence in visualizing, analyzing and finally making a decision on the measured acoustic features. In particular, among the techniques used within this class of methods is the use of an expert system for segmentation and labeling. The use of neural networks could represent a separate structural approach to speech recognition, or could be regarded as an implementational architecture that may be incorporated in any of the above classical approaches.
FUTURE SCOPE
A range of future improvements is possible:
• Speech independent speaker identification
• The number of users can be increased
• Identification of male, female, child and adult speakers
REFERENCES
1. R.V. Pawar, P.P. Kajave, and S.N. Mali, "Speaker Identification using Neural Networks", World Academy of Science, Engineering and Technology, 12, 2005.
2. Lawrence Rabiner, "Fundamentals of Speech Recognition", Pearson Education Speech Processing Series, Pearson Education Publication.
3. Brian J. Love, Jennifer Vining, Xuening Sun, "Automatic Speaker Recognition Using Neural Networks", Electrical and Computer Engineering Department, The University of Texas at Austin, Spring 2004.
4. Muzhir Shaban Al-Ani, Thabit Sultan Mohammed and Karim M. Aljebory, "Speaker Identification: A Hybrid Approach Using Neural Networks and Wavelet Transform", Journal of Computer Science 3 (5): 304-309, 2007, ISSN 1549-3636, Science Publications.
Modeling of FBAR Resonator and Simulation using APLAC
Deepak kumar, Navaid Z.Rizvi,Rajesh Mishra
Gautam Buddha University,Greater Noida
dkumar.gbu@gmail.com
Abstract
This paper focuses on the analysis of a Film Bulk Acoustic Wave Resonator (FBAR) comprising a Zinc Oxide (ZnO) piezoelectric thin film sandwiched between two metal electrodes of gold (Au) and located on a silicon substrate with a low stress silicon nitride (Si3N4) supporting membrane for high frequency wireless applications. Film bulk acoustic wave technology is a promising technology for manufacturing miniaturized high performance filters for the gigahertz range.

Keywords: FBAR, Quartz crystal, APLAC.

Quartz Crystal
Crystal quartz is the most important resonator material presently available. It has been used for 50 years, and thus growth, characterization, and fabrication techniques are quite mature. Its low coupling is usually not a disadvantage when it is used for frequency control applications. For reasonable values of transducer area, the resistance falls in the 10-20 ohm range at 5 to 20 MHz. This range is ideal for oscillator circuits. Its Q is somewhat lower than that of ferroelectric materials, but at lower frequencies it is more than adequate, and because the stoichiometry of crystal quartz is simple and its growth technology well established, there are few crystal defects and the attenuation has a frequency-squared dependence. Only when very high frequencies or wide inductive regions are required do designers look beyond quartz. So at higher frequencies, e.g. in the GHz range, quartz cannot be used, and FBAR and SAW devices, which are much smaller in size, are used instead. Quartz also has the disadvantage that it limits the integration of the mechanical structure with the integrated circuit as compared to silicon, and furthermore the cost of quartz wafers is significantly higher than that of silicon [1-7].

FBAR Devices
FBAR stands for Film Bulk Acoustic Resonator. FBAR is a breakthrough resonator technology being developed by Agilent Technologies. The technology can be used to create the essential frequency shaping elements found in modern wireless systems, including filters, duplexers and resonators for oscillators [1-3].

Why FBAR
The rapid growth of wireless mobile telecommunication systems leads to an increase in demand for high frequency oscillators, filters and duplexers capable of operating in the GHz frequency band. Conventionally, quartz crystal and microwave ceramic resonators, transmission lines and SAW devices have been used as high frequency band devices. Although they provide high performance at a reasonable price, they are too large to be integrated in wireless applications. SAW devices have better electrical performance and smaller size, but they have relatively poor temperature stability, high insertion losses and limited power handling.
To cope with these limitations, FBAR devices have been developed, and they can easily replace these devices at higher frequencies for wireless communication applications. A thin film bulk acoustic wave resonator consists basically of a thin piezoelectric layer sandwiched between two electrodes. In such a resonator a mechanical wave is piezoelectrically excited in response to an electric field applied between the electrodes. The propagation direction of this acoustic wave is perpendicular to the surface of the resonator. For a standing wave situation to
prevail, the acoustic energy has to be reflected back at the boundaries of the resonator. This reflectivity can be achieved by two means, either an air interface or an acoustic mirror. Piezoelectric thin films convert electrical energy into mechanical energy and vice versa. A Film Bulk Acoustic Resonator (FBAR) consists of a piezoelectric thin film sandwiched between two metal layers. A resonance condition occurs if the thickness of the piezoelectric thin film (d) is equal to an integer multiple of half the wavelength (λres). The fundamental resonant frequency, Fres = Va/λres, is then inversely proportional to the thickness of the piezoelectric material used and is equal to Va/2d, where Va is the acoustic velocity at the resonant frequency (Fig. 1).

Figure 1
Figure 2
Figure 3

A bulk-micromachined FBAR with TFE (Thickness Field Excitation) uses a z-directed electric field to generate a z-propagating longitudinal or compressive wave [3-8]. In an LFE-FBAR, the applied electric field is in the y-direction, and the shear acoustic wave (excited by the lateral electric field) propagates in the z-direction.

One Dimensional Acoustic-Wave Equation:
The fundamental wave equation related to longitudinal acoustic-wave generation and propagation for the one dimensional case is

    ∂²T/∂z² = (ρ0/c) · ∂²T/∂t²     (1)

where T, S, c and ρ0 are the mechanical stress, the mechanical strain, the stiffness elastic constant and the mass density of the material, respectively.
From Hooke's law,

    T = c·S     (2)

The solution of the wave equation for the stress contains (as common factors) e^(-j(ωt ± kz)), where ω = 2πf is the wave frequency and k is the propagation constant (wave number):

    k = ω·(ρ0/c)^(1/2) = ω/Va     (3)

where Va = (c/ρ0)^(1/2) is the acoustic velocity. The acoustic impedance is

    Z = -T/v     (4)

where v = -Va·T/c is the particle velocity. Hence, the characteristic acoustic impedance Z0 is

    Z0 = c/Va = (ρ0·c)^(1/2) = Va·ρ0     (5)

Three-port equivalent circuit model:
Consider that the two lateral dimensions (in the x and y directions) of the uniform resonator are very large compared with the thickness and the acoustic wavelength. The metallic electrodes are assumed to be very thin, providing no mass loading on the opposite surfaces normal to the z direction.
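Using equations (3) and (5) together with the relation Fres = Va/2d, the basic numbers for a ZnO film of the thickness used later in the paper can be checked with a few lines of Python. The ZnO material constants below are nominal literature values and are assumptions, not figures quoted in the paper.

```python
import math

# Assumed nominal ZnO constants (not from the paper)
c33 = 2.1e11      # stiffened elastic constant, N/m^2
rho = 5680.0      # mass density, kg/m^3
d   = 1.2e-6      # piezoelectric film thickness, m (as used in the simulations)

Va    = math.sqrt(c33 / rho)   # acoustic velocity, Eq. (3)/(14)
Z0    = rho * Va               # characteristic acoustic impedance, Eq. (5)
f_res = Va / (2.0 * d)         # fundamental thickness resonance, Fres = Va / 2d

print(f"Va    = {Va:.0f} m/s")         # roughly 6.1 km/s
print(f"Z0    = {Z0:.2e} kg/(m^2*s)")  # roughly 3.5e7
print(f"f_res = {f_res/1e9:.2f} GHz")  # roughly 2.5 GHz, close to the simulated ~2.6 GHz
```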
The following equivalent circuit models are widely used for FBAR electrical modeling:
1. Mason equivalent circuit model
2. Redwood equivalent circuit model
3. KLM equivalent circuit model
In this paper the Mason three-port equivalent circuit model has been used.

Mason equivalent circuit model
Mason's model is the most widely accepted model for analyzing the vertical structure of piezoelectric materials. It is based on a physical model and uses as its inputs the dielectric constants, mass densities, stiffness coefficients from the piezoelectric stress tensor, and the thicknesses of the physical layers. The model is used for calculating the fundamental frequency of the resonator as well as the effective kt² of the device. The vibration characteristics of the piezoelectric structure can be modeled as a three-port network with one electric input port and two acoustic output ports, owing to the characteristics of the piezoelectric transducer driven by the coupling of electric potential and mechanical stress. The forces (F) and the particle velocities (v) at the boundary surfaces of the resonator are:

    F1 = -A·T(-d/2)     (6)
    F2 = -A·T(d/2)      (7)
    v1 = v(-d/2)        (8)
    v2 = v(d/2)         (9)

The minus (-) sign indicates the relation of axis and direction; v1 and v2 are the particle velocity vectors at the material surfaces, and A, d and T are the area, thickness and internal stress of the resonator, respectively.

    k  = ω·(ρm/cD)^(1/2) = ω/Va     (10)
    Z0 = (ρm·cD)^(1/2) = Va·ρm      (11)
    TF = -Z0·vF                     (12)
    TB = Z0·vB                      (13)
    Va = (cD/ρm)^(1/2)              (14)

where Va is the acoustic wave velocity.

Using the boundary conditions:

    v(z) = [-v2·sin(k(z + d/2)) + v1·sin(k(d/2 - z))] / sin(kd)     (15)

By evaluating the above equations, the Mason model of a piezoelectric transducer (resonator) is obtained. Here C0, called the clamped (zero strain) capacitance or static capacitance of the transducer (resonator), is

    C0 = εS·A/d     (16)

where εS is the clamped permittivity, and Zc, the acoustic impedance of a transducer with area A, is

    Zc = A·Z0 = A·(ρm·cD)^(1/2)     (17)

The resulting matrix equations can be used to represent the Mason model equivalent circuit.

Figure 4: Mason model equivalent circuit

As shown in Fig. 4, in this equivalent circuit the electric port of the transformer represents the conversion of electrical energy to acoustic energy (or vice versa).
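Equations (16) and (17) can likewise be evaluated directly for the resonator geometry used in the simulations; the permittivity and stiffness values below are assumed nominal ZnO constants rather than figures from the paper.

```python
import math

# Geometry as used in the simulations, with assumed nominal ZnO constants
A     = 45e-12     # resonator area, m^2 (45 um^2)
d     = 1.2e-6     # ZnO thickness, m
eps_r = 8.8        # assumed clamped relative permittivity of ZnO
eps0  = 8.854e-12  # vacuum permittivity, F/m
c33   = 2.1e11     # stiffened elastic constant cD, N/m^2
rho   = 5680.0     # mass density, kg/m^3

C0 = eps_r * eps0 * A / d       # Eq. (16): clamped (static) capacitance
Zc = A * math.sqrt(rho * c33)   # Eq. (17): acoustic impedance of the transducer

print(f"C0 = {C0*1e15:.2f} fF")  # a few femtofarads for this small area
print(f"Zc = {Zc:.3e} kg/s")
```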
Why APLAC
With the help of the APLAC circuit simulation and design tool, any RF or analog circuit can be easily simulated with a wide range of analysis methods. Moreover, optimization, tuning and a Monte Carlo statistical feature (for design yield) are available with every analysis method. Through APLAC it is possible to easily simulate miniaturized structures and complex systems. Device models developed for large devices are inapplicable when nano-scale physical phenomena enter into play.

Simulation Results
First a ZnO FBAR structure was simulated in APLAC version 8.1. The FBAR has upper and bottom electrodes of Au and a membrane layer of Si3N4 for support. The resonance frequency was then calculated analytically and compared with the simulated result, which is approximately the same. The values obtained and used in the simulation are given in Table 1.

Table 1: Simulated ZnO FBAR parameters
Area (FBAR): 45 µm²
Thickness of ZnO: 1.2 µm
S21 min: -61 dB
S11 min: -0.3 dB
fs: 2.593 GHz
fp: 2.621 GHz
keff²: 0.026
Q: 1500
FOM: 39

Simulation of ZnO FBAR
Here the one-dimensional Mason model and basic transmission line theory are used to simulate the FBAR, which has ZnO as the piezoelectric material, Au as the top and bottom electrodes, and Si3N4 as the membrane material. The circuit diagram is shown in Fig. 5. For the top and bottom electrodes and the membrane layer the transmission line model is used, while for the piezoelectric layer the one-dimensional Mason model is used. The results of the simulation are shown in Fig. 6 and Fig. 7: Fig. 7 shows S21 (both magnitude and phase) and Fig. 6 shows S11 (both magnitude and phase). If we analyze Fig. 7 we can easily see the resonance at the expected frequency.

Figure 5: Circuit simulated in APLAC
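For a quick sanity check of the simulated series and parallel resonances, a commonly used simplified one-port input-impedance expression can be evaluated numerically. This is only a sketch: it ignores the electrode and membrane loading that the full Mason three-port/transmission-line model in APLAC accounts for, and the ZnO constants and kt² value below are assumptions.

```python
import numpy as np

# Simplified one-port FBAR input impedance (textbook approximation,
# not the full Mason three-port model used in the paper)
eps_r, eps0 = 8.8, 8.854e-12
A, d        = 45e-12, 1.2e-6        # area and ZnO thickness
c33, rho    = 2.1e11, 5680.0        # assumed ZnO constants
kt2         = 0.07                  # assumed ZnO coupling coefficient kt^2

Va  = np.sqrt(c33 / rho)
C0  = eps_r * eps0 * A / d
f   = np.linspace(1.5e9, 3.0e9, 20001)
phi = np.pi * f * d / Va                              # phi = k*d/2
Zin = (1 - kt2 * np.tan(phi) / phi) / (1j * 2 * np.pi * f * C0)

fs = f[np.argmin(np.abs(Zin))]                        # series resonance (|Zin| minimum)
fp = f[np.argmax(np.abs(Zin))]                        # parallel resonance (|Zin| maximum)
keff2 = (np.pi ** 2 / 4.0) * (fp - fs) / fp           # effective coupling estimate
print(f"fs = {fs/1e9:.3f} GHz, fp = {fp/1e9:.3f} GHz, keff^2 = {keff2:.3f}")
```

Because electrode and membrane loading are neglected, the resulting fs, fp and keff² differ somewhat from the values obtained with the full stack in Table 1.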
Figure 6: ZnO FBAR resonator S(1,1), magnitude (dB) and phase, 1.5-3.0 GHz (area 45 µm², d = 1.2 µm)
Figure 7: ZnO FBAR resonator S(2,1), magnitude (dB) and phase, 1.5-3.0 GHz (area 45 µm², d = 1.2 µm)
Figure 8: Smith chart showing S(2,1) and S(1,1) for the ZnO FBAR

The influence of different piezoelectric films and electrode materials on the characteristics of a thin film bulk acoustic resonator (FBAR) was also analyzed. The results confirm that the material properties and thickness of the piezoelectric film play a significant role in determining the performance of the FBAR, and influence characteristics such as the resonance frequency, the bandwidth and the insertion loss. Since the results demonstrate that the characteristics are determined by the thicknesses of each of the layers within the acoustic wave path and by the resonance area, the potential exists to tune the characteristics of the FBAR by specifying appropriate geometric parameters during the FBAR design stage.

Effect of using a different piezoelectric material:
For example, using AlN as the piezoelectric material increases the resonance frequency from around 2.62 GHz to 4.7 GHz for the same area and thickness in both cases. As depicted below, the Q factor and FOM of the ZnO FBAR are higher than those of the AlN FBAR, so it is the better FBAR in terms of performance. The comparisons are shown in Table 2. The results of the simulation are shown in Fig. 9, Fig. 10 and Fig. 11: Fig. 9 shows S21 (both magnitude and phase) and Fig. 10 shows S11 (both magnitude and phase). If we analyze Fig. 9 we can easily see the resonance at the expected frequency.

Table 2: Comparison of ZnO and AlN FBARs
Figure 9: AlN FBAR resonator S21, magnitude (dB) and phase, 3.5-5.0 GHz (area 45 µm², d = 1.2 µm)
Figure 10: AlN FBAR resonator S11, magnitude (dB) and phase, 3.5-5.0 GHz (area 45 µm², d = 1.2 µm)
Figure 11: Smith chart showing S(2,1) and S(1,1) for the AlN FBAR

Conclusion
The results show that the resonant frequency of the FBAR depends upon the particular choice of the piezoelectric material. It was also demonstrated that the FBAR performance is influenced by the physical dimensions of the device, including the thickness of the piezoelectric film, electrode and membrane layer, and by the resonance area size. It is possible to calculate the effective coupling coefficient, Q factor and figure of merit. In this way it is possible to specify suitable parameter values which will optimize the design of the FBAR and which can be used in designing FBAR devices that will operate within a specified frequency range.

References
(1) K.M. Lakin, G.R. Kline and K.T. McCarron, "High-Q microwave acoustic resonators and filters," IEEE Transactions on Microwave Theory and Techniques, vol. 41.
(2) S.V. Krishnaswamy, J. Rosenbaum, S. Horwitz, C. Vale and R.A. Moore, "Film bulk acoustic wave resonator technology," Proceedings of the IEEE Ultrasonics Symposium, Honolulu, HI, USA, 1990.
(3) P.J. Yoon, G.W., "Fabrication of ZnO-based film bulk acoustic resonator devices using W/SiO2 multilayer reflector," Electronics Letters, vol. 36 (16).
(4) K.M. Lakin and J.S. Wang, "UHF composite bulk wave resonator," Ultrasonics Symposium, 1990.
(5) W.P. Mason, Physical Acoustics: Principles and Methods, Vol. 1A, Academic Press, New York.
(6) G.G. Fattinger, J. Kaitila, R. Aigner and W. Nessler, "Single-to-balanced filters for mobile phones using coupled resonator BAW technology," IEEE International Ultrasonics, Ferroelectrics and Frequency Control Symposium, 2004.
(7) K.M. Lakin, "Thin film resonator technologies," IEEE Trans. UFFC, vol. 52, pp. 707-716, May 2005.
(8) F. Constantinescu, M. Nitescu and A.G. Gheorghe, "New circuit models for power BAW resonators," in Proc. ICCSC, Shanghai, China, pp. 176-179, 2008.
Role of Speech Scrambling and Encryption in Secure Voice Communication
Himanshu Gupta
Faculty Member, Amity Institute of Information Technology, Amity University Campus, Sector – 125, Noida (Uttar Pradesh), India. E-mail: himanshu_gupta4@yahoo.co.in

Prof. (Dr.) Vinod Kumar Sharma
Professor & Dean, Faculty of Technology, Gurukula Kangri Vishwavidyalaya, Haridwar, India. E-mail: vks_sun@ymail.com

Abstract— Security of speech is a challenging issue in voice communications today that requires speech scrambling and encryption techniques. With the rapid development of information technology, the demand for secure transmission of voice over wireless communication channels is increasing day by day. The conventional methods of voice communication cannot provide adequate security against intruders: the voice data may be accessed by an unauthorized user for malicious purposes. Therefore, it is necessary to apply effective scrambling and encryption techniques to enhance voice security. Speech scrambling and encryption techniques can provide sufficient security over wireless media. In this research paper, various effective speech scrambling and encryption techniques are proposed, in which the original speech is inverted and encrypted with different strong scrambling and encryption methods. This scrambling and encryption technique enhances the security of voice over an insecure communication channel to a large extent.

Keywords- Speech Scrambling; Speech Encryption; Secure Voice; Communication Channel.

I. INTRODUCTION
Secure voice communication is a process that allows for the secure transmission of voice communications between a sending and a receiving node over a wireless communication channel. This process uses various scrambling and encryption techniques which are capable of inversion and encryption of speech in an effective manner.
When two entities are communicating with each other and they do not want a third party to listen to their communication, they want to pass on their message in such a way that nobody else can understand it. This is known as communicating in a secure manner, or secure communication.
Secure communication includes the means by which people can share information with varying degrees of certainty that third parties cannot know what was said. Other than communication spoken face to face, out of the possibility of listening, it is probably safe to say that no communication is guaranteed secure in this sense, although practical limitations such as legislation, resources, technical issues such as interception, and the sheer volume of communication are limiting factors to surveillance.

II. BACKGROUND
The implementation of voice encryption dates back to World War II, when secure communication was paramount to the US armed forces. During that time, noise was simply added to a voice signal to prevent enemies from listening to the conversations. Noise was added by playing a record of noise in synch with the voice signal, and when the voice signal reached the receiver, the noise signal was subtracted out, leaving the original voice signal. In order to subtract out the noise, the receiver needed to have the exact same noise signal, and the noise records were only made in pairs; one for the transmitter and one for the
In order to subtract out the noise, the receiver needed to have exactly the same noise signal, and the noise records were only made in pairs: one for the transmitter and one for the receiver. Having only two copies of the records made it impossible for the wrong receiver to decrypt the signal. To implement the system, the army contracted Bell Laboratories, which developed a system called SIGSALY. With SIGSALY, ten channels were used to sample the frequency spectrum from 250 Hz to 3 kHz, and two channels were allocated to sample voice pitch and background hiss. In the time of SIGSALY, the transistor had not yet been developed, and the digital sampling was done by circuits using the model 2051 Thyratron vacuum tube. Each SIGSALY terminal used 40 racks of equipment weighing 55 tons and filled a large room. This equipment included radio transmitters and receivers and large phonograph turntables. The voice was keyed to two 16-inch vinyl phonograph records that contained a Frequency Shift Keying (FSK) audio tone. The records were played on large precision turntables in sync with the voice transmission [1].

From the introduction of voice encryption to today, encryption techniques have evolved drastically. Digital technology has effectively replaced old analog methods of voice encryption, and by using complex algorithms, voice encryption has become much more secure and efficient. One relatively modern method used in digital voice encryption is sub-band coding. With sub-band coding, the voice signal is split into multiple frequency bands using multiple bandpass filters that cover specific frequency ranges of interest. The output signals from the bandpass filters are then lowpass translated to reduce the bandwidth, which reduces the sampling rate. The lowpass signals are then quantized and encoded using techniques such as Pulse Code Modulation (PCM). After the encoding stage, the signals are multiplexed and sent out along the communication network. When the signal reaches the receiver, the inverse operations are applied to the signal to get it back to its original state. Motorola developed a voice encryption system called Digital Voice Protection (DVP) as part of their first generation of voice encryption techniques. "DVP uses a self-synchronizing encryption technique known as cipher feedback (CFB). The basic DVP algorithm is capable of 2.36 x 10^21 different 'keys' based on a key length of 32 bits." The extremely high number of possible keys associated with the early DVP algorithm makes the algorithm very robust and gives the user a high level of security. As with any voice encryption system, the encryption key is required to decrypt the signal with a special decryption algorithm [2].

III. OVERVIEW OF THE PROPOSED SPEECH SCRAMBLING TECHNIQUE

Speech inversion is a very common method of speech scrambling, probably because it is the cheapest. Speech inversion works by taking a signal and turning it 'inside out', reflecting the spectrum around a pre-set frequency. Speech inversion can be broken down into three types: base-band inversion (also called 'phase inversion'), variable-band inversion (or 'rolling phase inversion') and split-band inversion. Figures are used below to help clarify what the different inversion systems do.

Fig 1: The non-scrambled sound wave

Base-band inversion inverts the signal around a pre-set frequency that never changes. Because of this, base-band inversion offers essentially no security: since the inverting frequency never changes, running the signal through another inverter set to the same frequency unscrambles it. Descrambling base-band inversion is therefore simple: take the scrambled input and re-invert it around the same inversion point that was used to scramble it.
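As a concrete illustration of the descrambling argument above, the following minimal Python sketch implements fixed-point frequency inversion, assuming the speech is available as a NumPy array of PCM samples; the function name invert_baseband, the 8 kHz sampling rate and the test tone are assumptions of the sketch rather than part of any particular scrambler. Practical base-band scramblers mix the audio with a carrier at a chosen inversion frequency and low-pass filter the product; the sketch uses the simplest special case, in which negating every other sample mirrors each spectral component at f Hz to (fs/2 - f) Hz.

import numpy as np

def invert_baseband(samples: np.ndarray) -> np.ndarray:
    # Multiplying every other sample by -1 shifts the spectrum by fs/2,
    # which mirrors each real component at f Hz to (fs/2 - f) Hz.
    # Applying the function twice restores the original signal exactly.
    n = np.arange(len(samples))
    return samples * ((-1.0) ** n)

if __name__ == "__main__":
    # Demonstration: a 1 kHz tone sampled at 8 kHz is moved to 3 kHz by the
    # scrambler and recovered by running the scrambler a second time.
    fs = 8000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 1000 * t)
    scrambled = invert_baseband(tone)
    restored = invert_baseband(scrambled)
    print("max reconstruction error:", float(np.max(np.abs(restored - tone))))

Because the operation is its own inverse, the same function serves as both scrambler and descrambler, which is precisely why base-band inversion by itself gives so little protection.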


Fig 2: Base Band Inversion of Sound Wave

Variable-band inversion inverts the signal around a constantly varying frequency, which makes unauthorized descrambling possible but considerably more difficult. Variable-band inversion can be identified by the burst of modem noise at the beginning of the transmission (a 1200 bps carrier) and the repeated clicking sounds heard as the inverting frequency changes. Descrambling variable-band inversion would be a chore for the amateur eavesdropper, as the inversion point changes every fraction of a second; professionals, however, would likely have little trouble extracting clear speech.

Split-band inversion is another method for making inversion more secure. Split-band inversion divides the signal into two frequency bands and inverts them (usually at baseband) separately. Some split-band inversion systems provide enhanced security by randomly changing, at given intervals, the frequency at which the signal is split.

Fig 3: Split Base Band Inversion of Sound Wave

IV. OVERVIEW OF THE PROPOSED SPEECH ENCRYPTION TECHNIQUE

Encryption is a much stronger method of protecting speech communications than any form of scrambling. Voice encryption works by digitizing the conversation at the telephone and applying a cryptographic technique to the resulting bit-stream. In order to decrypt the speech, the correct encryption method and key must be used [3]. For speech or voice encryption, any one of the following encryption methods can be used.

(A) Hardware Based Encryption Systems
Hardware encryption systems are voice encryption schemes that use dedicated hardware to encrypt conversations. Hardware encryption devices are useful because they do not need a computer to work (allowing them to be built into things like radios and cellular phones), are usually more secure, and are simpler to use. On the downside, hardware encryption systems are very expensive and can be hard to acquire.

(B) Software Based Encryption Systems
Software encryption systems are exactly what they sound like: software-based encryption. While the inconvenience of having to use a computer is the primary drawback of software voice encryption, most of the available programs use good cryptography and are free (a minimal code sketch of this approach follows the list below).

(C) Digital Voice Protection
Digital Voice Protection (DVP) is a proprietary speech encryption technique used by Motorola for their higher-end secure communications products. DVP is considered to be very secure.

(D) PGPfone
PGPfone is another offering from Pretty Good Privacy Inc., a secure voice program for the PC. The interface is pleasantly intuitive, and there are options for different encoders and decoders (for either cellphone or landline use). PGPfone offers a selection of encryption schemes: a 128-bit CAST key (a DES-like cryptosystem), a 168-bit Triple-DES key (estimated key strength 112 bits) or a 192-bit Blowfish key (estimated key strength unknown).

(E) Nautilus
Nautilus is a free secure communications program. It lacks many of the features of other communications programs, and its interface is best described as user-hostile. Unlike most other voice encryption programs, Nautilus uses a proprietary algorithm with a key negotiated by the Diffie-Hellman key exchange.

(F) Speak Freely
Speak Freely is a versatile, simple voice encryption system. It offers a selection of voice encryption techniques (IDEA or DES), permits conferencing, and contains several other useful functions. Unlike most voice encryption platforms, Speak Freely includes options that allow it to connect to other encrypting and non-encrypting internet telephones.

(G) SEU-8201 Cipher System
The SEU-8201 is a high-security voice ciphering system which is mainly used by authorities, governmental agencies, police, and military or paramilitary organizations. The ciphering algorithm is a new approach, providing the highest security needed by such user groups. From a practical standpoint, it is not susceptible to attack by eavesdroppers or by current crypto-analytical methods [4].

Fig 4: SEU-8201 Voice Encryption System
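To make the software-based approach in (B) concrete (digitize the speech, then apply a cryptographic technique to the resulting bit-stream), the sketch below encrypts PCM frames with AES-256 in CTR mode using the third-party Python package cryptography. This is only an illustration: AES stands in for the DES/IDEA/CAST-family ciphers mentioned above, and the frame size, key handling and function names are assumptions of the sketch, not the design of any of the listed products.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

FRAME_BYTES = 320  # 20 ms of 8 kHz, 16-bit mono PCM (hypothetical frame size)

def encrypt_frame(key: bytes, nonce: bytes, frame: bytes) -> bytes:
    # CTR mode turns AES into a stream cipher, so the ciphertext has the
    # same length as the PCM frame and the channel bit-rate is unchanged.
    return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(frame)

def decrypt_frame(key: bytes, nonce: bytes, frame: bytes) -> bytes:
    # Decryption regenerates the same keystream and XORs it away again.
    return Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor().update(frame)

if __name__ == "__main__":
    key = os.urandom(32)                 # shared secret, e.g. from a key exchange
    nonce = os.urandom(16)               # must never repeat for a given key
    pcm_frame = os.urandom(FRAME_BYTES)  # placeholder for one digitized speech frame
    ciphertext = encrypt_frame(key, nonce, pcm_frame)
    assert decrypt_frame(key, nonce, ciphertext) == pcm_frame

A real system would additionally authenticate each frame and use a fresh nonce per frame; key agreement itself could be handled by a Diffie-Hellman exchange, as Nautilus does.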


V. CONCLUSION

Speech scrambling and encryption is an effective technique for voice security and plays an important role in the field of voice communication. It enhances the security of voice communication through a large number of complex operations that convert the sound wave from its original form into a scrambled waveform, which is very difficult for any unauthorized third party to convert back into the original form. The advantage of speech scrambling and encryption is that it provides better security: even if the transmitted wave is accessed by an intruder, the confidentiality of the original wave can still be maintained. The study of speech scrambling and encryption techniques aims to enhance the potential of upcoming communication technologies and their implications for defense and government users. The implementation of voice scrambling and encryption is a strong and positive move toward defining a standard for secure voice communication. However, as the amount of confidential voice communication carried over insecure wireless channels increases, speech scrambling and encryption must also be kept under review from a security perspective.

VI. REFERENCES

1. "SIGSALY", http://history.sandiego.edu/gen/recording/sigsaly.html
2. Owens, F. J. (1993). Signal Processing of Speech. Houndmills: MacMillan Press. ISBN 0333519221.
3. http://seussbeta.tripod.com/crypt.html#SCRAMBLE
4. http://vhf-encryption.at-communication.com/en/secure/seu_8201.html
