
PROCEEDINGS

ICDER - 2014
INTERNATIONAL CONFERENCE ON DEVELOPMENTS
IN ENGINEERING RESEARCH

Sponsored By
INTERNATIONAL ASSOCIATION OF ENGINEERING &
TECHNOLOGY FOR SKILL DEVELOPMENT

Technical Program
31 August, 2014
Hotel Pavani Residency, Nellore

Organized By
INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL
DEVELOPMENT

www.iaetsd.in

Copyright 2014 by IAETSD


All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, without the prior written
consent of the publisher.

ISBN: 378 - 26 - 138420 - 4

http://www.iaetsd.in

Proceedings preparation, editing and printing are sponsored by


INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY
FOR SKILL DEVELOPMENT COMPANY

About IAETSD:
The International Association of Engineering and Technology for Skill Development (IAETSD) is a professional, non-profit conference-organizing body devoted to promoting social, economic, and technical advancement by conducting international academic conferences in various fields of engineering around the world. IAETSD organizes multidisciplinary conferences for academics and professionals, and was established to strengthen the skill development of students. IAETSD is a meeting place where engineering students can share their views and ideas, improve their technical knowledge, develop their skills, and present and discuss recent trends in advanced technologies, new educational environments and innovative technology learning ideas. The intention of IAETSD is to expand knowledge beyond boundaries by joining hands with students, researchers, academics and industrialists to explore technical knowledge all over the world and to publish proceedings. IAETSD offers learning professionals opportunities to explore problems from many disciplines of engineering and to discover and implement innovative solutions. IAETSD aims to promote upcoming trends in engineering.

About ICDER:
The objective of ICDER is to present the latest research and results of scientists working in all engineering disciplines. The conference provides opportunities for delegates from different areas to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration. We hope that the conference results will constitute a significant contribution to knowledge in these up-to-date scientific fields. The organizing committee of the conference is pleased to invite prospective authors to submit their original manuscripts to ICDER 2014.
All full paper submissions are peer reviewed and evaluated based on originality, technical and/or research content and depth, correctness, relevance to the conference, contributions, and readability. The conference is held every year to make it an ideal platform for people to share views and experiences in current trending technologies in the related areas.

Conference Advisory Committee:

Dr. P Paramasivam, NUS, Singapore


Dr. Ganapathy Kumar, Nanometrics, USA
Mr. Vikram Subramanian, Oracle Public Cloud
Dr. Michal Wozniak, Wroclaw University of Technology
Dr. Saqib Saeed, Bahria University
Mr. Elamurugan Vaiyapuri, tarkaSys, California
Mr. N M Bhaskar, Micron Asia, Singapore
Dr. Mohammed Yeasin, University of Memphis
Dr. Ahmed Zohaa, Brunel University
Kenneth Sundarraj, University of Malaysia
Dr. Heba Ahmed Hassan, Dhofar University
Dr. Mohammed Atiquzzaman, University of Oklahoma
Dr. Sattar Aboud, Middle East University
Dr. S Lakshmi, Oman University

Conference Chairs and Review committee:

Dr. Shanti Swaroop, Professor, IIT Madras
Dr. G Bhuvaneshwari, Professor, IIT Delhi
Dr. Krishna Vasudevan, Professor, IIT Madras
Dr. G. V. Uma, Professor, Anna University
Dr. S Muttan, Professor, Anna University
Dr. R P Kumudini Devi, Professor, Anna University
Dr. M Ramalingam, Director (IRS)
Dr. N K Ambujam, Director (CWR), Anna University
Dr. Bhaskaran, Professor, NIT Trichy
Dr. Pabitra Mohan Khilar, Associate Professor, NIT Rourkela
Dr. V Ramalingam, Professor
Dr. P. Mallikka, Professor, NITTTR, Taramani
Dr. E S M Suresh, Professor, NITTTR, Chennai
Dr. Gomathi Nayagam, Director, CWET, Chennai
Prof. S Karthikeyan, VIT, Vellore
Dr. H C Nagaraj, Principal, NIMET, Bengaluru
Dr. K Sivakumar, Associate Director, CTS
Dr. Tarun Chandroyadulu, Research Associate, NAS

ICDER - 2014
CONTENTS

1. DECENTRALIZED COORDINATED COOPERATIVE CACHE REPLACEMENT ALGORITHMS FOR SOCIAL WIRELESS NETWORKS ... 1
2. REAL TIME IMPLEMENTATION OF RAILWAY TRACK FAULT DETECTING SYSTEM USING IR AND ULTRASONIC TECHNOLOGY ... 11
3. DESIGN, ENGINEERING AND ANALYSIS OF UTILITY SCALE SOLAR PV POWER PLANT ... 20
4. LITERATURE REVIEW ON EFFICIENT DETECTION AND FILTERING OF HIGH DENSITY IMPULSE NOISE - A NOVEL ADAPTIVE WEIGHT ALGORITHM ... 30
5. TIME CONSTRAINED SELF-DESTRUCTING DATA SYSTEM (SeDaS) FOR DATA PRIVACY ... 35
6. ANALYSIS ON PACKET SIZE OPTIMIZATION TECHNIQUES IN WIRELESS SENSOR NETWORKS ... 39
7. COMPRESSED AIR VEHICLE (CAV) ... 44
8. DESIGN OF AN OPTIMISED LOW POWER FULL ADDER USING DOUBLE GATED MOSFET AT 45nm TECHNOLOGY ... 50
9. AN MTCMOS TECHNIQUE FOR OPTIMIZING LOW POWER FLIP-FLOP DESIGNS ... 56
10. SECURITY AND PRIVACY ENHANCEMENT USING JAMMER IN DOWNLINK CELLULAR NETWORKS ... 62
11. LITERATURE REVIEW ON TRAFFIC SIGNAL CONTROL SYSTEM BASED ON WIRELESS TECHNOLOGY ... 67
12. BLUETOOTH BASED SMART SENSOR NETWORKS ... 73
13. IMPLEMENTATION OF SECURE AUDIT PROCESS FOR UNTRUSTED STORAGE SYSTEMS ... 81
14. SECURE DATA DISSEMINATION BASED ON MERKLE HASH TREE FOR WIRELESS SENSOR NETWORKS ... 86
15. LOW POWER PULSE TRIGGERED FLIP-FLOP WITH CONDITIONAL PULSE ENHANCEMENT SCHEME ... 90
16. HIERARCHICAL FUZZY RULE BASED CLASSIFICATION SYSTEMS WITH GENETIC RULE SELECTION TO FILTER UNWANTED MESSAGES ... 96
17. REDUCING SECURITY RISKS IN VIRTUAL NETWORKS BY USING SOFTWARE SWITCHING SOLUTION ... 105
18. EFFECTIVE METHOD FOR SEARCHING SUBSTRINGS IN LARGE DATABASES BY USING QUERY BASED APPROACH ... 112
19. SIGNIFICANCE OF STATOR WINDING INSULATION SYSTEMS OF LOW-VOLTAGE INDUCTION MACHINES AIMING ON TURN INSULATION PROBLEMS: TESTING AND MONITORING ... 120

Decentralized Coordinated Cooperative Cache Replacement Algorithms for Social Wireless Networks

Surabattina Sunanda 1, Abdul Rahaman Shaik 2
1 Computer Science and Engineering, 2 Computer Science and Engineering
1 surabattina.sunanda@gmail.com, 2 rahaman.0501@gmail.com

Abstract: Cooperative caching is a technique used in wireless networks to improve the efficiency of information access by reducing access latency and bandwidth usage. Social Wireless Networks (SWNETs) are formed by mobile devices, such as data-enabled phones and electronic book readers, sharing common interests in electronic content and physically gathering together in public places. Electronic object caching in such SWNETs has been shown to reduce the content provisioning cost, which depends heavily on the service and pricing dependences among various stakeholders, including content providers (CP), network service providers, and end consumers (EC). We present a very low-overhead decentralized algorithm for cooperative caching that provides performance comparable to that of existing centralized algorithms. Unlike existing algorithms that rely on centralized control of cache functions, our algorithm uses hints (i.e., inexact information) to allow clients to perform these functions in a decentralized fashion. This paper shows that a hint-based system performs as well as a more tightly coordinated system while requiring less overhead.

Keywords: Data Caching, Cache Replacement, SWNETs, Cooperative Caching, Content Provisioning, Ad Hoc Networks

1. INTRODUCTION:
Caching is a common technique for improving the performance of distributed file systems [Howard88, Nelson93, Sandberg85]. Client caches filter application I/O requests to avoid network and server traffic, while server caches filter client cache misses to reduce disk accesses. A drawback of this organization is that the server cache must be large enough to filter most client cache misses; otherwise costly disk accesses will dominate system performance. A solution is to add another level to the storage hierarchy, one that allows a client to access blocks cached by other clients. This technique is known as cooperative caching, and it reduces the load on the server by allowing some local client cache misses to be handled by other clients.
The cooperative cache differs from the other levels of the storage hierarchy in that it is distributed across the clients and therefore shares the same physical memory as the local caches of the clients. A local client cache is controlled by the client, and a server cache is controlled by the server, but it is not clear who should control the cooperative cache. For the cooperative cache to be effective, the clients must somehow coordinate their actions.


Wireless mobile communication is one of the fastest growing segments of the communication industry [1]. It has currently supplemented or replaced existing wired networks in many places. The wide range of applications and new technologies [5] has stimulated this enormous growth. The new wireless traffic will support heterogeneous traffic, consisting of voice, video and data. Wireless networking environments can be classified into two different types of architectures: infrastructure based and ad hoc based. The former type is the most commonly deployed one, as it is used in wireless LANs and global wireless networks. An infrastructure-based wireless network uses fixed network access points with which mobile terminals interact for communication, and this requires the mobile terminal to be in the communication range of a base station. The ad hoc based network structure alleviates this problem by enabling mobile terminals to cooperatively form a dynamic network without any pre-existing infrastructure. It is much more convenient for accessing information available in the local area and possibly reaching a WLAN base station, and comes at no cost for users.

Mobile terminals available today have powerful hardware, but battery capacity grows only slowly and all these powerful components reduce battery life. Therefore adequate measures should be taken to save energy. Communication is one of the major sources of energy consumption, so by reducing data traffic, energy can be conserved for a longer time. Data caching has been introduced [10] as a technique to reduce data traffic and access latency. By caching data, data requests can be served from the mobile clients without sending them to the data source each time. It is a major technique used in the web to reduce access latency. On the web, caching is implemented at various points in the network: at the top level the web server uses caching, then comes the proxy server cache, and finally the client uses a cache in the browser.

The major contribution of this paper is to show that a cooperative caching system that relies on local hints to manage the cooperative cache performs as well as a more tightly coordinated, fact-based system. Our motivation is simple: hints are less expensive to maintain than facts, and as long as hints are highly accurate, they will improve system performance. Hence, instead of maintaining and accessing global state, each client in a hint-based system gathers its own information about the cooperative cache. Thus the key challenge in designing a hint-based system is to ensure that the hints are highly accurate with minimal overhead.
In this paper, we describe a cooperative caching system that uses hints instead of facts whenever possible.

2. RELATED WORK:
There is a rich body of existing literature on several aspects of cooperative caching, including object replacement, reducing cooperation overhead, and cooperation performance in traditional wired networks. The Social Wireless Networks explored in this paper, which are often formed using mobile ad hoc network protocols, are different in the caching context due to their additional constraints such as topological instability and limited resources. As a result, most of the available cooperative caching solutions for traditional static networks are not directly applicable to SWNETs. Three caching schemes for MANETs have been presented in [9], [11]. In the first scheme, CacheData, a forwarding node checks the passing-by objects and caches the ones deemed useful according to some predefined criteria. This way, subsequent requests for the cached objects can be satisfied by an intermediate node. A problem with this approach is that storing a large number of popular objects in a large number of intermediate nodes does not scale well. The second approach, CachePath, is different in that the intermediate nodes do not save the objects; instead they only record paths to the closest node where the objects can be found. The idea in CachePath is to reduce the latency and overhead of cache resolution by finding the location of objects. This strategy works poorly in a highly mobile environment, since most of the recorded paths become obsolete very soon. The last approach is HybridCache, in which either CacheData or CachePath is used based on the properties of the objects passing through an intermediate node.
While all three mechanisms offer a reasonable solution, relying only on the nodes in an object's path is not the most efficient approach. Using a limited broadcast-based cache resolution can significantly improve the overall hit rate and the effective capacity overhead of cooperative caching. According to these protocols, the mobile hosts share their cache contents in order to reduce both the number of server requests and the number of access misses. The concept has been extended for tightly coupled groups with similar mobility and data access patterns. This extended version adopts an intelligent Bloom filter-based peer cache signature to minimize the number of flooded messages during cache resolution. A notable limitation of this approach is that it relies on a centralized mobile support center to discover nodes with common mobility patterns and similar data access patterns. Our work, on the contrary, is fully distributed, in which the mobile devices cooperate in a peer-to-peer fashion to minimize the object access cost. In summary, most of the existing work on collaborative caching focuses on maximizing the cache hit rate of objects, without considering its effects on the overall cost, which depends heavily on the content service and pricing models. This paper formulates two object replacement mechanisms to minimize the provisioning cost, instead of just maximizing the hit rate. Also, the validation of our protocol on a real SWNET interaction trace with dynamic partitions, and on a multi-phone Android prototype, is unique compared to the existing literature. From a user selfishness standpoint, Laoutaris et al. investigate its impact on caching and the resulting mistreatment. A mistreated node is a cooperative node that experiences an increase in its access cost due to the selfish behavior of other nodes in the network. Chun et al. study selfishness in a distributed content replication strategy in which each user tries to minimize its individual access cost by replicating a subset of objects locally (up to the storage capacity) and accessing the rest from the nearest possible location. Using a game-theoretic formulation, the authors prove the existence of a pure Nash equilibrium under which the network reaches a stable situation. A similar approach has been used in which the authors model distributed caching as a market sharing game.

3. NETWORK MODEL AND CACHE REPLACEMENT POLICIES:
3.1 NETWORK MODEL
Fig. 1 illustrates an example SWNET within a university campus. End Consumers carrying mobile devices form SWNET partitions, which can be either multi-hop (i.e., MANET), as shown for partitions 1, 3, and 4, or single-hop access point based, as shown for partition 2. A mobile device can download an object (i.e., content) from the CP's server using the CSP's cellular network, or from its local SWNET partition. We consider two types of SWNETs. The first one involves stationary SWNET partitions, meaning that after a partition is formed, it is maintained for long enough that the cooperative object caches can be formed and reach steady state. We also investigate a second type to explore what happens when the stationary assumption is relaxed. To investigate this effect, caching is applied to SWNETs formed using human interaction traces obtained from a set of real SWNET nodes.

Fig. 1. Content access from an SWNET in a university campus.

3.2 Cache Replacement
Caching in a wireless environment has unique constraints such as scarce bandwidth, limited power supply, high mobility and limited cache space. Due to the space limitation, the mobile nodes can store only a subset of the frequently accessed data. The availability of the data in the local cache can significantly improve performance, since it overcomes the constraints of the wireless environment. A good replacement mechanism is needed to decide which items to keep in the cache and which to remove when the cache is full. While it would be possible to pick a random object to replace when the cache is full, system performance will be better if we choose an object that is not heavily used. If a heavily used data item is removed, it will probably have to be brought back quickly, resulting in extra overhead. So a good replacement policy is essential to achieve high hit rates. The extensive research on caching for wired networks can be adapted for the wireless environment with modifications to account for mobile terminal limitations and the dynamics of the wireless channel.

3.3 Cache Replacement Policies in Ad Hoc Networks
Data caching in MANETs is proposed as cooperative caching. In cooperative caching the local cache in each node is shared among the adjacent nodes, and together they form a large unified cache. So in a cooperative caching environment, the mobile hosts can obtain data items not only from the local cache but also from the caches of their neighboring nodes. This aims at maximizing the amount of data that can be served from the cache, so that server delays can be reduced, which in turn decreases the response time for the client. In many applications of MANETs, such as automated highways and factories, smart homes and appliances, and smart classrooms, mobile nodes share common interests, so sharing cache contents between mobile nodes offers significant benefits. A cache replacement algorithm greatly improves the effectiveness of the cache by selecting a suitable subset of data for caching. The available cache replacement mechanisms for ad hoc networks can be categorized into coordinated and uncoordinated, depending on how the replacement decision is made. In an uncoordinated scheme the replacement decision is made by individual nodes: in order to cache incoming data when the cache is full, the replacement algorithm chooses the data items to be removed by making use of local parameters in each node.


3.4 Split Cache Replacement

Fig. 2. Cache partitioning in the Split Cache policy.

To realize the optimal object placement under a homogeneous object request model, we propose the following Split Cache policy, in which the available cache space in each device is divided into a duplicate segment and a unique segment (see Fig. 2). In the first segment, nodes can store the most popular objects without worrying about object duplication, and in the second segment only unique objects are allowed to be stored. The split parameter in Fig. 2, which lies between 0 and 1, indicates the fraction of the cache that is used for storing duplicated objects. With the Split Cache replacement policy, as soon as an object is downloaded from the CP's server, it is categorized as a unique object, as there is only one copy of this object in the network. When a node downloads an object from another SWNET node, that object is categorized as a duplicated object, as there are now at least two copies of that object in the network. For storing a new unique object, the least popular object in the whole cache is selected as a candidate and is replaced with the new object if it is less popular than the new incoming object. For a duplicated object, however, the eviction candidate is selected only from the duplicate segment of the cache. In other words, a unique object is never evicted in order to accommodate a duplicated object.
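The following short Python sketch illustrates the Split Cache eviction rule described above. The object representation (dictionaries carrying a popularity score and a duplicated flag) and counting capacity in objects rather than bytes are assumptions made for the example, and the fixed duplicate-segment fraction of Fig. 2 is not enforced in this simplified version.

# Minimal sketch of the Split Cache eviction rule (illustrative only).
# Assumption: each cached object is a dict with 'id', 'popularity' and a
# 'duplicated' flag, and cache capacity is counted in objects.

def insert_with_split_cache(cache, new_obj, capacity):
    """Insert new_obj into cache, evicting according to the Split Cache policy."""
    if len(cache) < capacity:                 # free space: no eviction needed
        cache.append(new_obj)
        return True

    if new_obj['duplicated']:
        # A duplicated object may only displace another duplicated object,
        # so a unique object is never evicted to accommodate it.
        candidates = [o for o in cache if o['duplicated']]
    else:
        # A unique object competes with the least popular object in the whole cache.
        candidates = cache

    if not candidates:
        return False                          # nothing eligible for eviction
    victim = min(candidates, key=lambda o: o['popularity'])
    if victim['popularity'] < new_obj['popularity']:
        cache.remove(victim)
        cache.append(new_obj)
        return True
    return False                              # incoming object is less popular; drop it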

4. A Hint-Based Algorithm
The previous cooperative caching algorithms rely in part on exact information, or facts, to manage the cache. Although facts allow these algorithms to make optimal decisions, they increase the latency of block accesses and the load on the managers. Our goal in designing a cooperative caching algorithm is to remove the reliance on centralized control of the cooperative cache. Clients should be able to access and replace blocks in the cooperative cache without involving a manager.
Reducing the dependence of clients on managers is achieved through the use of hints, information that only approximates the global state of the system. The decisions made by a hint-based system may not be optimal, but managing hints is less expensive than managing facts. Hints do not need to be consistent throughout the system, eliminating the need for centralized coordination of the information flow. As long as the overhead eliminated by not using facts more than offsets the effect of making mistakes, the gamble of using hints will pay off.
The remainder of this section describes the components of a hint-based algorithm. Section 4.1 describes the hint-based block lookup algorithm. Section 4.2 describes how the replacement policy decides whether or not to forward a block to the cooperative cache. Section 4.3 discusses how the replacement policy chooses the target client for forwarding a block. Section 4.4 discusses the use of the server cache to mask replacement mistakes. Finally, Section 4.5 describes the effect of the cache consistency protocol on the use of hints.


4.1 Block Lookup
When a client suffers a local cache miss, a lookup must be performed on the cooperative cache to determine if and where the block is cached. In the previous algorithms the manager performs this lookup, both increasing the block access time and placing load on the manager and network. An alternative approach is to let the client itself perform the lookup, using its own hints about the locations of blocks within the cooperative cache. These hints allow the client to access the cooperative cache directly, avoiding the need to contact the manager on every local cache miss.
Based on the above, we can identify two principal functions for a hint-based lookup algorithm:
- Hint Maintenance: The hints must be maintained so that they are reasonably accurate; otherwise the overhead of looking for blocks using incorrect hints will be prohibitive.
- Lookup Mechanism: Hints are used to locate a block in the cooperative cache, but the system must be able to eventually locate a copy of the block should the hints prove wrong.
Each of these functions is discussed in detail in the following sections.

4.1.1 Hint Maintenance

To make sure that hints are reasonably accurate, our strategy is to change hints only when necessary. In other words, correct hints are left untouched and incorrect hints are changed only when correct information becomes available.
To keep hints as correct as possible, we introduce the concept of a master copy of a block. The first copy of a block to be cached by any client is called the master copy. The master copy of a block is distinct from the block's other copies because the master copy is obtained from the server. Based on this concept of the master copy, we enumerate two simple rules for hint maintenance:
1. When a client obtains a token for a file from a manager, it is also given a set of hints that contain the probable location of the master copy of each block in the file. The manager obtains the set of hints for the file from the last client to acquire a token for the file, because the last client is likely to have the most accurate hints.
2. When a client forwards the master copy of a block to a second client, both clients update their hints to show that the probable location of the master copy of the block is now the second client.
The hints only contain the probable locations of the master copy; hence, changes to the locations of the other copies of the block are ignored. This simplifies the task of keeping the hints accurate.

4.1.2 Lookup Mechanism

Given hints about the probable location of the master copy of a block, the lookup mechanism must ensure that a block lookup is successful, regardless of whether the hints are right or wrong. Fortunately, as all block writes go through to the server, it always has a valid copy of a block and can satisfy requests for the block should the hints prove false. This simplifies the lookup mechanism, which is outlined below:
1. When a client has a local cache miss for a block, it consults its hint information for the block.
2. If the hint information contains a probable location for the master copy of the block, the client forwards its request to this location. Otherwise, the request is sent to the server.
3. A client that receives a forwarded request for a block consults its hint information for the block and proceeds to Step 2.
The general idea is that each client keeps track of the probable location of the master copy of each block, and uses this information to look up blocks in the cooperative cache.

4.2 Forwarding
When a block is ejected from the local cache of a client, the cooperative cache replacement policy decides whether or not the block should be forwarded to the cooperative cache. As discussed earlier, one of the motivations of the replacement policy is to ensure that only one copy of a block is stored in the cooperative cache. If not, the cooperative cache will contain unnecessary duplicate copies of the same block.
The previous algorithms rely on the manager to determine whether or not a block should be forwarded to the cooperative cache. A block is forwarded if it is the only copy of the block stored in either the local caches or the cooperative cache. Maintaining this invariant is expensive, however, requiring an N-chance client to contact the manager whenever it wishes to discard a block that is not known to be a singlet, and the GMS manager to contact a client whenever a block becomes a singlet.
To avoid these overheads, we propose a forwarding mechanism in which the copy to be forwarded to the cooperative cache is predetermined and does not require communication between the clients and the manager. In particular, only the master copy of a block is forwarded to the cooperative cache, while all other copies are discarded. Since only master copies are forwarded, and each block has only one master copy, there can be at most one copy of a block in the cooperative cache.
A potential drawback of the master copy algorithm is that it has a different forwarding behavior than the previous algorithms. Instead of forwarding the last local copy of a block as in GMS or N-chance, the master copy algorithm forwards the first, or master, copy. In some cases, this may lead to unnecessary forwardings: a block which is deleted before it is down to its last copy should not have been forwarded to the cooperative cache. The existing algorithms avoid this, while the master copy algorithm will potentially forward the block. Fortunately, our measurements show that few of the master copy forwardings are unnecessary, as described in Section 6.3.
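Before turning to target selection, a compact Python sketch may help tie Sections 4.1 and 4.2 together: hint-chasing lookup with a server fallback, and forwarding of master copies only. The Server/Client classes, attribute names, and the hop limit are assumptions introduced for illustration; in particular, the choice of forwarding target is left as a placeholder for the best-guess policy of Section 4.3.

# Illustrative sketch of hint-based lookup with server fallback, plus the
# master-copy-only forwarding rule. All class and attribute names are assumed.

class Server:
    def __init__(self, blocks):
        self.blocks = blocks                          # block_id -> data (always valid)

    def read_block(self, block_id):
        return self.blocks[block_id]

class Client:
    def __init__(self, server):
        self.server = server
        self.peers = []          # other clients participating in the cooperative cache
        self.local_cache = {}    # block_id -> data
        self.hints = {}          # block_id -> client believed to hold the master copy

    def read_block(self, block_id, max_hops=3):
        if block_id in self.local_cache:              # local hit
            return self.local_cache[block_id]
        target, hops = self.hints.get(block_id), 0
        while target is not None and hops < max_hops:
            if block_id in target.local_cache:        # hint was correct
                return target.local_cache[block_id]
            target, hops = target.hints.get(block_id), hops + 1
        return self.server.read_block(block_id)       # hints wrong: server always has a valid copy

    def evict_block(self, block_id, is_master):
        data = self.local_cache.pop(block_id)
        if not is_master or not self.peers:
            return                                    # non-master copies are simply discarded
        peer = self.peers[0]   # placeholder; Section 4.3 picks the peer with the oldest block
        peer.local_cache[block_id] = data
        self.hints[block_id] = peer                   # both clients now point at the new master location
        peer.hints[block_id] = peer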

4.3 Best-Guess Replacement


Once the replacement policy has decided to
forward a block to the cooperative cache, it
must decide the target client of this
forwarding. Forwarding a block to this
target client will replace a block that is
either in the client's local cache or in the
cooperative cache. Note that this replaced
block can be a master copy, providing a
means for removing master copies from the
cooperative cache.


Previous algorithms either choose the client


at random, or rely on information from the
manager to select the target. An alternative
based on hints, however, can provide highly
accurate replacements without requiring a
centralized manager. We refer to this
as best-guess replacement because each
client chooses a target client that it believes
has the system's oldest block. The objective
is to approximate global LRU, without
requiring a centralized manager or excessive
communication between the clients. The
challenge is that the block age information is
distributed among all the clients, making it
expensive to determine the current globally
LRU block.
In best-guess replacement, each client
maintains an oldest block list that contains
what the client believes to be the oldest
block on each client along with its age. This
list is sorted by age. A block is forwarded to
the client that has the oldest block in the
oldest block list.
The high accuracy of best-guess replacement comes from exchanging information about the status of each client. When a block is forwarded from one client to another, both clients exchange the age of their current oldest block, allowing each client to update its oldest block list. The exchange of block age information during replacement allows both active clients (clients that are accessing the cooperative cache) and idle clients (clients that are not) to maintain accurate oldest block lists. Active clients have accurate lists because they frequently forward blocks. Idle clients will be the targets of the forwardings, keeping their lists up-to-date as well. Active clients will also tend to have young blocks, preventing other clients from forwarding blocks to them. In contrast, idle clients will tend to accumulate old blocks and therefore be the target of most forwarding requests.
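A minimal Python sketch of this exchange, assuming block ages are tracked as timestamps and that each client already knows at least one peer, is shown below; the class and method names are illustrative rather than taken from the paper.

import time

class BGClient:
    """Illustrative client for best-guess replacement (names are assumptions)."""
    def __init__(self):
        self.local_cache = {}          # block_id -> (data, timestamp of last access)
        self.oldest_block_list = {}    # peer -> believed timestamp of that peer's oldest block

    def oldest_timestamp(self):
        return min((ts for _, ts in self.local_cache.values()), default=time.time())

    def choose_target(self):
        # Forward to the peer believed to hold the oldest (smallest timestamp) block.
        return min(self.oldest_block_list, key=self.oldest_block_list.get)

    def forward(self, block_id, data, ts):
        peer = self.choose_target()
        peer.local_cache[block_id] = (data, ts)
        # Both clients exchange the age of their current oldest block,
        # keeping both oldest block lists approximately accurate.
        self.oldest_block_list[peer] = peer.oldest_timestamp()
        peer.oldest_block_list[self] = self.oldest_timestamp()

In this sketch the two assignments at the end of forward() are what keep both active and idle clients' beliefs roughly current, as described above.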

Changes in the behavior of a client may


cause the oldest block lists to become
temporarily inaccurate. An active client that
becomes idle will initially not be forwarded
blocks, but its oldest block will age relative
to the other blocks in the system. Eventually
this block will be the oldest in the oldest
block lists on other clients and be used for
replacement. On the other hand, an idle
client that becomes active will initially have
an up-to-date list because of the blocks it
was forwarded while idle. This allows it to
accurately forward blocks. Other clients may
erroneously forward blocks to the newly active client, but once they do, their updated
oldest block lists will prevent them from
making the same mistake twice.
Although trace-driven simulation has shown
this simple algorithm to work well, there are
several potential problems, including the
effect of replacing a block that is not the
globally LRU block and also the problem of
overloading a client with simultaneous
replacements.
First, since global state information is not
maintained, it is possible for a client to
replace a block that is not the globally LRU
block. However, if the replaced block
is close to the globally LRU block, the
performance impact of not choosing the
globally LRU block is minimal. In addition,
the next section discusses a mechanism for
masking any deviations from the globally
LRU block.
Second, if several clients believe that the
same client has the oldest block, they will all
forward their blocks to that client,
potentially overloading it. Fortunately, it can be shown that it is highly unlikely that the clients using the cooperative cache would forward their blocks to the same target. This is because clients that do forward their blocks to the same target will receive different ages for the oldest block on the target, since each forwarded block replaces a different oldest block. Over time, the clients' oldest block lists will contain different block age information even if they start out identical, reducing the probability of always choosing the same forwarding targets.
4.4 Discard Cache
One drawback of best-guess replacement is
that erroneous replacements will occur. A
block may be forwarded to a client that does
not have the oldest block; indeed, a block
may be forwarded to a client whose oldest
block is actually younger than the forwarded
block.
To offset these mistakes we introduce the
notion of a discard cache, one that is used to
hold possible replacement mistakes and thus
increase the overall cache hit rate of the
system. The simple algorithm used to
determine whether a block is mistakenly
replaced and should be sent to the discard
cache is shown in Table 1. As is evident,
non-master copies are always discarded
because only master copies are accessed in
the cooperative cache.
Replacements are considered to be in error
when the target client of a replacement
decides that the block is too young to be
replaced. A client chooses to replace a block
on a particular target client because it
believes that client contains the oldest block.
The target client considers the replacement
to be in error if it does not agree with this
assessment. The target determines this by
comparing the replaced block's age with the
ages of the blocks on its oldest block list. If
the block is younger than any of the blocks
on the list, then the replacement is deemed
an error and the block is forwarded to the
discard cache. Otherwise, the block is
discarded.

The blocks in the discard cache are replaced


in global LRU order. Thus the discard cache
serves as a buffer to hold potential
replacement mistakes. This extends the
lifetimes of the blocks and reduces the
number of erroneous replacements that
result in an expensive disk access.
Type of block         Action
Non-master copy       Discard
Old master copy       Discard
Young master copy     Send to discard cache

Table 1: Discard Cache Policy. This table lists how the hint-based replacement policy decides which blocks to send to the discard cache. A master copy is old if it is older than all blocks in the oldest block list; otherwise it is considered young. The oldest block list is the per-client list that contains what the client believes to be the oldest block on each client, along with its age.
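The decision in Table 1 can be written down directly. In the Python sketch below, the block representation and the convention that larger ages mean older blocks are assumptions for illustration.

# Minimal sketch of the Table 1 decision made by the target of a replacement.

def handle_replaced_block(block_is_master, block_age, oldest_block_ages):
    """Return what the target client does with a block forwarded to it for replacement."""
    if not block_is_master:
        return "discard"                      # only master copies live in the cooperative cache
    if all(block_age >= age for age in oldest_block_ages):
        return "discard"                      # old master copy: the replacement was reasonable
    return "send to discard cache"            # young master copy: likely a replacement mistake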
4.5 Cache Consistency
The use of hints for block lookup raises the issue of maintaining cache consistency. One solution is to use block-based consistency, but this would require contacting the manager on every local cache miss to locate an up-to-date copy, making it pointless to use hints for block lookup or replacement. For this reason, we propose the use of a file-based consistency mechanism. Clients must acquire a token from the manager prior to accessing a file. The manager controls the file tokens, revoking them as necessary to ensure consistency. The token includes version numbers for all of the file's blocks, allowing copies of the blocks to be validated individually. Once a client has the file's token it may access the file's blocks without involving the manager, enabling the use of hints to locate and replace blocks in the cooperative cache.
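A small Python sketch of the file-token idea, with per-block version numbers used to validate cached copies; the Token structure and helper function are assumptions introduced for illustration, not the paper's interface.

# Minimal sketch of file-based consistency with per-block version numbers.

from dataclasses import dataclass, field

@dataclass
class Token:
    file_id: str
    block_versions: dict = field(default_factory=dict)   # block_id -> version number

def cached_copy_is_valid(token, block_id, cached_version):
    """A cached or hint-located block copy is usable only if its version matches the token's."""
    return token.block_versions.get(block_id) == cached_version

# Usage: once a client holds the file's token, it validates each copy it finds with
# cached_copy_is_valid() and falls back to the server (which always has a valid copy)
# when validation fails.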


6. Conclusion
Cooperative caching is a technique that
allows clients to access blocks stored in the
memory of other clients. This enables some
of the local cache misses to be handled by
other clients, offloading the server and
improving the performance of the system.
However, cooperative caching requires
some level of coordination between the
clients to maximize the overall system
performance. Previous cooperative caching
algorithms achieved this coordination by
maintaining global information about the
system state. This paper shows that allowing
clients to make local decisions based on
hints performs as well as the previous
algorithms, while requiring less overhead.
The hint-based algorithm's block access times are as good as those of the previous and ideal algorithms.

7. References
[1] C. Aggarwal, J. L. Wolf, and P. S. Yu, "Caching on the World Wide Web," IEEE Trans. Knowledge and Data Eng., vol. 11, no. 1, pp. 94-107, Jan./Feb. 1999.
[2] M. K. Denko and J. Tian, "Cross-Layer Design for Cooperative Caching in Mobile Ad Hoc Networks," Proc. IEEE Consumer Communications and Networking Conf., 2008.
[3] L. Yin and G. Cao, "Supporting Cooperative Caching in Ad Hoc Networks," IEEE Transactions on Mobile Computing, 5(1):77-89, 2006.
[4] N. Chand, R. C. Joshi, and M. Misra, "Efficient Cooperative Caching in Ad Hoc Networks," Communication System Software and Middleware, 2006.
[5] S. Lim, W. C. Lee, G. Cao, and C. R. Das, "A Novel Caching Scheme for Internet Based Mobile Ad Hoc Networks," Proc. 12th Int. Conf. Computer Comm. Networks (ICCCN 2003), pp. 38-43, Oct. 2003.
[6] N. Chand, R. C. Joshi, and M. Misra, "Cooperative Caching Strategy in Mobile Ad Hoc Networks Based on Clusters," International Journal of Wireless Personal Communications, special issue on Cooperation in Wireless Networks, vol. 43, issue 1, pp. 41-63, Oct. 2007.
[7] W. Li, E. Chan, and D. Chen, "Energy-Efficient Cache Replacement Policies for Cooperative Caching in Mobile Ad Hoc Networks," Proc. IEEE WCNC, pp. 3349-3354, 2007.
[8] B. Zheng, J. Xu, and D. Lee, "Cache Invalidation and Replacement Strategies for Location-Dependent Data in Mobile Environments," IEEE Transactions on Computers, 51(10):1141-1153, Oct. 2002.
[9] F. Mary Magdalene Jane, Y. Nouh, and R. Nadarajan, "Network Distance Based Cache Replacement Policy for Location-Dependent Data in Mobile Environment," Proc. Ninth International Conference on Mobile Data Management Workshops, IEEE Computer Society, Washington, DC, USA, 2008.
[10] A. Kumar, A. K. Sarje, and M. Misra, "Prioritised Predicted Region Based Cache Replacement Policy for Location-Dependent Data in Mobile Environment," Int. J. Ad Hoc and Ubiquitous Computing, vol. 5, no. 1, pp. 56-67, 2010.
[11] B. Zheng, J. Xu, and D. L. Lee, "Cache Invalidation and Replacement Strategies for Location-Dependent Data in Mobile Environments," IEEE Trans. on Computers, 51(10), 2002.
[12] Q. Ren and M. Dunham, "Using Semantic Caching to Manage Location-Dependent Data in Mobile Computing," Proc. ACM/IEEE MobiCom, pp. 210-221, 2000.


REAL TIME IMPLEMENTATION OF RAILWAY TRACK FAULT DETECTING SYSTEM USING IR AND ULTRASONIC TECHNOLOGY

GITABASHYAN R *1, SHANMUGAM P *2
1 Final Year, B.E., Electronics and Instrumentation Engineering
2 Pre-final Year, B.E., Electrical and Electronics Engineering
* National Engineering College, India
1 rgbashyan@gmail.com, 2 shanmugameeenec@gmail.com

Abstract- The railway network is one of the world's largest transportation networks, aiming to provide continual access to all people. In the modern trend, the quality of service aims for secured travel. As the rail network faces adverse weather throughout the year, the rail material tends to lose its strength. Hence continual maintenance is required to ensure the proper functioning of trains. This maintenance is done manually by technical and labour teams and involves testing, analysis and report submission. The intention of this paper is to provide cost-effective and regular testing of the rail track by an autonomous robot designed as an embedded system. This model of unmanned Non-Destructive Testing robot uses infrared and ultrasonic sensing elements. With the implementation of this system the whole railway network will provide a secure journey to the people and save lives and property.

Keywords- IR Sensor, Ultrasonic Sensor, Hall Effect Sensor, Microcontroller, GSM, GPS.

I. INTRODUCTION

In India, the tracks run for more than 10,000 km. Due to frequent climatic changes, the tracks lose their stability, so cracks are formed in the tracks. Tracks expand during summer, contract during winter, and are prone to corrosion in the rainy season. The cracks are then further expanded, producing a large gap between the tracks, and also by the movement of trains above them. The main objective of this paper is to check the track and to detect the cracks present in it. People are working scientifically to overcome this problem.
This paper works on the testing of cracks using an IR sensor and an ultrasonic sensor; the preprogrammed microcontroller measures the intensity of the defect and sends the signal to the railway control room via a GSM module. The robotic vehicle is equipped with a GPS facility so that when a high-intensity crack is detected, the maintenance team can reach the exact location and carry out all further processes, mainly including determining the exact fault dimension, depth and nature. The system provides two levels of signal: a threshold signal (dangerous level) and a typical fault signal (acceptable level). The railway monitors the process continuously and takes all necessary action.

II. NON-DESTRUCTIVE TESTING ELEMENTS

The interconnection of the elements is the challenging part, because the output of the IR receiver is never linearly connected to the microcontroller input, for safety reasons.


So, we prefer a capacitor and resistor combination for interfacing with the IR transmitter and receiver. In addition, the microcontroller controls the motor, so we use a motor controller; an optocoupler and motor circuit can also be used to control the DC motor. The optical incremental encoder is used to count the number of revolutions, where one revolution is equal to one bit. In this way, we can calculate the distance moved by the robot vehicle along the track.
By replacing the optical incremental encoder with an optoelectronic instrument, we get long-range distance measurement in a form compatible with the microcontroller. An alarm is coupled to it for alerting the railway staff. The LCD module is interconnected to the system to display the distances and test level.

III. BLOCK DIAGRAM

[Block diagram: the IR transmitter and receiver, ultrasonic sensor and optoelectronic displacement instrument feed the microcontroller; the microcontroller drives the LCD module, alarm, motor driver and motor on the robot platform (powered by a battery and voltage regulator) and reports through a GSM module, linked via an RS-232 cable, to the railway control room.]


IV. BLOCK EXPLANATION AND OPERATION

The inputs are the IR transmitter and receiver, the ultrasonic sensor and the optoelectronic displacement module. The outputs are the motor driver, LCD module, GSM module and DC motor. The robot is placed on the rail track and the testing process starts automatically under the control of the PIC controller. The IR and ultrasonic sensors work simultaneously, testing for any crack or damage in the track. Their signals are converted to digital form and fed to the PIC; the controller is preprogrammed with threshold data, and when the sensor outputs are above the threshold value (dangerous crack) this information is immediately sent to the Railway Control Room with an alarm warning and details of the distance from the station. The vehicle location is tracked by the GPS system attached to it. If the test value is below the threshold value, the information is still sent to the control room and the vehicle moves on to the next step. The distance from the station is measured by a long-range optoelectronic displacement instrument. Both the test result and the distance are sent via the GSM module to the control room. The robot vehicle proceeds to test the whole track step by step using the DC motor driven by PIC commands.

Microcontrollers- Microcontrollers are widely used in embedded systems products. A microcontroller is a programmable device that has a CPU in addition to a fixed amount of RAM, ROM, I/O ports and a timer, all embedded on a single chip. The fixed amount of on-chip ROM and RAM and the fixed number of I/O ports make microcontrollers ideal for many applications in which cost and space are critical.

PIC 16F877A microcontroller- The PIC 16F877A is an 8-bit microcontroller with 8K x 14-bit flash program memory, 368 bytes of RAM and many other peripherals such as an ADC, USART, timers, capture/compare/pulse-width modulation modules, and analog comparators. It is based on the reduced instruction set computer (RISC) architecture.

The microcontroller processes the sensor output to compute the fault level. The internal ADC of the microcontroller is used to convert the analog output of the sensor into its digital equivalent value. The internal ADC of the microcontroller has eight channels of analog input and gives a 10-bit digital output.

V. SENSOR

A sensor is a converter that measures a physical quantity and converts it into a signal which can be read by an observer or by an electronic instrument. The IR transmitter and receiver form a sensor used to detect the gap between the tracks. IR was originally used for operating devices wirelessly over a short line-of-sight distance. Commonly, remote controls are consumer IR devices used to issue commands from a distance to televisions or other consumer electronics such as stereo systems, DVD players and dimmers. Remote controls for these devices are usually small wireless handheld objects with an array of buttons for adjusting various settings such as television channel, track number, and volume. In fact, for the majority of modern devices with this kind of control, the remote control contains all the function controls while the controlled device itself has only a handful of essential primary controls. Most of these remote controls communicate with their respective devices via infrared signals and a few via radio signals. Earlier remote controls, in 1973, used ultrasonic tones. The remote control code, and thus the required remote control device, is usually specific to a product line, but there are universal remotes, which emulate the remote controls made for most major brand devices. The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye.
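As a rough illustration of the Section IV flow (ADC reading, threshold comparison, GSM/GPS alert), here is a minimal Python sketch. The 10-bit scale comes from the text above, while the specific threshold values, function names and message format are assumptions made for the example only.

# Illustrative sketch of the fault-classification step: a 10-bit ADC sample from
# the IR/ultrasonic sensing stage is compared against preprogrammed thresholds.
# ADC_DANGEROUS and ADC_ACCEPTABLE are assumed example values, not from the paper.

ADC_MAX = 1023          # 10-bit internal ADC of the PIC16F877A
ADC_DANGEROUS = 700     # assumed threshold for a dangerous crack
ADC_ACCEPTABLE = 300    # assumed threshold for a typical (acceptable) fault

def classify_sample(adc_value):
    if adc_value >= ADC_DANGEROUS:
        return "dangerous"
    if adc_value >= ADC_ACCEPTABLE:
        return "acceptable fault"
    return "no fault"

def report(adc_value, distance_m, gps_fix):
    """Compose the message that would be sent to the control room via GSM."""
    level = classify_sample(adc_value)
    alarm = level == "dangerous"               # alarm sounds only for dangerous cracks
    return {"level": level, "alarm": alarm,
            "distance_from_station_m": distance_m, "gps": gps_fix}

print(report(adc_value=812, distance_m=1540, gps_fix=(13.08, 80.27)))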


The transmitted wave reflects back to the receiver when there is a change in the propagating medium. Hence, when a crack or hole is present in the material, the echo pulse is reflected back to the receiver, generating the fault detection signal. Non-contact ultrasonic transducers use a high air-impedance matching design and an output signal conditioning circuit, which is easily adopted in this system.
Ultrasonic transducers [1] used in the time domain measure the time of flight and the velocity of longitudinal, shear, and surface waves. Time domain transducers measure density and thickness, detect and locate defects, and measure elastic and mechanical properties of materials. These transducers are also used for interface and dimensional analysis, proximity detection, remote sensing, and robotics.

VI. MOTOR DRIVER

These devices consist of two independent voltage comparators that are designed to operate from a single power supply over a wide range of voltages. Operation from dual supplies is also possible as long as the difference between the two supplies is 2 V to 36 V, and VCC is at least 1.5 V more positive than the input common-mode voltage. Current drain is independent of the supply voltage. The outputs can be connected to other open-collector outputs to achieve wired-AND relationships. The LM193 is characterized for operation from -55°C to 125°C. The LM293 and LM293A are characterized for operation from -25°C to 85°C. The LM393 and LM393A are characterized for operation from 0°C to 70°C. The LM2903 is characterized for operation from -40°C to 125°C. The LM2903Q is tested from -40°C to 125°C and is manufactured to demanding automotive requirements.
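Since the ultrasonic measurement is based on the echo's time of flight, a short sketch of the standard pulse-echo distance calculation may help; the speed-of-sound value and the sample echo time are assumptions for illustration, not figures from the paper.

# Pulse-echo time-of-flight: the round-trip time of the reflected ultrasonic
# pulse gives the distance to the reflecting discontinuity (crack, hole, gap).

SPEED_OF_SOUND_AIR = 343.0   # m/s at roughly 20 degrees C (assumed operating condition)

def echo_distance_m(round_trip_time_s, wave_speed=SPEED_OF_SOUND_AIR):
    """Distance to the reflector: the pulse travels out and back, hence the division by 2."""
    return wave_speed * round_trip_time_s / 2.0

# Example: an echo returning after 2.9 ms corresponds to a reflector about 0.5 m away.
print(round(echo_distance_m(2.9e-3), 3))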


VII. ADVANTAGES
1. Easy to construct
2. More useful in remote areas
3. Easy and automatic operation
4. More economical
5. Avoids accidents
6. Can be used frequently

VIII. FURTHER APPLICATIONS
1. Military applications
2. Landmine detection
3. Firing situations
4. Flight runways
5. Boiler tubes
6. Refinery pipelines

IX. CURRENT RESEARCH
Research on conventional methods of railway crack detection, including mechanical systems and eddy-current based approaches, shows that they are expensive in nature, which does not warrant their use in the current scenario.

X. CONCLUSION
The implementation of this automatic testing technology in the railways is easy and efficient in flaw detection. Hence, by using this system the government can obtain benefits such as easy, automatic and cost-effective testing of flaws in the rail track. We can address one of the serious concerns of today, the rise in the number of accidental deaths, and save more lives and property.


DESIGN, ENGINEERING AND ANALYSIS OF UTILITY SCALE SOLAR PV POWER PLANT

M. Srinu 1, S. Khadarvali 2
1 Department of Electrical and Electronic Engineering, Madanapalle Institute of Technology and Science, Madanapalle
2 Department of Electrical and Electronic Engineering, Madanapalle Institute of Technology and Science, Madanapalle
Email: srinu.happytohelp@gmail.com, khadar.vl@gmail.com

ABSTRACT- The objective of this paper is to study the design, engineering and analysis of a 25 MW solar photovoltaic (PV) power plant. Standard procedures for solar PV power plants are studied. A PVsyst software model is used to study the performance of the solar PV power plant with state-of-the-art components, including solar modules and inverters. The performance of the proposed 25 MW solar PV power plant is modeled using MATLAB/SIMULINK tools and the results are analyzed. The system performance is presented.
Key words: photovoltaic, utility scale

I. INTRODUCTION

In recent times, new energy sources have been planned and developed due to the need for them and the steady increase in the cost of fossil fuels. On the other hand, fossil fuels have an enormous negative impact on the environment. In this context, the new energy sources are basically non-conventional energies [1]. It is expected that electrical energy generation from non-conventional energy sources will increase from 19% in 2010 to 32% in 2030, leading to a corresponding decline in CO2 emissions [2]. Solar PV systems have demonstrated that they can supply power from extremely small electronic devices up to utility-scale PV power plants, and the current power system is increasingly taking advantage of the solar power systems entering the marketplace. Solar PV power system installations around the world show an almost exponential increase. Utility-scale PV plants are typically owned and operated by a third party, which sells the electricity to a market or load-serving entity through a Power Purchase Agreement (PPA). Utility-scale systems can reach tens of megawatts of power output under optimum conditions of solar irradiation [3]. These systems are usually ground mounted and span a large area for power harvesting [16]. The performance of a PV system is generally evaluated under the standard test condition (STC), where an average solar spectrum at AM1.5 is used, the irradiance is normalized to 1000 W/m2, and the cell temperature is defined as 25°C [10], [11]. On the other hand, under actual working conditions, with changing irradiance as well as major temperature changes on the ground, most commercial modules do not automatically perform as in the conditions given by the manufacturers [11].

II. SYSTEM DESCRIPTION

Several components are needed to construct a grid-coupled PV system to perform the power generation and conversion functions shown in Fig. 1 [6].

Fig. 1. Grid-connected PV power plant topology.

A PV array is used to convert the light from the sun into DC current and voltage [3]. A three-phase inverter is then attached to perform the power conversion of the array output power into AC power suitable for injection into the grid [16], [15]. A harmonics filter is added after the inverter to diminish the harmonics in the output current which result from the power conversion process [17]. An interfacing transformer is connected after the filter to step up the output AC voltage of the inverter to match the grid voltage level.

III. SYSTEM DESIGN

This section highlights several of the key issues that need to be considered when designing a PV plant. The electrical design of a PV plant can be split into the DC and AC systems [8]. When sizing the DC component of the plant, the maximum voltage and current of the individual strings and PV array(s) should be calculated using the maximum output of the individual modules. For mono-crystalline and multi-crystalline silicon modules, all DC components should be rated as follows to allow for thermal and voltage limits:

minimum voltage rating = Voc(STC) x 1.15      (1)
minimum current rating = Isc(STC) x 1.25      (2)

For non-crystalline silicon modules, DC component ratings should be calculated from the manufacturer's data, taking into account the temperature and irradiance coefficients.
A. Solar PV module selection and sizing

Polycrystalline silicon solar PV modules are nowadays becoming the best choice for most residential and commercial applications [23]. Recent improvements in the technology of multi-crystalline or polycrystalline solar modules have given them better efficiency, size and heat-tolerance levels than their mono-crystalline counterparts. A central power generation system with a set of solar PV arrays has been considered for the 25 MW solar PV plant. For the proposed system the power output required from the solar PV arrays would be approximately 25 MW peak. In this paper CANADIAN SOLAR CS6P-250P solar modules have been selected and the electrical specifications for the module are given below in table 1 [23]. Using the PVsyst software the sizing of the 25 MW PV plant can be done and the results are shown in fig.2; from fig.2 the number of series-connected modules per string is 20 and the total number of strings per inverter is 100. Here we require 50 inverters, each of capacity 500 kW, so the total number of modules for designing the 25 MW PV plant is 100,000.
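As a quick cross-check of the quoted figures, the counts above follow from simple arithmetic on the module and inverter ratings. The short Python sketch below reproduces that arithmetic; the 20-modules-per-string and 100-strings-per-inverter values are taken as given from the PVsyst result rather than derived here.

# Cross-check of the 25 MW array sizing quoted in the text.
plant_peak_w         = 25e6    # target plant size at STC (W)
module_peak_w        = 250.0   # CS6P-250P rated power (W)
inverter_rated_w     = 500e3   # SMA 500 kW inverter (W)
modules_per_string   = 20      # from the PVsyst sizing result
strings_per_inverter = 100     # from the PVsyst sizing result

total_modules   = plant_peak_w / module_peak_w      # 100000 modules
total_inverters = plant_peak_w / inverter_rated_w   # 50 inverters
dc_power_per_inverter = modules_per_string * strings_per_inverter * module_peak_w  # 500 kW

print(total_modules, total_inverters, dc_power_per_inverter)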
Table 1 Parameters of a CS6P-250P solar module under standard test condition (STC) [23].

Maximum Power (Pmax)                     250 W
Rated power @ PTC (W)                    227.6 W
Module efficiency (%)                    15.54%
Power tolerance                          +2%
Maximum Power Voltage (Vmpp)             30.1 V
Maximum Power Current (Impp)             8.30 A
Open Circuit Voltage (Voc)               37.2 V
Short Circuit Current (Isc)              8.87 A
Voltage/Temperature coefficient (Kv)     -0.0034 V/K
Current/Temperature coefficient (Ki)     0.00065 A/K
Series Connected cells (C)               60 x 1

B. Inverter selection and sizing

Grid-connected inverters are necessary for DC-AC conversion. To avoid power distortion, the currents generated by these inverters are required to have low harmonics and high power factor. PV system inverters can be configured in four general ways: central inverter, string inverter, multi-string inverter and AC module inverter configurations [14]. The central inverter has enough voltage on its DC side, i.e. from 150 V to 1000 V, so there is no need to use an intermediate DC-DC converter to boost the voltage up to a reasonable level.

The central inverter has the advantage of high inverter efficiency at a low cost per watt. As efficiency is one of the major concerns in a PV system, a central-inverter-based PV system is the better economical choice [14]. Therefore it is the first choice for medium- and large-scale PV systems. In this paper the SMA SOLAR TECHNOLOGY 500 kW solar inverter has been selected and the electrical specifications for the inverter are given below in table 2.
Table 2 Electrical specifications of the SMA SOLAR TECHNOLOGY 500 kW solar inverter.

Maximum PV power (Pdc) kW                560
Maximum open circuit voltage (Vdc)       1000
MPPT range (Vdc)                         430-850
Maximum DC input current (Idc)           1250
Maximum output power (Pac) kW            500
Nominal output voltage (Vac)             243-310
AC output wiring                         3 wire w/neutral
Maximum output current (Iac)             1176
Maximum efficiency (%)                   98.6
CEC-weighted efficiency (%)              98.4

It is not possible to formulate an optimal inverter


sizing strategy that applies in all cases. While the rule of
thumb has been to use an inverter-to-array power ratio
less than unity this is not always the best design approach.
Most plants will have an inverter sizing range within the
limits defined by
0.8 < Power Ratio < 1.2
Where

Power Ratio = P_inverter DC rated / P_PV Peak                  (3)

P_inverter DC rated = (P_inverter AC rated / η) x 100%         (4)

with η being the inverter conversion efficiency in percent.
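A minimal sketch of the power-ratio check in (3) and (4) for one 500 kW inverter block is given below. Using the maximum efficiency from Table 2 as the conversion efficiency in (4) is an illustrative assumption.

# Power-ratio check of equations (3) and (4) for one 500 kW inverter block.
p_inverter_ac_rated = 500e3      # W, from Table 2
efficiency = 0.986               # maximum efficiency from Table 2 (assumed to be the conversion efficiency in (4))
p_pv_peak = 20 * 100 * 250.0     # W, DC power of the array feeding one inverter

p_inverter_dc_rated = p_inverter_ac_rated / efficiency   # equation (4)
power_ratio = p_inverter_dc_rated / p_pv_peak            # equation (3)

assert 0.8 < power_ratio < 1.2, "inverter-to-array ratio outside the recommended band"
print(round(power_ratio, 3))     # about 1.014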
Inverters can control reactive power by controlling the phase angle of the current injection. The array-to-inverter matching can be done in the following way.

Maximum number of modules per string

The maximum input voltage for an inverter is a hard design limit. Exceeding the maximum inverter operating voltage can result in catastrophic failure of the


inverter and in some cases also result in National Energy Commission (NEC) violations [23]. Therefore, when designing a PV array the maximum open-circuit voltage plays a vital role. The number of modules in series determines the array open-circuit voltage (Voc). The maximum number of modules connected in series for an inverter can be calculated as

N_max <= (maximum DC input voltage for the inverter) / (maximum module voltage, Voc_max)       (5)

Voc_max = Voc + (temperature coefficient of Voc) x (temperature difference)                    (6)

Voc_max = Voc + (T_min - T_STC) x (temperature coefficient of Voc)                             (7)

Where
N_max   = Maximum number of modules in series.
Voc     = Open-circuit voltage of the module.
Voc_max = Maximum module voltage.
T_min   = Minimum temperature for the site.
T_STC   = Temperature at Standard Test Conditions.
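The sketch below applies (5) and (7) to the CS6P-250P data of Table 1; the record-low site temperature of -10 °C is an illustrative assumption, and the temperature coefficient is used exactly as tabulated (in V/K).

# Maximum series modules per string, equations (5) and (7).
voc_stc = 37.2          # V, open-circuit voltage at STC (Table 1)
kv      = -0.0034       # V/K, Voc temperature coefficient as listed in Table 1
t_stc   = 25.0          # deg C, STC cell temperature
t_min   = -10.0         # deg C, assumed record-low site temperature (illustrative)
vdc_max_inverter = 1000.0   # V, maximum inverter input voltage (Table 2)

voc_max = voc_stc + (t_min - t_stc) * kv        # equation (7)
n_max   = int(vdc_max_inverter // voc_max)      # equation (5), rounded down

print(voc_max, n_max)   # the 20 modules per string used here is well below this limit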

Minimum number of modules in a string

Excessively low array voltage can have a dramatic negative impact on PV system energy production. If the array voltage falls below the minimum operating voltage, the inverter may not be able to track the array maximum power point [23]. The minimum number of modules for an inverter can be calculated as follows.

N_min >= (minimum DC input voltage for the inverter) / (minimum expected module maximum power voltage, Vmp_min)     (8)

Vmp_min = Vmp + (temperature coefficient of Vmp) x (temperature difference)                    (9)

Vmp_min = Vmp + (T_max - T_STC) x (temperature coefficient of Vmp)                             (10)

Where
N_min   = Minimum number of modules in series.
Vmp     = Maximum power voltage.
Vmp_min = Minimum expected module maximum power voltage.
T_max   = Maximum temperature for the site.
T_STC   = Temperature at Standard Test Conditions.

Number of strings in parallel per inverter

The maximum number of parallel strings that can be connected to the inverter without causing current limiting can usually be determined by a simple calculation, without the need for temperature correction. To calculate the maximum number of parallel strings, divide the maximum inverter input current by either the module maximum power current (Imp) or the short-circuit current (Isc).

N <= (maximum inverter input current) / (maximum power current at STC)                         (11)

Where N = Maximum number of strings in parallel.

C. DC cable selection and sizing

In the selection and sizing of DC cables, in general, three criteria must be observed: the cable voltage rating, the current-carrying capacity of the cable, and the minimization of cable losses [20]. DC cabling consists of module, string and main cables as shown in fig.2.

Fig.2. PV array showing module, string and main Cables.

A number of cable connection systems are available:
i. Screw terminals.
ii. Post terminals.
iii. Spring clamp terminals.
iv. Plug connectors.
Plug connectors have become the standard in grid-connected solar PV plants due to the benefits that they offer in terms of installation ease and speed.
For module cables the following should apply:
Minimum voltage rating = VOC(STC) x 1.15                                  (12)
Minimum current rating = ISC(STC) x 1.25                                  (13)
In an array comprising N strings connected in parallel and M modules in each string, the sizing of cables should be based on the following.
Array with no string fuses (applies to arrays of three or fewer strings only):
Voltage: VOC(STC) x M x 1.15                                              (14)
Current: ISC(STC) x (N - 1) x 1.25                                        (15)
Array with string fuses:
Voltage rating = VOC(STC) x M x 1.15                                      (16)
Current rating = ISC(STC) x 1.25                                          (17)
The formulae that guide the sizing of the main DC cables running from the PV array to the inverter are given below:
Minimum voltage rating = VOC(STC) x M x 1.15                              (18)
Minimum current rating = ISC(STC) x N x 1.25                              (19)
In order to reduce losses, the overall voltage drop
between the PV array and the inverter (at STC) should be


minimized. A benchmark voltage drop of less than 3% is


suitable.
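A small sketch of the main DC cable checks is shown below: the minimum ratings follow (18) and (19) directly, while the voltage-drop figure depends on cable length, cross-section and material, which are assumed values here rather than data from the paper.

# Main DC cable ratings per equations (18)-(19) and the 3% voltage-drop benchmark.
voc_stc, isc_stc = 37.2, 8.87     # V, A per module (Table 1)
M, N = 20, 100                    # modules per string, strings per inverter

min_voltage_rating = voc_stc * M * 1.15      # equation (18)
min_current_rating = isc_stc * N * 1.25      # equation (19)

# Illustrative voltage-drop check (cable length, cross-section and copper
# resistivity are assumptions, not values from the paper).
rho_cu   = 0.0176e-6              # ohm*m, copper resistivity
length_m = 150.0                  # one-way run from combiner box to inverter
area_m2  = 300e-6                 # 300 mm^2 conductor
i_array  = isc_stc * N            # array current at STC (worst case)
v_array  = 30.1 * M               # array MPP voltage (Vmpp from Table 1)

r_loop   = 2 * rho_cu * length_m / area_m2   # out-and-back loop resistance
drop_pct = 100 * i_array * r_loop / v_array  # should stay below the 3% benchmark
print(min_voltage_rating, min_current_rating, round(drop_pct, 2))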
D. Junction boxes/combiner box

Junction boxes or combiners are needed at the point where the individual strings forming an array are marshalled and connected together in parallel before leaving for the inverter through the main DC cable. As a precaution, disconnects and string fuses should be provided.
E. String fuses

The main function of string fuses is to protect PV strings from over-currents. Miniature fuses are normally used in PV applications. Since faults can occur on both the positive and negative sides, fuses must be installed on all unearthed cables. To avoid nuisance tripping, the nominal current of the fuse should be at least 1.25 times greater than the nominal string current. The string fuse must be rated for operation at the string voltage using the formula
String fuse voltage rating = VOC(STC) x M x 1.15                          (20)
F. AC cabling

Cabling for AC systems should be designed to provide a safe, economic means of transmitting power from the inverters to the transformers and beyond. Cables should be rated for the operating voltage and should comply with relevant IEC or national standards. Examples include:
IEC 60502 for cables between 1 kV and 36 kV.
IEC 60364 for LV cabling (BS 7671 in the UK).
IEC 60840 for cables rated for voltages above 30 kV and up to 150 kV.
G. Sizing of transformer

The purpose of transformers in a solar power plant is to provide suitable voltage levels for transmission across the site and for export to the grid. In general, the inverters supply power at low voltage (LV), but for a commercial solar power plant the grid connection is typically made at upwards of 11 kV. It is therefore necessary to step up the voltage using a transformer between the inverter and the grid connection point.
Cables, fuses and switches are standard components meeting National Energy Commission (NEC) requirements. The proposed 25 MW system needs 100,000 solar PV modules with 250 watts of power generation at STC, arranged with 20 solar PV modules in series in each string and a total of 100 strings connected to one inverter of 500 kW. The selected and simulated components are given in fig.3.

Fig.3.Solar PV power plant component sizing [26].

IV. ENGINEERING

A. Solar fixed layout

The general layout of the plant and the distance chosen between rows of mounting structures will be selected according to the specific site conditions and location on earth, plotting its altitude and azimuth angle on a sun path diagram as shown in Fig.4 [20]. Computer simulation software can be used to help design the plant layout.

Fig.4. Sun path diagram for utility scale solar PV plant [26].


Tilt angle
Every location has an optimal tilt angle that maximizes the total annual irradiation (averaged over the whole year) on the plane of the collector [20]. For fixed-tilt grid-connected power plants, the theoretical optimum tilt angle may be calculated from the latitude of the site.

Panel tilt angle (β) = φ - δ                                              (21)

Declination angle (δ) = 23.45 sin[360(284 + n)/365.24]                    (22)

Where
β = tilt angle
δ = declination angle
φ = latitude of the site
n = day number of the year
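A short sketch of (21) and (22) follows; the site latitude and the chosen day of the year are illustrative assumptions.

import math

# Declination and fixed-tilt angle, equations (21) and (22).
def declination_deg(n):
    """Solar declination for day-of-year n, in degrees."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.24))

latitude_deg = 14.4    # assumed site latitude (illustrative)
n = 81                 # 21 March, as an example day

delta = declination_deg(n)        # equation (22), close to 0 at the equinox
tilt  = latitude_deg - delta      # equation (21)
print(round(delta, 1), round(tilt, 1))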

Inter-row spacing
The choice of row spacing is a compromise chosen to reduce inter-row shading while keeping the area of the PV plant within reasonable limits, reducing cable runs and keeping ohmic losses within acceptable limits. Inter-row shading can never be reduced to zero [24]: at the beginning and end of the day the shadow lengths are extremely long, so the inter-row spacing among arrays is maintained at about two times the array height.
Orientation

In the northern hemisphere, the orientation that optimizes the total annual energy yield is true south [20]; the collector plane orientation with a tilt angle of 35° is shown in fig.5.

Fig.5.Collector plane orientations [26].

B. Loss analysis
Various losses occur in a large-scale solar PV power plant. The aggregation of those losses in the large-scale power system is shown in fig.6 and the percentage representation of the losses in the system is shown in fig.7. The detailed aggregation of losses obtained using the PVsyst software is shown in fig.8.

Fig.6. Aggregation of losses in the large scale PV plant.

Fig.7. Percentage representation of various losses in the large scale solar PV power plant.

Fig.8. Detailed power losses considered in sizing the power plant [26].

V. MATLAB/SIMULINK MODELLING

The Matlab/Simulink model of a 25 MW utility-scale solar PV plant is shown in fig.9.


Fig.9. Simulink model for the 25 MW solar PV plant.

In fig.9 we have 25 subsystems, each of capacity 1 MW, from which a 1.25 MW, 744/25 kV transformer steps up the voltage; these are fed to the utility grid through transmission lines that transfer the power to the utility grid. Each subsystem has two solar PV panels, two inverters and two LCL filters to minimize the harmonic content present in the system due to the PWM inverters, as shown in fig.10.

Fig.10. 1 MW subsystem for the 25 MW PV plant.

A. PV array

Photovoltaic (PV) cells are used to convert sunlight into direct current (DC). Due to the low voltage and current generated in a PV cell, several PV cells are connected in series and then in parallel to form a PV module for the desired output. The modules in a PV array are usually first connected in series to obtain the desired voltages and the individual strings are then connected in parallel to allow the system to produce more current. The equivalent circuit of a PV array is shown in fig.11 [6], [8], [9].

Fig.11. Equivalent circuit of a PV array.

From the equivalent circuit in fig.11,

I_A = I_ph N_p - I_sh - I_d                                               (23)

I_ph = I_rr [ I_sc + K_i (T_op - T_ref) ]                                 (24)

I_d = I_S N_P { exp[ (V + I R_S) / (N_S n V_T C) ] - 1 }                  (25)

V_T = K T_op / q                                                          (26)

I_S = I_rs (T_op / T_ref)^3 exp[ (q E_g / (K n)) (1/T_ref - 1/T_op) ]     (27)

I_rs = I_sc / [ exp( q V_oc / (K C T_op n) ) - 1 ]                        (28)

I_sh = (I R_S + V) / R_P                                                  (29)

V_T = K T_op / q                                                          (30)

Where
I_A   = PV array output current
I_ph  = Solar cell photocurrent
I_sh  = Shunt current of the PV array
I_d   = Diode current of the PV array
N_p   = Number of modules in parallel
V_A   = Array output voltage
R_s   = Series resistance of the PV module
R_P   = Parallel resistance of the PV module
I_rr  = Cell reverse saturation current at temperature T_ref
I_sc  = Short-circuit current of the PV cell



K_i   = Short-circuit current temperature coefficient
T_op  = Operating temperature of the PV cell in Kelvin
T_ref = Reference temperature of the PV cell in Kelvin
I_S   = Reverse saturation current at T_op
V_T   = Terminal voltage of the PV cell
I_rs  = Cell reverse saturation current at temperature T_ref
E_g   = Band gap of the semiconductor used in the cell
K     = Boltzmann's constant, 1.380658 x 10^-23 J/K
q     = Electron charge, 1.60217733 x 10^-19 C
N_s   = Number of modules in series
N_p   = Number of modules in parallel
n     = p-n junction ideality factor
C     = Total number of cells in a PV module

By grouping all the above equations (23)-(30) together with the data available in table 1, the Simulink model for the PV array shown in fig.12 is obtained.
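The same set of equations can also be evaluated numerically outside Simulink. The sketch below solves (23)-(30) for a single CS6P-250P module; the series and shunt resistances, diode ideality factor and band gap are typical illustrative values (they are not listed in Table 1), and I_rr is interpreted here as the irradiance ratio G/1000 W/m2.

import math

# Single-diode PV array model following equations (23)-(30).
k, q = 1.380658e-23, 1.60217733e-19     # Boltzmann constant, electron charge
Isc, Voc, Ki, C = 8.87, 37.2, 0.00065, 60   # module data from Table 1
Ns, Np = 1, 1           # modules in series / parallel (single module here)
n, Eg  = 1.3, 1.12      # ideality factor, silicon band gap (assumptions)
Rs, Rp = 0.2, 300.0     # ohm, series / shunt resistance (assumptions)
Tref, Top = 298.15, 298.15              # K
Irr = 1.0               # irradiance ratio G / 1000 W/m^2 (interpretation of I_rr)

VT  = k * Top / q                                           # equation (26)
Iph = Irr * (Isc + Ki * (Top - Tref))                       # equation (24)
Irs = Isc / (math.exp(q * Voc / (k * C * Top * n)) - 1.0)   # equation (28)
Is  = Irs * (Top / Tref) ** 3 * math.exp(
        (q * Eg / (k * n)) * (1.0 / Tref - 1.0 / Top))      # equation (27)

def array_current(V, iters=50):
    """Solve I = Iph*Np - Ish - Id by fixed-point iteration, equation (23)."""
    I = Iph * Np
    for _ in range(iters):
        Id  = Is * Np * (math.exp((V + I * Rs) / (Ns * n * VT * C)) - 1.0)  # eq. (25)
        Ish = (I * Rs + V) / Rp                                             # eq. (29)
        I   = Iph * Np - Ish - Id
    return I

print(round(array_current(30.1), 2))   # current near the module MPP voltage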

Fig.12. Simulink model of a PV array.

B. Inverter

A three-phase inverter is attached to carry out the conversion of the array output power into AC power appropriate for injection into the grid [19]. Pulse-width modulation control is one of the techniques used to shape the phase of the inverter output voltage. The sinusoidal pulse-width modulation (SPWM) method produces a sinusoidal waveform by filtering an output pulse waveform with varying width. A high switching frequency leads to a better filtered sinusoidal output waveform. The desired output voltage is achieved by varying the frequency and amplitude of a reference or modulating voltage. The variations in the amplitude and frequency of the reference voltage change the pulse-width patterns of the output voltage but keep the sinusoidal modulation.

The modulation index [19] is defined as the ratio of the magnitude of the output voltage generated by SPWM to the fundamental peak value of the maximum square wave. Thus, the maximum modulation index of the SPWM technique is

MI = V_PWM / V_max-sixstep = (V_dc/2) / (2V_dc/π) = π/4 = 0.7855 = 78.55%      (31)

where V_PWM is the maximum output voltage generated by SPWM and V_max-sixstep is the fundamental peak value of a square wave.

C. Filter

In the grid-connected inverter, controlled power electronic devices such as IGBTs and GTOs are used, modulated by high-frequency PWM. As a result the du/dt and di/dt are very large [25]. Due to the presence of stray parameters, high-order harmonic currents flow into the power grid and cause harmonic pollution. The most common filter for the grid-connected inverter is the L filter [27]. In order to reduce the current ripple, the inductance has to be increased; as a result, the volume and weight of the filter increase. While the arrangement and the parameters of the LC filter are simple, its filtering effect is not good because of the uncertainty of the network impedance. The LCL filter has an inherently high cut-off frequency and strong penetrating capability at low frequencies, so it has come into extensive use in inverters [27].

Filter inductor design

For given DC bus voltage and AC output voltage and current, as the value of L increases the ripple content decreases, but the current tracking speed is reduced and the weight, volume and cost increase. Under the constraint of cost, how to design the inductance parameters for the best result is the key question. Based on the literature, the constraints can be summarised as:
1) Under rated conditions, the voltage drop of the inductive filter should be smaller than 5% of the network voltage.
2) The peak-to-peak amplitude of the harmonic current should be limited to within 10%-20% of the rated current of the inverter.
3) The inrush current of the inverter should be as small as possible.
4) In order to attain the best performance of the LCL filter, in the low-frequency range the current should be as smooth as possible and in the high-frequency range the attenuation rate should be as fast as possible.
5) The high-order harmonics should flow through the capacitance, and the low-order harmonics through the inductance.


X_C = 1/(sC);  X_L2 = sL_2                                                (32)

Therefore, the larger f is and the smaller X_C is, the better, or both X_C and X_L2 should be kept as small as possible. P is defined as the rated output power of the three-phase grid-connected inverter, cosφ is the power factor, V is the rated network voltage, and f is the fundamental frequency. Equation (33) can then be derived from constraint 1):

4πfPL / (3V^2 cosφ) <= 5%                                                 (33)

Δi_max = (2V_dc/3 + V) / (2Lf)                                            (34)

Δi_max <= [2P / (3V cosφ)] x (0.1~0.2)                                    (35)

Equations (34) and (35) can be derived from constraint 2) and reference [25]. Constraints 2)-5) show that the larger the values of L and C are, the better.

Filter capacitance

How to design the capacitance parameters is a further key question. If X_C is too large, not enough of the high-frequency harmonics flow through the shunt capacitor branch and, as a result, a large high-frequency harmonic current flows into the grid [27]. If X_C is too small, a large reactive current flows through the capacitor branch, which increases the inverter output current and the system losses. In general, when the resonant frequency of the filter capacitance and inductance lies within 1/4 to 1/5 of the carrier frequency, the filtering performance is best. The resonant frequency of the LCL filter can be described as

f_res = (1/2π) x sqrt[ (L_1 + L_2) / (L_1 L_2 C) ]                        (36)

Generally, the resonant frequency should be greater than 10 times the power frequency and smaller than 1/2 of the switching frequency.

C = 1 / (4π^2 f^2 L)                                                      (37)

In order to avoid a low power factor of the grid-connected inverter, the reactive power absorbed by the filter capacitance should not exceed 5% of the rated active power:

C <= λP / (6πf E_M^2)                                                     (38)

Considering equations (37) and (38), the filter capacitance value can be calculated, where E_M is the root-mean-square (RMS) value of the grid-connected phase voltage, f_1 is the fundamental frequency of the grid, and λ is the ratio of the fundamental power absorbed by the filter capacitor to the total power.

D. GRID IMPEDANCE

The grid impedance is the leakage impedance of the 25 MW distribution transformers [25]. It is calculated according to (39) using the transformer parameters:

L_g = (V_cc / 2πf) x (V_1L^2 / P_NT)                                      (39)

Where
L_g  = leakage impedance of the transformer
P_NT = power of the grid-connected transformer
V_1L = primary-side line voltage of the transformer
f    = grid frequency
V_cc = voltage tolerance of the transformer

The cable impedances of the PV plant and most parasitic impedances were neglected. These parasitic impedances mainly affect the high-frequency range and are difficult to estimate.
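A sketch of the capacitance selection and resonance checks described by (36)-(38) is given below for one 500 kW inverter; the grid phase voltage, switching frequency and the two inductances are illustrative assumptions, not values from the paper.

import math

# LCL filter capacitance bound and resonant-frequency checks, equations (36)-(38).
P      = 500e3       # W, rated inverter power
E_m    = 230.0       # V, RMS grid phase voltage (assumption)
f_grid = 50.0        # Hz, fundamental frequency
f_sw   = 3e3         # Hz, switching (carrier) frequency (assumption)
L1, L2 = 500e-6, 150e-6   # H, inverter-side / grid-side inductances (assumptions)
lam    = 0.05        # reactive-power share absorbed by the capacitor (5%)

c_max = lam * P / (6 * math.pi * f_grid * E_m ** 2)     # equation (38)
C     = 0.8 * c_max                                     # pick a value inside the bound

f_res = (1 / (2 * math.pi)) * math.sqrt((L1 + L2) / (L1 * L2 * C))   # equation (36)

# Guidelines quoted in the text: 10*f_grid < f_res < 0.5*f_sw,
# and f_res roughly within 1/5 to 1/4 of the carrier frequency.
print(round(c_max * 1e6, 1), round(C * 1e6, 1), round(f_res),
      10 * f_grid < f_res < 0.5 * f_sw,
      f_sw / 5 <= f_res <= f_sw / 4)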

VI. RESULTS AND DISCUSSIONS

Fig.13.500kW inverter output voltage

Fig.14.500kW inverter output current



Fig.15. 500 kW inverter output power

Fig.16. Transformer input voltage

Fig.17. Transformer input current

Fig.18. Transformer input power

Fig.19. Transformer output voltage

Fig.20. Transformer output current

Fig.21. Transformer output power

Fig.22. Grid voltage

Fig.23. Grid current

Fig.24. Power fed to the Grid

CONCLUSION

For developing the 25 MW utility-scale PV power plant, a site with a GHI of 5.65 kWh/m2/day has been considered in this paper; the plant requires a total area of 160582 m2 and the annual yield is 41313 MWh. The sizing of the basic components of the solar PV plant has been carried out using the PVsyst software. The present engineering analysis of the 25 MW solar photovoltaic (PV) power plant includes the loss analysis and the module orientation of the PV plant, which are also done using the PVsyst software. The modelling of the 25 MW solar PV power plant is done in the MATLAB/SIMULINK environment.


REFERENCES
1. P. Karki, B. Adhikary and K. Sherpa, "Comparative Study of Grid-Tied Photovoltaic (PV) System in Kathmandu and Berlin Using PVsyst," IEEE ICSET, 2012.
2. H. L. Tsai, C. S. Tu and Y. J. Su, "Development of generalized photovoltaic model using MATLAB/SIMULINK," in Proceedings of the World Congress on Engineering and Computer Science, San Francisco, USA, 2008.
3. M. G. Villalva, J. R. Gazoli and E. R. Filho, "Comprehensive approach to modeling and simulation of photovoltaic arrays," IEEE Transactions on Power Electronics, vol. 24, pp. 1198-1204, 2009.
4. A. A. Hassan, F. H. Fahmy, A. E.-S. A. Nafeh and M. A. El-Sayed, "Modeling and Simulation of a Single Phase Grid Connected," WSEAS Transactions on Systems and Control, pp. 16-25, 2010.
5. J. T. Bialasiewicz, "Renewable energy systems with photovoltaic power generators: Operation and modeling," IEEE Transactions on Industrial Electronics, vol. 55, pp. 2752-2758, 2008.
6. Jitendra Kasera, Ankit Chaplot and Jai Kumar Maherchandani, "Modeling and Simulation of Wind-PV Hybrid Power System using Matlab/Simulink," IEEE Students Conference on Electrical, Electronics and Computer Science, 2012.
7. K. T. Tan, P. L. So, Y. C. Chu and K. H. Kwan, "Modeling, Control and Simulation of a Photovoltaic Power System for Grid-connected and Standalone Applications," IEEE-IPEC, 2010.
8. J. M. Carrasco, L. G. Franquelo, J. T. Bialasiewicz, E. Galvan, R. C. Portillo-Guisado, M. A. Martin-Prats, J. I. Leon and N. Moreno-Alfonso, "Power electronic systems for the grid integration of renewable energy sources: a survey," IEEE Trans. Industrial Electronics, vol. 53, no. 4, pp. 1002-1016, 2006.
9. Soeren Baekhoej Kjaer, John K. Pedersen and Frede Blaabjerg, "A Review of Single-Phase Grid-Connected Inverters for Photovoltaic Modules," IEEE Transactions on Industry Applications, vol. 41, no. 5, September/October 2005.
10. D. Vasarevicius and R. Martavicius, "Solar Irradiance Model for Solar Electric Panels and Solar Thermal Collectors in Lithuania," Electronics and Electrical Engineering, Kaunas: Technologija, 2011, No. 2(108), p. 36.
11. M. G. Molina and P. E. Mercado, "Modeling and Control of Grid-connected PV Energy Conversion System used as a Dispersed Generator," 978-1-4244-2218-0/08, 2008 IEEE.
12. Soeren Baekhoej Kjaer, John K. Pedersen and Frede Blaabjerg, "Power Inverter Topologies for Photovoltaic Modules - A Review."
13. Verhoeven et al., "Utility aspects of grid connected photovoltaic power systems," International Energy Agency PVPS Task V, 1998.
14. M. Meinhardt, D. Wimmer and G. Cramer, "Multi-string-converter: The next step in evolution of string-converter," Proc. of 9th EPE, Graz, Austria, 2001.
15. S. Rustemli and F. Dincer, "Modeling of Photovoltaic Panel and Examining Effects of Temperature in Matlab/Simulink," Electronics and Electrical Engineering, No. 3(109), 2011.
16. F. Bouchafaa, D. Beriber and M. S. Boucherit, "Modeling and control of a grid connected PV generation system," 18th Mediterranean Conference on Control & Automation, Marrakech, Morocco, June 23-25, 2010.
17. I. H. Altas and A. M. Sharaf, "A Photovoltaic Array Simulation Model for Matlab-Simulink GUI Environment," Clean Electrical Power, ICCEP '07, International Conference on, 21-23 May 2007, pp. 341-345.
18. Huan-Liang Tsai, Ci-Siang Tu and Yi-Jie Su, "Development of Generalized Photovoltaic Model Using MATLAB/SIMULINK," Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, October 22-24, 2008, San Francisco, USA.
19. R. Mechouma, B. Azoui and M. Chaabane, "Three-Phase Grid Connected Inverter for Photovoltaic Systems, a Review," 2012 First International Conference on Renewable Energies and Vehicular Technology.

20. Anita Marangoly George, "Utility Scale Solar PV Plant: A Guide for Developers and Investors," 2012.
21. John Berdner, "Array to Inverter Matching: Mastering Manual Design Calculations," Solar Weekly.
22. John Berdner, "Scaling Up for Commercial PV Systems," Solar Weekly.
23. John Berdner, "Crystalline Silicon Photovoltaic Modules," Solar Weekly.
24. Stephen Smith, "Array Trackers Increase Energy Yield & Return on Investment," Solar Weekly.
25. Juan Luis Agorreta, Mikel Borrega, Jesus Lopez and Luis Marroyo, "Modeling and Control of N-Paralleled Grid-Connected Inverters With LCL Filter Coupled Due to Grid Impedance in PV Plants," IEEE.
26. PVsyst software.
27. F. Liu, X. Zha and S. Duan, "Three-phase inverter with LCL filter design parameters and research," Electric Power Systems, March 2010, pp. 110-115.


LITERATURE REVIEW ON EFFICIENT DETECTION AND FILTERING OF HIGH DENSITY IMPULSE NOISE - A NOVEL ADAPTIVE WEIGHT ALGORITHM

Ch. V. Nagendra Babu, M.Tech (DECS) Student, nagendra.cherukupalli@gmail.com
N. Pushpalatha, Associate Professor, pushpalatha_nainaru@rediffmail.com
B. Neelima, Assistant Professor, neeli405@gmail.com
Department of ECE, Annamacharya Institute of Technology and Sciences (AITS), Tirupati, India - 517520

Abstract - This paper presents a literature review on image filters used to remove salt and pepper noise. Image filtering, which is fundamental and crucial for visual perception by the human eye, can remove noise from a noisy image. There are various image filtering techniques to de-noise a noisy image, and each filtering technique has its own features. The overall goal of this paper is to explore the benefits and limits of the existing techniques. A few of the common existing salt and pepper noise filtering algorithms include the Traditional Median (TM) filter algorithm, the Switching Median (SM) filter algorithm and the Decision Based Median algorithm. It is found that the Adaptive Weight algorithm has some advantages over the existing techniques when reducing salt and pepper noise.

Key Terms: Impulse noise, Salt and pepper noise, Traditional Median Filter (TMF), Switching Median Filter (SMF), Decision Based Median (DBM) algorithm.

I. INTRODUCTION
Images captured by cameras may contain noise due to malfunctioning camera pixels, and the captured images are often polluted by various noises while they are generated or transmitted. There are different types of noise, such as white noise and salt and pepper noise, which affect the appearance of an image. Among all types of noise, salt and pepper noise is the most frequent one. Salt and pepper, or "impulsive", noise is sometimes also called spike noise. It presents itself as randomly occurring white and black pixels: an image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions. This type of noise can be caused by analog-to-digital converter errors, bit errors in transmission, etc. Here, the noise is caused by errors in the data transmission. The corrupted pixels are set either to the maximum value (which looks like snow in the image) or have single bits flipped over. In some cases, single pixels are set alternately to zero or to the maximum value, giving the image a `salt and pepper' like appearance. Noise-free pixels always remain unchanged. The noise is generally quantified by the percentage of pixels which are corrupted.
Image noise is a random variation of brightness or color information in images, and is generally an aspect of electronic noise. Digital image processing is an electronic domain in which the image is divided into small units called pixels and various operations are subsequently carried out on the pixels. Noise usually originates in the sensor or transmission channel during the acquisition and transfer of the digital image. In the digital image processing field, removing noise from the image is a critical issue.
In past years, linear filters were the most popular filters in image signal processing. The reason for their popularity is the existence of robust mathematical models which can be used for their design and analysis. However, there are many areas in which nonlinear filters provide significantly better results. The benefit of nonlinear filters lies in their ability to preserve edges and suppress the noise without loss of detail. The success of nonlinear filters is due to the fact that image signals, as well as the existing noise types, are usually nonlinear. As salt and pepper noise is a random shot noise, it is very hard to remove this type of noise using linear filters. Median filters are non-linear, and median-based filters have attracted much attention during the last decade due to their simplicity and information preservation capabilities [1][2]. A few of the median image filters used to de-noise salt and pepper noise include the Traditional Median (TM) filter algorithm, the Switching Median (SM) filter algorithm and the Decision based filtering


algorithm. The Traditional Median filter and Switching Median (SM) filter algorithms are good at lower noise densities, because only a small number of noisy pixels need to be replaced with median values [3][4].
II. RELATED WORK
2.1 EXISTING IMAGE FILTERING
TECHNIQUES
A. TRADITIONAL MEDIAN FILTER
Median filtering is a non-linear filtering
technique that is well known for the ability to
remove impulsive type noise, while preserving sharp
edges. The median filter is an order statistics filter.

The mean filter is also used to remove impulse noise: it replaces a pixel with the mean of the neighboring pixel values, but it does not preserve image details, and some pixel details are removed by the mean filter algorithm. With the median filter, we do not replace the pixel value with the mean of the neighboring pixel values; instead it is replaced with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood in ascending numerical order and then replacing the pixel being considered with the middle pixel value. In the median filter, the pixel value at a point p is replaced by the median of the pixel values of the 8-neighbourhood of p. Fig. 2 illustrates an example calculation.
123 125 126 130 140
122 124 126 127 135
118 120 150 125 134
119 115 117 121 133
111 116 110 120 130

Fig 2: Pixels of an image


Neighborhood values: 115, 117, 120, 121, 124, 125, 126, 127, 150.
Median value: 124.
The median filter gives the best result when the impulse noise percentage is less than 0.1%. When the quantity of impulse noise is increased, the median filter does not give the best result.

Fig.3. Experimental results of the traditional median filter.

Fig.1. Flow chart of the traditional median filter.

The procedural steps for traditional median filtering are:
Step 1: Consider the image pixel matrix of size [m x n].
Step 2: Pre-allocate another matrix of size [m+2 x n+2], i.e., pad the original image pixel matrix with zeros on all sides.


Step 3: Consider a window of size 3 x 3 (the window can be of any size).
Step 4: The value to be changed is the middle value of the considered window matrix.
Step 5: Sort the window matrix. After sorting, the output value is the median value of the neighborhood pixels.
The mean value is the average of all the values: it is calculated by adding all the values together and then dividing by the number of values. The median, in contrast, is the middle number of all the values: it is calculated by placing all values in ascending order and taking the middle value. The calculation of the median value is explained in fig.2. Traditional median filtering is best known for its simplicity, but it has the limitation that it is not suitable when the noise is higher than 25%. The main drawback of the median filter is that it also modifies non-noisy pixels, thus removing some fine details of the image. Therefore it is only suitable for very low noise densities [5]. At high noise density it shows blurring.
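A minimal Python sketch of the 3 x 3 traditional median filter, applied to the example grid of Fig. 2, is shown below; border pixels are simply left unchanged here instead of zero-padding as in Step 2.

import statistics

# 3x3 median filtering of the example grid in Fig. 2.
img = [
    [123, 125, 126, 130, 140],
    [122, 124, 126, 127, 135],
    [118, 120, 150, 125, 134],
    [119, 115, 117, 121, 133],
    [111, 116, 110, 120, 130],
]

def median_filter(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out

filtered = median_filter(img)
print(filtered[2][2])   # the noisy 150 at the centre is replaced by 124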


B. SWITCHING MEDIAN FILTER

Switching median filters [6] are well-known image filtering algorithms. Detecting noisy pixels and processing only the noisy pixels is the main principle of switching-based median filters. There are three stages in switching-based median filtering, namely detection of noise, noise-free pixel estimation and replacement. The principle of detecting noisy pixels and processing only noisy pixels has been effective in reducing processing time as well as image degradation. The mathematical operation of the switching median filter is explained below.

Let {x(i-L, j-L), ..., x(i, j), ..., x(i+L, j+L)} represent the input samples in the (2L+1) x (2L+1) sliding window, where x(i, j) is the current pixel located at position (i, j) in the image. With m(i, j) denoting the median of the window and d(i, j) = |x(i, j) - m(i, j)|, the output of the SWM filter is defined as

y(i, j) = m(i, j)  if d(i, j) >= Ti
y(i, j) = x(i, j)  if d(i, j) < Ti

Ti is a threshold and y(i, j) is the filtered pixel located at position (i, j). d(i, j) >= Ti means that the current pixel is very different from its neighbors and can be treated as noise; d(i, j) < Ti denotes that the current pixel is regarded as a noise-free pixel. In fact, the impulse noise value is distributed uniformly; once its value is close enough to its neighbors that d(i, j) < Ti, the noisy pixel cannot be detected by SWM. Hence, this noisy pixel cannot be filtered unless the threshold is lowered. The lower the threshold, the more noise pixels are detected, but the fewer detail pixels are preserved. In other words, there is a trade-off between noise detection and detail preservation when tuning the threshold [7].
Fig.4. Flow chart of Switching Median Filter


The limitation of the switching median filter is that defining a robust decision measure is difficult, because the decision is usually based on a predefined threshold value [8][9]. In addition, the noisy pixels are replaced by some median value in their vicinity without taking into account local features such as the presence of edges. As a result, edges and fine details are not recovered satisfactorily, particularly when the noise level is very high. In order to overcome these drawbacks, a two-phase algorithm has been proposed. In the first phase an adaptive median filter is used to classify noise-affected and unaffected pixels. In the second phase, a specialized regularization method is applied to the noisy pixels to preserve the edges while suppressing the noise. Because of this, some details and edges are still removed, particularly in the case of high noise density; for noise densities higher than 50%, the switching median filter is not suitable.
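The sketch below illustrates the switching median idea: only pixels whose deviation from the local median exceeds a threshold T are replaced. Using the absolute difference from the window median as the detection statistic is an assumption consistent with the description above.

import statistics

# Switching median filter sketch: detect first, then replace only detected pixels.
def switching_median(image, T=30, L=1):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(L, h - L):
        for j in range(L, w - L):
            window = [image[i + di][j + dj]
                      for di in range(-L, L + 1) for dj in range(-L, L + 1)]
            m = statistics.median(window)
            if abs(image[i][j] - m) >= T:   # detected as noise -> replace with median
                out[i][j] = m               # otherwise the original pixel is kept
    return out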
C. DECISION BASED ALGORITHM
To overcome the drawbacks of the switching median filter, the decision based algorithm has been proposed [10]. The decision based algorithm first detects the impulse noise in the image. The noise-affected and unaffected pixels are detected by checking the pixel value against the maximum and minimum values in the selected window. The maximum and minimum values that the impulse noise takes lie at the ends of the dynamic range (0, 255). If the pixel currently being processed has a value within the minimum and maximum values of the processing window, then it is an uncorrupted pixel and no modification is made to it. If the value does not lie within the range, then it is a corrupted pixel and will be replaced by either the median pixel value or by the mean of the neighborhood processed pixels (if the median itself is noisy), which ensures a smooth transition among the pixels. In the case of high noise density the median value itself can be noisy; in this case, the pixel value is replaced by the mean of the neighborhood processed pixels.
In the 3 x 3 window, P1, P2, P3 and P4 indicate already processed pixel values, C indicates the current pixel being processed, and Q1, Q2, Q3 and Q4 indicate the pixels yet to be processed. If the median value of this window itself is noisy, then the current pixel value C will be replaced by the mean of the neighborhood processed pixels, that is, the mean of P1, P2, P3 and P4. The values of the pixels Q1, Q2, Q3 and Q4 are not taken into account, since they represent unprocessed pixels.
The steps of the algorithm are elucidated as follows:
Step 1: Select a two-dimensional window W of size 3 x 3. Assume that the pixel being processed is P(i,j).
Step 2: Compute W_min, W_med and W_max, the minimum, median and maximum of the pixel values in the window W respectively.
Step 3:
Case (i): If W_min < P(i,j) < W_max, then P(i,j) is an uncorrupted pixel and its value is left unchanged; otherwise P(i,j) is a noisy pixel.
Case (ii): If P(i,j) is a noisy pixel, it is replaced by the median value W_med, but only if W_min < W_med < W_max.
Case (iii): If W_min < W_med < W_max is not satisfied, then W_med itself is a noisy pixel value. In this case, P(i,j) is replaced by the mean of the neighborhood processed pixels.
Step 4: Repeat Steps 1 to 3 until all the image pixels are processed.
In the decision based algorithm, the nature of the pixel being processed, that is, whether it is corrupted or not, is checked first. Then the value of the pixel is replaced with the corresponding value as in Cases (i), (ii) and (iii) of Step 3. The window is then moved to form a new set of values, with the next pixel to be processed at the window centre. This process is repeated till the last image pixel is processed.
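A compact sketch of Steps 1-4 is given below; treating the four raster-order predecessors as the already-processed neighbours P1-P4 is an assumption about the window layout described above.

import statistics

# Decision-based filtering sketch following Steps 1-4 above.
def decision_based_filter(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            w_min, w_med, w_max = min(window), statistics.median(window), max(window)
            p = image[i][j]
            if w_min < p < w_max:
                continue                    # Case (i): uncorrupted, keep the pixel
            if w_min < w_med < w_max:
                out[i][j] = w_med           # Case (ii): replace with the median
            else:                           # Case (iii): the median itself is noisy
                prev = [out[i - 1][j - 1], out[i - 1][j],
                        out[i - 1][j + 1], out[i][j - 1]]
                out[i][j] = sum(prev) / len(prev)
    return out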
The limitation of the decision based filter is that it is based on a predefined threshold. In this algorithm the image is de-noised using a 3 x 3 window: a pixel is de-noised only if its value is 0 or 255, otherwise it is left unaltered. At very high noise density the median value itself will often be 0 or 255, i.e. noisy, so in such a case a neighboring pixel is used


for replacement. Such repeated replacement of the neighboring pixel produces a streaking effect.
III. CONCLUSION
In this paper we have discussed different existing image filtering methods (the traditional median filtering algorithm, the switching median filtering algorithm and the decision based algorithm) used to remove salt and pepper noise.
A new adaptive weight algorithm has been proposed which gives better performance in terms of PSNR [11]. The proposed method consists of two major blocks, detection and filtering. The detection block uses neighborhood pixel correlations to divide the pixels into signal pixels and noise pixels. Only noise pixels are processed and signal pixels are kept the same. For the filtering block, different approaches are used according to the noise density: in the low noise density case, the mean of the neighborhood signal pixels is adopted, while in the high noise density case a new adaptive weight algorithm is used.
The performance of the proposed algorithm has been tested at low, medium and high noise densities on grayscale images. Even at high noise density levels the algorithm gives better results in comparison with other existing image filtering techniques. Both visual and quantitative results have been demonstrated. The adaptive weight algorithm, the proposed algorithm, is effective for salt and pepper noise removal in images at high noise densities.

REFERENCES

1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Boston, 2005.
2. T. Sun and Y. Neuvo, "Detail-preserving median based filters in image processing," Pattern Recognition Letters, vol. 15, no. 4, 1994, pp. 341-347.
3. Z. Wang and D. Zhang, "Progressive switching median filter for the removal of impulse noise from highly corrupted images," IEEE Transactions on Circuits and Systems-II, vol. 46, 1999, pp. 78-80.
4. H. Hwang and R. A. Haddad, "Adaptive median filter: new algorithms and results," IEEE Transactions on Image Processing, vol. 4, no. 4, 1995, pp. 499-502.
5. J. Astola and P. Kuosmaneen, Fundamentals of Nonlinear Digital Filtering, Boca Raton, FL: CRC, 1997.
6. S. Zhang and M. A. Karim, "A new impulse detector for switching median filters," IEEE Signal Processing Letters, vol. 9, no. 11, 2002, pp. 360-363.
7. R. H. Chan, C. W. Ho and M. Nikolova, "Salt-and-pepper noise removal by median-type noise detectors and detail preserving regularization," IEEE Transactions on Image Processing, vol. 14, no. 10, 2005, pp. 1479-1485.
8. P. E. Ng and K. K. Ma, "A switching median filter with boundary discriminative noise detection for extremely corrupted images," IEEE Trans. Image Process., vol. 15, no. 6, pp. 1506-1516, Jun. 2006.
9. V. Jayaraj and D. Ebenezer, "A new switching-based median filtering scheme and algorithm for removal of high-density salt and pepper noise in images," EURASIP J. Adv. Signal Process., 2010.
10. K. S. Srinivasan and D. Ebenezer, "A new fast and efficient decision based algorithm for removal of high density impulse noise," IEEE Signal Process. Lett., vol. 14, no. 3, pp. 189-192, Mar. 2007.
11. Zhu, Rong, and Yong Wang, "Application of Improved Median Filter on Image Processing," Journal of Computers, vol. 7, no. 4, 2012, pp. 838-841.


TIME CONSTRAINED SELF-DESTRUCTING DATA SYSTEM (SeDaS) FOR DATA PRIVACY

S. Savitha, PG Scholar, Department of CSE, Adhiyamaan College of Engineering, Hosur-635109, Tamil Nadu, India. savithasclick@gmail.com

Abstract-- With the development of cloud computing and the popularization of the mobile Internet, cloud services are becoming more and more important in people's lives, and users routinely post personal credentials such as passwords, account numbers and much more. These details are cached and archived by cloud service providers, where security is an important issue to be taken into consideration. Self-destructing data aims at providing privacy for these data, which are destructed after a user-specified time: the data, along with all its copies, become unreadable after a certain period of time. To meet this challenge, cryptographic techniques together with an active storage framework are used. Better performance for uploading/downloading files has also been achieved compared to the previous system. This paper thus gives a short analysis of how research has been carried out in these areas with various techniques.

Dr. D. Thilagavathy, Professor, Department of CSE, Adhiyamaan College of Engineering, Hosur-635109, Tamil Nadu, India. thilagakarthick@yahoo.co.in

Index Terms-- cloud computing, time constrained self-destruction, active storage, data privacy

I. INTRODUCTION

Internet-based development and the use of computer technology have opened up several trends in the era of cloud computing. The software as a service (SaaS) computing architecture, together with cheaper and more powerful processors, has transformed data centers into pools of computing services on a huge scale. Services that reside solely in remote data centers can be accessed with high quality thanks to increased network bandwidth and reliable network connections. Moving data into the cloud offers great convenience to users, since they don't have to care about the complexities of direct hardware management.
Cloud computing vendors like Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) are well known to all. As people rely more and more on the Internet and cloud technology, the privacy of the users must be achieved through an important issue called security. When data is transformed and processed, it is cached and copied on many systems in the network without the knowledge of the users, so there are chances of the private details of the users being leaked through the cloud service provider's negligence, hackers' intrusion or some legal actions.
Vanish [1] provides an idea for protecting and sharing privacy where the secret key is divided and stored in a P2P system with a distributed hash table (DHT).

Fig. 1. The Vanish system architecture [1]

In order to avoid hopping attacks, which are one kind of Sybil attack [18], [19], a new scheme called SafeVanish [4] is used, which extends the length range of the key shares along with some enhancement of the Shamir secret sharing algorithm [2] implemented in the Vanish system.

Fig. 2(a). The push operation in the VuzeDHT network.


Fig. 2(b). Hopping Attack

Fig. 3. Increasing the length of range of key shares [4]

II. RELATED WORK

In the cloud, providing privacy for the data stored in it is a major task, and performance measures are also important for achieving excellence. Storage and retrieval therefore play an important role; Object-Based Storage (OBS) [21] uses an object-based storage device (OSD) [22] as the underlying storage device. The T10 OSD standard [22] is being developed by the Storage Networking Industry Association (SNIA) and the INCITS T10 Technical Committee. Each OSD consists of a CPU, network interface, ROM, RAM and a storage device (disk or RAID subsystem), and exports a high-level data object abstraction on top of the device block read/write interface.

Another scenario for storing data and files is the active storage framework, which has become one of the most important research branches in the domain of intelligent storage systems. For instance, Wickremesinghe et al. [34] proposed a model of load-managed active storage, which strives to integrate computation with storage access in a way that lets the system predict the effects of offloading computation to Active Storage Units (ASU). Hence, applications can be configured to match hardware capabilities and load conditions. MVSS [35], a storage system for active storage devices, provided a single framework to support various services at the device level. MVSS separated the deployment of services from file systems and thus allowed services to be migrated to storage devices.

III. DISCUSSION AND RESULT

Various techniques have been covered to provide security for the data stored in the cloud, along with performance evaluation for uploading and downloading files. Researchers have mainly concentrated on the algorithms used for key encryption/decryption and sharing. Let us discuss the various approaches that have been used.

Paper [3] describes a Vanish implementation that leads to two Sybil attacks, where the encryption keys are stored in the million-node Vuze BitTorrent DHT. These attacks happen by crawling the DHT and saving each stored value before its time runs out; more than 99% of Vanish messages can be recovered with their keys efficiently in this way.

According to paper [5], in order to take advantage of the processing capabilities of service migration, a method known as active storage is used. However, earlier work implemented a model of service execution that remained a passive, request-driven mode. In a self-management scenario, a mechanism for automatic service execution is important and has been implemented. To handle this problem, an active storage framework for object-based storage devices is employed that provides a hybrid approach combining the request-driven model and the policy-driven model. Based on the requirements of active storage, some enhancements added to the current version of the T10 OSD specification are given in the paper. Finally, a classification system example is shown in which, with the assistance of the active storage mechanism, the network delay can be dramatically reduced.


Fig. 4. Active Storage in context of parallel file systems [5], [12]

According to paper [9], a parallel I/O interface is introduced that executes data analysis, mining and statistical operations on an active storage system. A scheme is proposed in which common analysis kernels are embedded in parallel file systems. It is shown experimentally that the overall performance of the proposed system improves by 50.9% across all four benchmarks, and that the compute-intensive portion of the k-means clustering kernel can be improved by 58.4% through GPU offloading when executed with a larger computational load.
According to paper [11], in order to reduce the data management cost and to address security concerns, a concept called FADE is used to outsource data to third-party cloud storage services. FADE is designed to be readily deployable in a cloud storage system and focuses on protecting deleted data with policy-based file assured deletion. FADE guarantees the privacy and integrity of the outsourced data files, encrypting them using standard cryptographic techniques. Most importantly, it assuredly deletes files to make them unrecoverable by anyone (including those who manage the cloud storage) when those files are later accessed. This objective is implemented by a working prototype of FADE atop Amazon S3, one of today's cloud storage services, which provides policy-based file assured deletion with minimal performance overhead. This work provides insights into how to incorporate value-added security features into data outsourcing applications.
According to paper [18], the Sybil attack is discussed in detail as it occurs in a distributed hash table (DHT). Sybil attacks represent the situation where a particular service in an identity-based system is subverted by forging identities. The Sybil attack refers to the situation where an adversary controls a set of fake identities, each called a Sybil, and joins a targeted system multiple times under these Sybil identities. In this paper, identity-based systems are considered where each user is intended to have a single identity and is expected to use this identity when interacting with other users in the system. In such systems, a user with multiple identities is called a Sybil user, and each identity the user uses is a Sybil identity. The solution to this attack has been given in the SafeVanish paper [4].

IV. PROPOSED WORK

As proposed, security measures are taken effectively for the files stored on the cloud server. Hence, in order to avoid unauthorized control over the users' personal data, SeDaS is proposed. The aim of the self-destructing data system is to destruct all the data along with its copies, whether cached or archived, after a certain period of time, so that it becomes unreadable even to the administrator (say the CSP) who maintains it. Whenever the user uploads/downloads a file, SeDaS works such that a TTL (time-to-live) parameter is assigned to that particular file. This can be implemented by using the Shamir secret sharing algorithm, which is one of the strongest algorithms in use. An easy solution can be provided by using the Spring MVC framework, which offers a model-view-controller architecture and ready components for developing a flexible and loosely coupled web application; it has interceptors as well as controllers, making it easy to factor out behavior common to the handling of many requests, and it helps to create high-performing, easily testable, reusable code.
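To illustrate the key-splitting step mentioned above, the following toy (k, n) Shamir secret sharing sketch splits a small key value over a prime field and reconstructs it from any k shares; the prime and the key encoding are illustrative choices, not the parameters used by SeDaS or Vanish.

import random

# Toy (k, n) Shamir secret sharing over a prime field.
PRIME = 2 ** 127 - 1   # a Mersenne prime large enough for a 16-byte key piece

def make_shares(secret, k, n):
    # Random polynomial of degree k-1 with the secret as the constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):        # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key_piece = 123456789
shares = make_shares(key_piece, k=3, n=5)
assert reconstruct(shares[:3]) == key_piece   # any 3 of the 5 shares suffice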

Fig. 5. SeDaS system architecture


Storing data in the cloud might be safe on one side, but on the other hand what if the confidential data gets misused? There is also some data residing in the cloud which has not been used for years and years; this leads to lower performance in the cloud and issues with network traffic. This paper gives a solution to the above problems with the help of SeDaS, and the latency and throughput performance measures are thereby improved.
V. CONCLUSION
In the cloud computing environment many techniques have been used to provide security for the users' data/files. As seen from the above information, many researchers have proposed techniques and ideas for the same purpose. According to the above analysis, many techniques have been put to work where the data disappears, but without the knowledge of the user. SeDaS makes sensitive information such as credential details self-destruct, without any action on the user's part, so that the details are unreadable to anyone afterwards, supported by an object-based storage technique. The experimental security analysis demonstrates the practicability of the approach. This time-constrained system can help provide researchers with valuable experience to inform the future of cloud services.
REFERENCES
[1] R. Geambasu, T. Kohno, A. Levy, and H. M. Levy,
Vanish: Increasing data privacy with self-destructing
data, in Proc. USENIX Security Symp., Montreal,
Canada, Aug. 2009, pp. 299315.
[2] A. Shamir, How to share a secret, Commun. ACM, vol.
22, no. 11, pp. 612613, 1979.
[3] S. Wolchok, O. S. Hofmann, N. Heninger, E. W. Felten, J. A. Halderman, C. J. Rossbach, B. Waters, and E. Witchel, Defeating Vanish with low-cost Sybil attacks against large DHTs, in Proc. Network and Distributed System Security Symp., 2010.

[4] L. Zeng, Z. Shi, S. Xu, and D. Feng, Safevanish: An


improved data self-destruction for protecting data
privacy, in Proc. Second Int. Conf. Cloud Computing
Technology and Science (CloudCom), Indianapolis, IN,
USA, Dec. 2010, pp. 521528.
[5] L. Qin and D. Feng, Active storage framework for
object-based storage device, in Proc. IEEE 20th Int.
Conf.
Advanced Information
Networking and
Applications (AINA), 2006.
[6] S. W. Son, S. Lang, P. Carns, R. Ross, R. Thakur, B.
Ozisikyilmaz, W.-K. Liao, and A. Choudhary, Enabling
active storage on parallel I/O software stacks, in Proc.
IEEE 26th Symp. Mass Storage Systems and
Technologies (MSST), 2010.
[7] Y. Tang, P. P. C. Lee, J. C. S. Lui, and R. Perlman,
FADE: Secure overlay cloud storage with file assured
deletion, in Proc. SecureComm, 2010.
[8] J. R. Douceur, The sybil attack, in Proc. IPTPS 01:
Revised Papers from the First Int. Workshop on Peer-toPeer Systems, 2002.
[9] T. Cholez, I. Chrisment, and O. Festor, Evaluation of
sybil attack protection schemes in kad, in Proc. 3rd Int.
Conf. Autonomous Infrastructure,Management and
Security, Berlin, Germany, 2009, pp. 7082.
[10] M. Mesnier, G. Ganger, and E. Riedel, Objectbased
storage, IEEE Commun. Mag., vol. 41, no. 8, pp. 8490,
Aug. 2003.
[11] R. Weber, Information TechnologySCSI object-based
storage device commands (OSD) - vol. 41, no. 8, pp. 84
90, Aug. 2003.
[12] R. Wickremesinghe, J. Chase, and J. Vitter, Distributed
computing with load-managed active storage, in Proc.
11th IEEE Int. Symp. High Performance Distributed
Computing (HPDC), 2002, pp. 1323
[13] X. Ma and A. Reddy, MVSS: An active storage
architecture, IEEE Trans. Parallel Distributed Syst., vol.
14, no. 10, pp. 9931003, Oct. 2003.


ANALYSIS ON PACKET SIZE OPTIMIZATION


TECHNIQUES IN WIRELESS SENSOR
NETWORKS
P. Venkatesh, PG Scholar, Department of CSE, Adhiyamaan College of Engineering, Hosur-635109, Tamil Nadu, India. venkimahalakshmi10@gmail.com
Dr. M. Prabu, Professor, Department of CSE, Adhiyamaan College of Engineering, Hosur-635109, Tamil Nadu, India. prabu_pdas@yahoo.co.in

Abstract--The foremost issue in wireless sensor networks is the energy constraint, and packet size plays an important role in it. A large packet size may cause data bit errors and requires more frequent re-transmissions, whereas a small packet size is easier to handle and produces more efficient results. However, short packets introduce higher per-packet overhead and startup energy consumption. Consequently, to develop energy-efficient wireless sensor networks, an optimal packet size must be chosen. This paper presents a short analysis of the various techniques developed by researchers in this area and of their effect on the performance of wireless sensor networks.
Index Terms--Packet length optimization, link estimation, aggregation, fragmentation, wireless sensor networks.

I. INTRODUCTION
A wireless sensor network (WSN) is a collection of sensing devices that can communicate wirelessly. Each device can perform three important tasks: sense, process, and talk to its peers. The network typically has a centralized collection point (sink or base station). A WSN can be defined as a set of network devices, denoted as nodes, which can sense the environment and communicate through wireless links. The data are forwarded, possibly via multiple hops, to a sink that can use them locally or is connected to another network (e.g. the Internet) through a gateway. The nodes can be stationary or mobile, and they can be homogeneous or not [1].
The traditional single-sink WSN may suffer from a lack of scalability: as the number of nodes increases, the amount of data gathered by the sink increases, and once its capacity is reached the network cannot grow further. Furthermore, for reasons related to MAC and routing aspects, network performance cannot be considered independent of the network size.

Fig.1. Architecture of wireless sensor network

As there are many problems in the single-sink scenario, moving to a multiple-sink scenario improves scalability and also increases the performance of the WSN as the number of nodes grows, which is not possible in the single-sink case. In many cases nodes send the collected data to one sink, selected among many, which forwards the data to the gateway and towards the final user. The selection of the sink is based on suitable criteria such as minimum delay, maximum throughput, or minimum number of hops. Hence the presence of multiple sinks ensures better network performance than the single-sink case, although the communication protocol is more complex and must be designed according to suitable criteria [2].
WSNs can be used for a variety of applications such as environment monitoring [3], healthcare, positioning [4] and tracking [5]. The applications of wireless sensor networks can be classified according to Event Detection (ED) and Spatial Process Estimation (SPE).


Fig. 2. Left: single-sink scenario; right: multi-sink scenario [2]

In the ED scenario, the sensors are deployed to detect events such as a fire in a forest or an earthquake. In the SPE scenario they are deployed to monitor a physical phenomenon (for example atmospheric pressure over a wide area or temperature variation in a small volcanic site), which can be modeled as a bi-dimensional random process (generally non-stationary).
Power consumption plays an important role in WSNs, so designers now mainly focus on power-aware protocols and algorithms for the design of energy-efficient sensor networks, covering all the operations to be performed in the network, such as sensing, processing and forwarding information to the sink node. Hence power consumption and power management are very important in wireless sensor networks [1].
II. RELATED WORK
II. RELATED WORK
In WSNs the packet size is a major issue that directly affects the reliability and the performance of communication between nodes, so the packet size must be chosen optimally. In the first scenario the packet size is long, which causes data bit corruption and data packet re-transmission [6]; power consumption is also high during the transmission of long data packets to the sink, which ultimately degrades the performance of the WSN. In the second scenario the packet size is small, which increases data transmission reliability and reduces data bit errors. However, short packets also degrade the performance of the WSN through higher per-packet overhead, and the management of packets at each node becomes complicated. Many techniques have therefore been developed to find an optimal packet size for the WSN; most researchers suggest a fixed packet size [7], while a minority promote the use of dynamic packet lengths [8], i.e. variable-size data packets. In this survey a number of techniques for obtaining an appropriate data packet size in WSNs are discussed, together with the conclusion for each technique.
III. DISCUSSION AND RESULT
Various techniques are used for packet size optimization in wireless sensor networks, and a range of them have been developed by different researchers. The researchers have mainly focused on two approaches: either a fixed packet size or a variable packet size. In this section we discuss those approaches and their results.
A. Fixed size packet in WSN
In [7] the authors use a fixed packet size in WSNs rather than a variable packet size. Even though a variable packet size can increase the throughput of the channel and enhance the transmission mechanism of the wireless sensor network, the simplicity of such an independent system is compromised. Since choosing a variable packet size leads to resource-management overhead, they choose fixed-size data packets for an energy-efficient WSN. Basically, there are three fields in a data packet:
1) Packet header
2) Payload/data segment
3) Packet trailer
The packet header contains many fields that are usually less important for WSN nodes, and removing them helps reduce the packet size. Those fields include the current segment number, the total number of segments, packet identifiers, and source and destination identifiers [7]. By employing this method the overall throughput and efficiency are increased.

Fig 3. Packet format [7]
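As a rough illustration of the reduced-header idea (the exact field layout of [7] is not reproduced here; the field names and sizes below are assumptions made only for the example), this Python sketch packs a trimmed header and reports how many header bytes are saved per packet:

import struct

# Hypothetical "full" header: seg_no, total_segs, packet_id, src_id, dst_id (2 bytes each)
FULL_HEADER = struct.Struct(">HHHHH")
# Trimmed header after dropping fields that matter little to a WSN node: packet_id only
TRIMMED_HEADER = struct.Struct(">H")

def frame(payload: bytes, packet_id: int) -> bytes:
    """Build a frame with the trimmed header, payload and a 2-byte checksum trailer."""
    header = TRIMMED_HEADER.pack(packet_id)
    trailer = struct.pack(">H", sum(payload) & 0xFFFF)   # toy checksum, not a real CRC
    return header + payload + trailer

if __name__ == "__main__":
    payload = bytes(range(24))                 # 24-byte sensor reading
    pkt = frame(payload, packet_id=7)
    saved = FULL_HEADER.size - TRIMMED_HEADER.size
    print(len(pkt), "bytes on air,", saved, "header bytes saved per packet")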


B. Variable Size packet in WSNs
In [8] the variable packet size plays a vital role: the paper describes creating the packet size according to the channel condition, i.e. in a dynamic manner, through a scheme called dynamic packet length control. If the channel is noisy or busy (i.e. congested with many packets), the scheme automatically creates small packets; when the channel is clear, or is capable of handling a large packet, it automatically generates a large packet. By using this method the overall throughput and efficiency are increased.
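A minimal sketch of such channel-aware length adaptation is shown below; it is illustrative only, and the thresholds and the link-quality estimate are assumptions rather than the control law of [8]:

def choose_payload_len(prr: float, min_len: int = 16, max_len: int = 96) -> int:
    """Pick a payload length from an estimated packet reception ratio (PRR).
    Good links get long packets (less header overhead); noisy links get short ones."""
    if prr >= 0.95:
        return max_len
    if prr > 0.60:
        # linear interpolation between the two extremes for intermediate link quality
        frac = (prr - 0.60) / (0.95 - 0.60)
        return int(min_len + frac * (max_len - min_len))
    return min_len

if __name__ == "__main__":
    for prr in (0.99, 0.85, 0.55):
        print(prr, "->", choose_payload_len(prr), "bytes")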
C. Framework for optimization of packet size
Various researchers have developed frameworks for generating an optimal packet size in order to reduce energy consumption and to increase throughput and energy efficiency in WSNs. The framework in [9] argues that a longer packet size is more appropriate than a shorter one in some cases, while in other situations a long packet leads to inefficiency; a framework is therefore needed to find an appropriate optimal solution for the wireless sensor network. The paper [9] finds the optimal packet size based on a set of performance metrics consisting of throughput, energy consumption per bit, latency, and packet error rate.
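To make the trade-off concrete, the sketch below evaluates a simple energy-per-successfully-delivered-bit model over candidate payload lengths and picks the minimum. This is a generic textbook-style model, not the exact formulation of [9]; the header length, bit error rate and per-bit transmit energy are assumed example values.

# Energy per useful bit for payload length l with an h-bit header:
# a packet is received only if all (h + l) bits survive, so
#   E_bit(l) = (h + l) * E_tx / (l * (1 - ber) ** (h + l))

H_BITS = 64          # assumed header length in bits
E_TX = 1.0           # energy to transmit one bit (arbitrary units)
BER = 1e-3           # assumed channel bit error rate

def energy_per_useful_bit(l_bits: int) -> float:
    prr = (1.0 - BER) ** (H_BITS + l_bits)          # packet reception probability
    return (H_BITS + l_bits) * E_TX / (l_bits * prr)

if __name__ == "__main__":
    candidates = range(8, 2049, 8)
    best = min(candidates, key=energy_per_useful_bit)
    print("optimal payload:", best, "bits,",
          "energy/useful bit:", round(energy_per_useful_bit(best), 3))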
D. Various packet sizes used in different techniques
In [10] the authors note that if a small packet size appears more energy-efficient in a WSN, it is usually because the overhead of each packet is being ignored. Accounting for the per-packet overhead created in the WSN favours larger packets for this type of resource-constrained tiny sensor node, so the choice depends on the overhead produced by each packet generation. Some suggested packet sizes are as follows.

Fig. 4. Effect of packet size on the ESB [10]

There are also other packet formats designed by researchers for energy efficiency in wireless sensor networks. In [11] different header formats are described, and researchers can use predefined formats when designing their own packets. Designers can build their packet header using the common header format shown in the figure below.

Fig. 5. Packet header format [11]


In [12] the authors describe a dynamic packet length control (DPLC) scheme that provides more efficient channel utilization than [8]. It provides two services, small message aggregation and large message fragmentation, and by using these services it achieves better performance than the previous works. The two services are shown clearly in the figure below.
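The sketch below illustrates the two services in a deliberately simplified form; it shows the general idea only, not the DPLC protocol itself, and the MTU value is an assumed example:

MTU = 64  # assumed maximum payload the radio accepts per frame, in bytes

def aggregate(messages):
    """Pack several small messages into as few frames as possible."""
    frames, current = [], b""
    for m in messages:
        if len(current) + 1 + len(m) > MTU:      # 1 length byte per message
            frames.append(current)
            current = b""
        current += bytes([len(m)]) + m
    if current:
        frames.append(current)
    return frames

def fragment(message):
    """Split one large message into MTU-sized fragments."""
    return [message[i:i + MTU] for i in range(0, len(message), MTU)]

if __name__ == "__main__":
    small = [b"temp=21", b"hum=40", b"batt=87"]
    print(len(aggregate(small)), "frame(s) for", len(small), "small messages")
    print(len(fragment(bytes(200))), "fragment(s) for one 200-byte message")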

Fig. 6. DPLC overview [12]

IV. PROPOSED WORK


The proposed work improves data aggregation, i.e. it decreases the power consumption and increases the lifetime of packets sent between two nodes using the BEAR protocol. The data aggregation scheme is used to improve network functionality with energy competence, and every sensor is used so as to minimize energy consumption. In data aggregation, various algorithms are used to measure performance metrics such as lifetime, data accuracy and latency. To improve the lifetime of the mobile nodes, the centralized and localized algorithms are used together with the BEAR (Balanced Energy Aware Routing) protocol; the lifetime of the dynamic fixed-length packets is measured, and the protocol increases the coverage area to obtain better performance where a large number of nodes are used.
In a wireless sensor network, latency refers to data transmission, data aggregation and routing; it is the time delay between the sink and the destination. This paper sets out to improve the coverage area and lifetime and to decrease the power consumed by the protocol. The source node sends packets to the destination node, keeps a backup, and the network lifetime is generated and maintained by the centralized and localized algorithms. While sending, the source node is in off mode, and after receiving the acknowledgement it moves to on mode; in this way the energy consumption is minimized, the data size and lifetime are increased, and latency is avoided.
V. CONCLUSION
In wireless sensor networks, a major factor deciding performance is the choice of packet size, which determines energy efficiency. Many researchers have proposed packet size formats, and there are also framework approaches for the same purpose. According to the above analysis, some researchers favour a fixed packet size for data transmission in the sensor node, whereas others favour a variable packet size chosen according to the channel capacity. The former approaches are easy to implement and incur less processing overhead, but they are weaker with regard to energy efficiency, overall throughput and performance. The latter approaches are capable with respect to energy efficiency, throughput and performance, but their major drawback is the large overhead they impose at each node. Every approach and framework has its own positive and negative aspects. We therefore intend to develop an optimal approach that combines the advantages of the previous approaches while avoiding their drawbacks.
REFERENCES
1. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam and E. Cayirci, "A Survey on Sensor Networks," IEEE Communications Magazine, pp. 102-114, August 2002.
2. C. Lin, Y. Tseng and T. Lai, "Message-Efficient In-Network Location Management in a Multi-sink Wireless Sensor Network," in Proc. IEEE Int. Conf. on Sensor Networks, Ubiquitous, and Trustworthy Computing, Taichung, Taiwan, 2006, pp. 1-8.
3. J. Ong, Y. Z. You, J. Mills-Beale, E. L. Tan, B. Pereles and K. Ghee, "A wireless, passive embedded sensor for real-time monitoring of water content in civil engineering materials," IEEE Sensors J., vol. 8, pp. 2053-2058, 2008.
4. D.-S. Lee, Y.-D. Lee, W.-Y. Chung and R. Myllyla, "Vital sign monitoring system with life emergency event detection using wireless sensor network," in Proc. IEEE Conf. on Sensors, Daegu, Korea, 2006.
5. J. Hao, J. Brady, B. Guenther, J. Burchett, M. Shankar and S. Feller, "Human tracking with wireless distributed pyroelectric sensors," IEEE Sensors J., vol. 6, pp. 1683-1696, 2006.


6. Low Tang Jung and Azween Abdullah, "Wireless Sensor Networks: Data Packet Size Optimization," Universiti Teknologi PETRONAS, Malaysia, 2012.
7. Y. Sankarasubramaniam, I. F. Akyildiz and S. W. McLaughlin, "Energy Efficiency based Packet Size Optimization in Wireless Sensor Networks," School of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, 2003.
8. Wei Dong, Xue Liu, Chun Chen, Yuan He, Gong Chen, Yunhao Liu and Jiajun Bu, "DPLC: Dynamic Packet Length Control in Wireless Sensor Networks," Zhejiang Key Lab. of Service Robot, College of Computer Science, Zhejiang University, and School of Computer Science, McGill University, 2010.
9. M. C. Vuran and I. F. Akyildiz, "Cross-Layer Packet Size Optimization for Wireless Terrestrial, Underwater, and Underground Sensor Networks," in Proc. INFOCOM 2008, The 27th Conference on Computer Communications, IEEE, pp. 226-230, 13-18 April 2008.
10. Matthew Holland, Tianqi Wang, Bulent Tavli, Alireza Seyedi and Wendi Heinzelman, "Optimizing physical-layer parameters for wireless sensor networks," ACM Trans. Sen. Netw., vol. 7, no. 4, Article 28, 20 pages, February 2011. DOI: 10.1145/1921621.1921622.
11. Rachid Haboub and Mohammed Ouzzif, "Secure routing in WSN," International Journal, vol. 2, 2011.
12. Wei Dong, Chun Chen, Xue Liu, Yuan He, Yunhao Liu, Jiajun Bu and Xianghua Xu, "Dynamic Packet Length Control in Wireless Sensor Networks," IEEE Transactions on Wireless Communications, vol. 13, no. 3, March 2014.


COMPRESSED AIR VEHICLE (CAV)


(A Practical approach with design)
Sai Chakradhar. Kommuri, Student, Department of Mechanical Engineering, QISCET, Ongole.
Email i.d:saichakradharkommuri@gmail.com contact:8143691783


ABSTRACT:
A compressed air car is a car that uses a motor powered by compressed air. The car can be powered solely by air or in combination with another source (as in a hybrid electric vehicle). Compressed air as a source of energy, in different uses in general and as a non-polluting fuel in compressed air vehicles, has attracted scientists and engineers for centuries, and efforts are being made by many developers and manufacturers to master compressed air vehicle technology in all respects for its earliest use by mankind. The present paper gives a brief introduction to the latest developments of the compressed-air vehicle, along with an introduction to the various problems associated with the technology and their solutions. While developing a compressed air vehicle, the control of compressed air parameters such as temperature, energy density, required input power, energy release and emission control has to be mastered for the development of a safe, light and cost-effective compressed air vehicle in the near future.


INTRODUCTION
Compared to batteries, compressed air is favourable because of its high energy density, low toxicity, fast filling at low cost and long service life. Even so, these requirements make it technically challenging to design air engines for all kinds of compressed-air driven vehicles. To meet the growing demand for public transportation that is sustainable and environmentally conscious, people are searching for the ultimate clean car with zero emissions. Many concept vehicles have been proposed that run on everything from solar power to algae, but most of them are expensive and require hard-to-find fuels. The compressed air vehicle project in the form of a light utility vehicle (LUV), i.e. the air car in particular, has been a topic of great interest for the last decade and of many theoretical and experimental investigations.

HISTORY OF CAV:
Compressed air has been used since the 19th century to power mine locomotives and trams in cities such as Paris (via a central, city-level compressed air supply), and was previously the basis of naval torpedo propulsion. During the construction of the Gotthardbahn from 1872 to 1882, pneumatic locomotives were used in the construction of the Gotthard Rail Tunnel and other tunnels of the Gotthardbahn.
In 1903, the Liquid Air Company, located in London, England, manufactured a number of compressed-air and liquefied-air cars. The major problem with these cars, and with all compressed-air cars, is the lack of torque produced by the "engines" and the cost of compressing the air.
COMPRESSED AIR TECHNOLOGY

Mankind has been making use of uncompressed air power for centuries in

different applications, viz. windmills, sailing, balloon cars, hot air balloon flying, hang gliding, etc. The use of compressed air for storing energy is a method that is not only efficient and clean but also economical; it has been used since the 19th century to power mine locomotives and was previously the basis of naval torpedo propulsion. The laws of physics dictate that uncontained gases will fill any given space. The easiest way to see this in action is to inflate a balloon: the elastic skin of the balloon holds the air tightly inside, but the moment you use a pin to create a hole in the balloon's surface, the air expands outward with so much energy that the balloon bursts. Compressing a gas into a small space is a way to store energy, and when the gas expands again that energy is released to do work. That is the basic principle behind what makes an air car go; in some designs the air compressors are built into the vehicle itself.
The principle of compressed-air propulsion is to pressurize a storage tank and then connect it to something very like the reciprocating engine of the vehicle. Instead of mixing fuel with air and burning it in the engine to drive the pistons with hot expanding gases, compressed air vehicles (CAVs) use the expansion of compressed air to drive their pistons, which frees the technology from combustion-related difficulties.
The air is compressed to about 150 times the pressure at which air is put into car or bicycle tyres, so the tanks must be designed to safety standards appropriate for a pressure vessel. The storage tank may be made of steel, aluminium, carbon fibre, Kevlar or other materials, or combinations of the above. The fibre materials are considerably lighter than metals but generally more expensive; metal tanks can withstand a large number of pressure cycles but must be checked periodically for corrosion. One company has stated that it stores air in tanks at 4,500 pounds per square inch (about 30 MPa) holding nearly 3,200 cubic feet (around 90 cubic metres) of air. The tanks may be refilled at a service station equipped with heat exchangers, or in a few hours at home or in parking lots by plugging the vehicle into an on-board compressor. The cost of driving such a car is typically projected to be around Rs. 60 per 100 km, with a complete refill at the "tank station" costing only about Rs. 120.
The compression, storage and release of the air are together termed Compressed Air Technology. This technology has been utilized in different pneumatic systems and has been undergoing several years of research to improve its applications.

WORKING
The air-powered car runs on a pneumatic motor powered by compressed air stored on board the vehicle. Once compressed air is transferred into the on-board storage tank, it is slowly released to power the car's pistons. The engine installed in the compressed air car uses the compressed air stored in the car's tank: the compressed air drives the piston down as the power stroke, and at the end of the power stroke the air is released through the exhaust valves, so the exhaust is only air. The motor converts the air power into mechanical power, which is then transferred to the wheels and becomes the source of power for the car (in the prototype described here, the pistons were connected to the wheels through a HERO HONDA bike's 4-speed transmission). This modified engine was mounted on a rectangular cross-section frame with a body that looked like a curious crossbreed of a car.


1. COMPRESSED AIR TANK
The compressed air tank is one of the most important parts of these cars. These tanks hold 0.05 m³ of air at 30 bar. They are similar to the tanks used to carry liquefied gas and use the same technology developed for containing natural gas. If made of carbon fibre, the tanks do not explode in case of an accident, since there is no metal in them, so the selection of material for the storage tank matters greatly for safety.
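As a rough, illustrative energy estimate based on the figures quoted above (0.05 m³ at 30 bar), the sketch below computes the ideal isothermal work available from such a tank; the isothermal assumption and the ambient pressure are standard simplifications, not measurements from this prototype:

import math

P_ATM = 1.013e5      # ambient pressure, Pa
P_TANK = 30e5        # 30 bar in Pa
V_TANK = 0.05        # tank volume, m^3

# Ideal isothermal work available when expanding from P_TANK to P_ATM:
#   W = P_TANK * V_TANK * ln(P_TANK / P_ATM)
energy_j = P_TANK * V_TANK * math.log(P_TANK / P_ATM)

print(round(energy_j / 1e6, 2), "MJ stored,",
      round(energy_j / 3.6e6, 3), "kWh equivalent")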

PROPERTIES OF AIR CAR ENGINE
The properties of the air car engine are:

1. Approximately 0.05 m³ of compressed air is stored in a mild steel tank in the vehicle.
2. The engine is powered by compressed air stored in the tank at 30 bar; to reduce weight, the tank can be made of carbon fibre.
3. The expansion of this air pushes the pistons and creates movement. The atmospheric temperature is used to reheat the engine and increase the road coverage.
4. The air-conditioning system can make use of the expelled cold air.

2. ENGINE SPECIFICATIONS
Make: HERO HONDA
Model: CD100
Stroke: 4 stroke
No. of cylinders: Single cylinder
Displacement: 110 cc

3. THE CHASSIS
Based on design principles from aeronautics, the engine is mounted on a highly resistant yet light chassis of zinc rods welded together. Using these rods gives a more shock-resistant chassis than a regular one, allows quick assembly and a more secure structure, and helps to reduce manufacturing time.

Only a simple piston-cylinder arrangement with an inlet and an exhaust is needed. A normal two-stroke engine contains several ports and also has a spark plug, which is not required here, and because of the ports it is difficult to get the required output from a two-stroke engine. Therefore several modifications had to be made to a four-stroke engine to suit the purpose.

The modifications comprised of:
- Providing a suitable connector at the cylinder head.
- Removing the spark plug from the cylinder head.
- Modifying the camshaft of the engine.

4. AIR FILTER
The engine works with air taken from the atmosphere. Air is compressed by an off-board compressor or at service stations equipped with a high-pressure compressor. Before compression, the air must be filtered to remove any impurities that could damage the engine: carbon filters are used to eliminate dirt, dust, humidity and other particles which, unfortunately, are found in the air of our cities.


5. REFILLING
Because this form of energy is easy to store, filling stations can be set up just as for petrol and diesel; filling the tank of an air car takes only about 3 to 4 minutes. Alternatively, filling equipment can be set up at home, which is quite cheap.
6. SPECIAL FEATURES
- There is absolutely no fuel required and no combustion in the engine cylinder.
- There is no pollution at all, as only air is taken in and air is ejected out.
- No heat is generated, as there is no combustion.
- No engine cooling system is required (water pump, radiator, or water circulating pipes). It was measured practically that the engine exhaust is cooled air; its temperature was measured as low as 5 degrees Celsius.
- No air-conditioning system is required in the car; if desired, the chilled, clean exhaust air can be partly recirculated in the car to cool it.
- The atmospheric temperature can fall, as the exhaust is clean, chilled air, so the problem of pollution can be permanently eradicated.
- Very little maintenance is required, as there is no soot formation.
- Very low cost materials can be used, as there is no heat involved.
- The weight of the engine can be reduced in the absence of a cooling system and because of the lightweight materials, which will improve the mileage and efficiency.
- In case of leakage or accident, there will not be any fire.
- Engine vibrations were very low and sound pollution was also very low.
- The operating cost is ten times less than that of a gasoline engine.
7. EMISSION OUTPUT
Since the compressed air is filtered to protect the compressor machinery, the air discharged has less suspended dust in it, though there may be some carry-over of lubricants used in the engine. The car works when the gas expands.

ADVANTAGES
Compressed-air vehicles are comparable in many ways to electric vehicles, but use compressed air instead of batteries to store the energy. Their potential advantages over other vehicles include:
- Compressed-air technology reduces the cost of vehicle production by about 20%, because there is no need to build a cooling system, fuel tank, ignition system or silencers.
- The engine can be massively reduced in size.
- The engine runs on cold or warm air, so it can be made of lower-strength, lightweight materials such as aluminium, plastic, low-friction Teflon or a combination of these.
- Low manufacturing and maintenance costs as well as easy maintenance.
- Compressed-air tanks can be disposed of or recycled with less pollution than batteries.
- Compressed-air vehicles are unconstrained by the degradation problems associated with current battery systems.


- The air tank may be refilled more often and in less time than batteries can be recharged, with refilling rates comparable to liquid fuels.
- Lighter vehicles cause less damage to roads, resulting in lower maintenance costs.
- The price of filling air-powered vehicles is significantly cheaper than petrol, diesel or biofuel; if electricity is cheap, then compressing air will also be relatively cheap.


DISADVANTAGES
The principal disadvantage is the indirect use of energy. Energy is used to compress air, which in turn provides the energy to run the motor, and any conversion of energy between forms results in losses. For conventional combustion motor cars, energy is lost when oil is converted into usable fuel, including drilling, refinement, labour, storage and eventually transportation to the end user. For compressed-air cars, energy is lost when electrical energy is converted into compressed air.

- When air expands, as it does in the engine, it cools dramatically (Charles's law) and must be heated to ambient temperature using a heat exchanger similar to the intercooler used for internal combustion engines. The heating is necessary in order to obtain a significant fraction of the theoretical energy output. The heat exchanger can be problematic: while it performs a similar task to an intercooler, the temperature difference between the incoming air and the working gas is smaller, and in heating the stored air the device gets very cold and may ice up in cool, moist climates.

- Refuelling the compressed-air container using a home or low-end conventional air compressor may take as long as 4 hours, while the specialized equipment at service stations may fill the tanks in only about 3 minutes.
- Tanks get very hot when filled rapidly. SCUBA tanks are sometimes immersed in water to cool them down while they are being filled; that would not be possible with tanks in a car, so either the tanks would take a long time to fill or they would have to accept less than a full charge, since heat drives up the pressure.
- Early tests have demonstrated the limited storage capacity of the tanks; the only published test of a vehicle running on compressed air alone was limited to a range of 7.22 km (4 mi).
- A 2005 study demonstrated that cars running on lithium-ion batteries outperform both compressed-air and fuel-cell vehicles more than threefold at the same speeds [10]. MDI has recently claimed that an air car will be able to travel 140 km (87 mi) in urban driving, and to have a range of 80 km (50 mi) with a top speed of 110 km/h (68 mph) on highways, when operating on compressed air alone.

POSSIBLE IMPROVEMENTS
Compressed-air vehicles operate according to a thermodynamic process: air cools down when expanding and heats up when being compressed. As it is not possible in practice to use a theoretically ideal process, losses occur, and improvements may involve reducing these, e.g. by using large heat exchangers in order to use heat from the ambient air and at the same time provide air cooling in the passenger compartment. At the other end, the heat produced during compression can be stored in water systems or in physical or chemical systems and reused later.
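To give a feel for how strong the expansion cooling mentioned above is, the sketch below evaluates the ideal adiabatic temperature drop for a single expansion step; it is a textbook ideal-gas estimate with an assumed inlet pressure, not data from any particular vehicle:

# Ideal adiabatic expansion of air: T2 = T1 * (p2 / p1) ** ((gamma - 1) / gamma)
GAMMA = 1.4          # heat-capacity ratio of air
T1 = 293.0           # air temperature entering the expander, K (about 20 degC)
P1 = 10e5            # assumed pressure at the expander inlet, Pa
P2 = 1.013e5         # exhaust at ambient pressure, Pa

T2 = T1 * (P2 / P1) ** ((GAMMA - 1.0) / GAMMA)
print("ideal exhaust temperature ~", round(T2 - 273.15, 1), "degC")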

It may also be possible to store compressed air at lower pressure using an absorption material within the tank. Absorption materials such as activated carbon, or a metal-organic framework, can be used to store compressed natural gas at 500 psi instead of 4,500 psi, which amounts to a large energy saving.

CONCLUSION
It is important to remember that, while vehicles running only on compressed air might seem like a distant dream, they still hold public interest due to their environmentally friendly nature. Efforts should be made to make them light, safe, cost effective and economical to drive. The storage of compressed air (initially as well as during the journey), with all its benefits such as no heating, high energy density and provisions to make use of the cooling produced during adiabatic expansion when the energy is released, has to be taken care of in a much more controlled manner. Electric-powered cars and bikes already available on the market put up strong competition to the compressed air car, not only in terms of cost but also in their environmentally friendly role. The technology still looks distant, but that has not deterred inventors from working on it.

DESIGN OF AN OPTIMISED LOW POWER FULL


ADDER USING DOUBLE GATED MOSFET AT 45nm
TECHNOLOGY
T. Nithyoosha 1, M. Rajeswara Rao 2
1 P.G. Student in VLSI, Department of E.C.E, SIETK, Tirupati.
2 Assistant Professor, Department of E.C.E, SIETK, Tirupati.
E-Mail: nithyoosha705@gmail.com, raj_vlsi@yahoo.co.in

Abstract - A full adder performs addition in many computers and other processors, and the performance of a digital electronic circuit can be improved by improving the performance of the adder wherever an adder is employed. A 10T double-gate full adder is a good choice for low power design: it achieves a 31.25% reduction in active power and a 95% reduction in leakage current as compared to a 14T double-gate full adder. The leakage current, average power and delay are also calculated for the designed 10T and 14T full adders. Double Gate (DG) MOSFETs using lightly doped ultra-thin layers seem to be a very promising option for the ultimate scaling of CMOS technology; excellent short-channel-effect immunity, high transconductance and an ideal subthreshold factor have been reported by many theoretical and experimental studies on this device.
Index Terms - Double gate MOSFET, full adder, leakage current, active power and delay.

1. INTRODUCTION


In very large scale integration (VLSI) systems, the full adder circuit is used in arithmetic operations for addition, in multipliers and in the Arithmetic Logic Unit (ALU). It is a building block of VLSI applications, digital signal processing, image processing and microprocessors. Most full adder designs consider the performance of the circuit, the number of transistors, the speed, the chip area, threshold loss and full-swing output, and, most importantly, power consumption. In the future, portable devices such as cell phones, laptop computers and tablets will need low power and high speed full adders.


The power consumption of a CMOS circuit is given by the following equation:

Pavg = Pdynamic + Pleak + Pshort-circuit = CL·Vdd·V·fclk + Ileak·Vdd + Isc·Vdd    (1)

Lowering the supply voltage significantly lowers the power consumption of the circuit, and this basic concept is used to improve the performance of the adder circuit.
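The sketch below evaluates the three terms of Eq. (1) for a set of assumed example values (placeholders for illustration, not measurements from this design) and shows how strongly the switching term responds to supply-voltage scaling:

def cmos_power(c_load, vdd, f_clk, i_leak, i_sc, swing=None):
    """Return (switching, leakage, short-circuit) power terms of Eq. (1)."""
    v_swing = vdd if swing is None else swing
    p_dyn = c_load * vdd * v_swing * f_clk
    return p_dyn, i_leak * vdd, i_sc * vdd

if __name__ == "__main__":
    # assumed example values: 5 fF load, 100 MHz, 1 nA leakage, 0.1 uA short-circuit
    for vdd in (1.0, 0.9, 0.8, 0.7):
        p_dyn, p_leak, p_sc = cmos_power(5e-15, vdd, 100e6, 1e-9, 1e-7)
        print(f"Vdd={vdd:.1f} V  Pdyn={p_dyn*1e6:.3f} uW  "
              f"Pleak={p_leak*1e9:.2f} nW  Psc={p_sc*1e9:.1f} nW")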
2. FULL ADDER
A full adder circuit is designed for the addition of binary logic values. The sum signal (SUM) and carry-out signal (Cout) are the outputs of a 1-bit full adder; both are generated from the inputs A, B and Cin according to the following Boolean equations:

SUM = A ⊕ B ⊕ Cin    (2)
Cout = A·B + (A ⊕ B)·Cin    (3)

Fig.1. Block diagram of a basic full adder circuit

2.1 DOUBLE GATE 14T FULL ADDER CIRCUIT
A double gate MOSFET is constructed by connecting two transistors in parallel in such a way that their source and drain terminals are connected together. Double gate MOSFETs can be classified into two types based on the biasing of the gates: when the front and back gates are connected together, the first type is obtained, referred to as a three-terminal device, which can be used as a direct replacement for a single-gate transistor; the second type is obtained with independent gate control.
In this section a single-bit full adder circuit is designed using double gate MOSFETs, with 14 transistors, to improve the performance of the adder in terms of power and leakage.


Fig.2. Schematic of the 4T XOR gate
This cell is constructed using the 4T XOR gate shown in Fig.2; in the 14T full adder cell two 4T XOR gates are used. Fig.3 shows the double gate 14T full adder circuit. The 4T XOR gate is used to increase circuit density: using this XOR gate, a reduction in the size of the full adder is achieved and the overall leakage is also reduced. The output waveform of the 14T full adder is shown in Fig.4.

Fig.3. Double gate 14T full adder

Fig.4. Output waveform of the double gate 14T full adder

2.2 DOUBLE GATE 10T FULL ADDER CIRCUIT
In this section a one-bit full adder circuit is presented using double gate transistors to improve the performance of the adder in terms of power and leakage. Fig.5 shows the double gate 10T full adder circuit. This cell is also built from the 4T XOR gate of Fig.2, which is the essential component of the full adder cell, generates the essential addition operation, and behaves like a single half adder cell. The 4T XOR gate is used to increase circuit density; using it, a reduction in the size of the full adder is achieved and the overall leakage is also reduced. The schematic of the full adder is shown in Fig.5 and the output waveform in Fig.6. The 10T double gate full adder achieves a 31.25% reduction in active power and a 95% reduction in leakage current as compared to the 14T double gate full adder.

Fig.5. Double gate 10T full adder


Fig.6. Output waveform of the double gate 10T full adder


3. DOUBLE GATE MOSFET
The MOS transistor model from COMSOL is used as a template to model the double gate MOSFET. Double gate transistors have been developed to resolve the short-channel-effect problems of conventional MOSFET structures, so such architectures are directly related to the constant reduction of the feature size in microelectronic technology. At present it seems that double gate devices, moving towards non-planar transistor architectures, could be a solution for sub-32 nm nodes. In addition, new design flexibility is allowed when the gates are not interconnected. However, appropriate models must be developed.
Threshold voltage (Vth) modeling of double gate (DG) MOSFETs was performed, for the first time, by considering barrier lowering in short-channel devices. As the gate length of DG MOSFETs scales down, the overlapped charge-sharing length (xh) in the channel, which is related to the barrier lowering, becomes very important. A fitting parameter w was introduced semi-empirically with the fin body width and body doping concentration for higher accuracy. The Vth model predicted the Vth behaviour well with respect to fin body thickness, body doping concentration and gate length.
Recently, bulk FinFETs have been considered very promising candidates for next generation memory cell transistors applicable to dynamic random access memory (DRAM) and flash memory. As the gate length of bulk FinFETs scales down, barrier lowering occurs in spite of a low drain bias (VDS = 0.05 V), because the depleted charge-sharing length (xh) produced by the source and drain overlaps in the short channel. To apply these devices to integrated circuits, it is strongly required to model the threshold voltage (Vth) considering the barrier lowering in the short channel. However, such a Vth model had not been developed, since the xh modeling in short-channel devices is very complicated, depending on device geometry and doping concentration. For Vth modeling of these devices, the double-gate (DG) nature is the key point and needs to be understood well. In this paper we propose a Vth model of DG MOSFETs based on the correction of xh considering barrier lowering, and verify the model by comparison with device simulation in terms of gate length (Lg), fin width (Wfin) and body doping (Nb). Threshold voltages were extracted using gm,max for a given VDS of 0.05 V.

4. SIMULATION AND RESULTS
A single-bit full adder circuit based on the double gate MOSFET technique is proposed. The leakage, power consumption and delay of the proposed circuit are measured at 45 nm technology with supply voltages from 0.7 V to 1.0 V.

4.1 LEAKAGE CURRENT


Leakage current is the current that flows through the protective ground conductor to ground. In the absence of a grounding connection, it is the current that could flow from any conductive part, or from the surface of non-conductive parts, to ground if a conductive path were available (such as a human body). There are always extraneous currents flowing in the safety ground conductor, and tunnelling leakage can also occur across junctions between heavily doped p-type and n-type regions. The basic equation of the leakage current is shown in Eq. (4):

Ileak = Isub + Iox    (4)

where Isub is the sub-threshold leakage current and Iox is the gate-oxide leakage current. In the sub-threshold leakage expression, K1 and n are experimentally derived constants, W is the gate width, Vth is the threshold voltage, n is the slope shape factor and V is the thermal voltage; in the gate-oxide leakage expression, K2 and α are derived experimentally and Tox is the oxide thickness.
Table 1 shows the leakage current of the 10T and 14T full adder cells using double gate MOSFETs at different supply voltages. Fig.7 and Fig.8 show the leakage current waveforms of the double gate 14T and 10T full adder cells at 0.7 V.

Table 1. Leakage current at different supply voltages for the 10T and 14T full adders

Voltage (V) | 10T Full Adder (pA) | 14T Full Adder (pA)
0.7         | 3.646               | 58.2
0.8         | 3.915               | 84.6
0.9         | 5.165               | 144
1.0         | 5.401               | 220
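As a quick illustration of how the comparison between the two cells can be summarised, the sketch below computes the percentage leakage reduction of the 10T cell relative to the 14T cell directly from the Table 1 values; the script is only an illustration of the calculation, the data are taken as given in the table:

# Leakage currents from Table 1, in pA, indexed by supply voltage (V)
LEAKAGE = {
    0.7: (3.646, 58.2),
    0.8: (3.915, 84.6),
    0.9: (5.165, 144.0),
    1.0: (5.401, 220.0),
}

for vdd, (i10t, i14t) in LEAKAGE.items():
    reduction = 100.0 * (i14t - i10t) / i14t
    print(f"Vdd = {vdd:.1f} V: 10T leaks {reduction:.1f}% less than 14T")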



Fig.7. Leakage current waveform of the double gate 14T full adder at 0.7 V

There are two types of leakage current: ac leakage and dc leakage. Dc leakage current usually applies only to end-product equipment, not to power supplies. Ac leakage current is caused by a parallel combination of capacitance and dc resistance between a voltage source (the ac line) and the grounded conductive parts of the equipment; the leakage caused by the dc resistance is usually insignificant compared to the ac impedance of the various parallel capacitances.

Fig.8. Leakage current waveform of the double gate 10T full adder at 0.7 V

4.2 ACTIVE POWER
The power dissipated by the circuit while it is operating is known as the active power; it includes both the static and the dynamic power of the circuit. The basic equation of the active power is shown in Eq. (7):

Pactive = Pdynamic + Pstatic = Pswitching + Pshort-circuit + Pleakage    (7)

where Pswitching = α·Cl·Vdd²·fclk, Pshort-circuit = Isc·Vdd and Pleakage = Ileakage·Vdd, with Cl the load capacitance, fclk the clock frequency, α the switching activity, Isc the short-circuit current, Ileakage the leakage current and Vdd the supply voltage.
Table 2. Active power at different supply voltages for the 10T and 14T full adders

Voltage (V) | 10T Full Adder (µW) | 14T Full Adder (µW)
0.7         | 9.34                | 13.7
0.8         | 15.21               | 23.56
0.9         | 21.95               | 39.69
1.0         | 29.61               | 61.52

Table 2 shows the active power of the 10T and 14T full adder cells using double gate MOSFETs at different supply voltages. Fig.9 and Fig.10 show the active power waveforms of the double gate 14T and 10T full adder cells at 0.7 V.



Fig.9. Active power waveform of the double gate 14T full adder at 0.7 V

Fig.10. Active power waveform of the double gate 10T full adder at 0.7 V

4.3 DELAY
The propagation delay is the time required by a digital signal to travel from an input of the circuit to the output. It is inversely proportional to the speed of the architecture and hence is an important performance parameter. The basic equation of the delay in the presence of a sleep transistor is shown in Eq. (9). The propagation delay of an integrated circuit (IC) logic gate may differ for each of its inputs; if all other factors are held constant, the average propagation delay of a logic gate IC increases as the complexity of the internal circuitry increases. Some IC technologies have inherently longer tpd values than others and are considered "slower." Propagation delay is important because it has a direct effect on the speed at which a digital device, such as a computer, can operate; this is true of memory chips as well as microprocessors.

Fig.11. Delay comparison graph of the 10T and 14T full adder circuits

5. CONCLUSION
The analysis was carried out by analyzing both the 10T and 14T full adders individually and comparing them on the basis of the calculated active power, leakage current and delay while varying different parameters. The outcomes of the simulation show the 10T full adder to be the better option, with improved performance over the 14T structure. Compared to the 14T double gate full adder, the active power of the 10T full adder is reduced from 13.7 µW to 9.34 µW at 0.7 V, and its leakage current is reduced from 58.2 pA to 3.646 pA at 0.7 V. Compared with the 10T double gate full adder, the delay of the 14T double gate full adder is reduced from 171.3 ps to 151 ps.

6. REFERENCES
[1] X.-G. Sun, Z.-G. Mao and F.-C. Lai, "A 64 bit parallel CMOS adder for high performance processors," in Proc. IEEE Asia-Pacific Conf. on ASIC, 2002, pp. 205-208.
[2] Vahid Moalemi and Ali Afzali-Kusha, "Subthreshold 1-bit full adder cells in sub-100 nm technologies," in Proc. IEEE Computer Society Annual Symposium on VLSI (ISVLSI-07), Porto Alegre, Brazil, March 9-11, 2007 (ISBN 0-7695-2896-1).



[3] Lu Junming, Shu Yan, Lin Zhenghui and Wang Ling, "A novel 10-transistor low-power high-speed full adder cell," in Proc. 6th Int. Conf. on Solid-State and Integrated-Circuit Technology, vol. 2, pp. 1155-1158, 2001.
[4] Dan Wang, Maofeng Yang, Wu Cheng, Xuguang Guan, Zhangming Zhu and Yintang Yang, "Novel low power full adder cells in 180nm CMOS technology," in Proc. 4th IEEE Conf. on Industrial Electronics and Applications (ICIEA 2009), pp. 430-433.
[5] Adarsh Kumar Agrawal, Shivshankar Mishra and R. K. Nagaria, "Proposing a novel low-power high-speed mixed GDI full adder topology," in Proc. IEEE Int. Conf. on Power, Control and Embedded Systems (ICPCES), 28 Nov.-1 Dec. 2010.
[6] Shipra Mishra, Shelendra Singh Tomar and Shyam Akashe, "Design low power 10T full adder using process and circuit techniques," in Proc. 7th IEEE Int. Conf. on Intelligent Systems and Control (ISCO), Coimbatore, 2013, pp. 325-328.
[7] Mohammad Hossein Moaiyeri, Reza Faghih Mirzaee and Keivan Navi, "Two new low-power and high-performance full adders," Journal of Computers, vol. 4, no. 2, February 2009.
[8] Mariano Aguirre-Hernandez and Monico Linares-Aranda, "CMOS full adders for energy-efficient arithmetic applications," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 19, no. 4, April 2011.
[9] G. Shyam Kishore, "A novel full adder with high speed low area," in Proc. 2nd National Conference on Information and Communication Technology (NCICT), proceedings published in International Journal of Computer Applications (IJCA), 2011.
[10] D. J. Frank, S. E. Laux and M. V. Fischetti, "Monte Carlo simulation of a 30 nm dual-gate MOSFET: How short can Si go?," in IEDM Tech. Dig., 1992, pp. 553-556.
[11] C. Fiegna et al., "A new scaling methodology for the 0.1-0.025 um MOSFET," in VLSI Symp. Tech. Dig., 1993, pp. 33-34.
[12] K. Suzuki et al., "Scaling theory for double-gate SOI MOSFETs," IEEE Trans. Electron Devices, vol. 40, pp. 2326-2329, 1993.
[13] H. S. Wong, D. J. Frank, Y. Taur and J. M. C. Stork, "Design and performance considerations for sub-0.1 um double-gate SOI MOSFETs," in IEDM Tech. Dig., 1994, pp. 747-750.
[14] B. Majkusiak, T. Janik and J. Walczak, "Semiconductor thickness effects in the double-gate SOI MOSFET," IEEE Trans. Electron Devices, vol. 45, pp. 1127-1134, May 1998.
[15] L. T. Su, M. J. Sherony, H. Hu, J. E. Chung and D. A. Antoniadis, "Optimization of series resistance in sub-0.2 um SOI MOSFETs," in IEDM Tech. Dig., 1993, pp. 723-726.
[16] D. Hisamoto et al., "Metallized ultra-shallow-junction device technology for sub-0.1 um gate MOSFETs," IEEE Trans. Electron Devices, vol. 41, pp. 745-750, May 1994.
[17] J. Hwang and G. Pollack, "Novel polysilicon/TiN stacked-gate structure for fully-depleted SOI/CMOS," in IEDM Tech. Dig., 1992, pp. 345-348.
[18] D. Hisamoto et al., "A folded-channel MOSFET for deep-sub-tenth micron era," in IEDM Tech. Dig., 1998, pp. 1032-1034.
[19] D. Hisamoto, T. Kaga and E. Takeda, "Impact of the vertical SOI Delta structure on planar device technology," IEEE Trans. Electron Devices, vol. 38, pp. 1419-1424, 1991.
[20] S. Kimura, H. Noda, D. Hisamoto and E. Takeda, "A 0.1 um-gate elevated source and drain MOSFET fabricated by phase-shifted lithography," in IEDM Tech. Dig., 1991, pp. 950-952.
[21] T. Ushiki et al., "Reliable tantalum gate fully-depleted-SOI MOSFETs with 0.15 um gate length by low-temperature processing below 500 C," in IEDM Tech. Dig., 1996, pp. 117-120.
[22] T.-J. King, J. P. McVittie, K. C. Saraswat and J. R. Pfiester, "Electrical properties of heavily doped polycrystalline silicon-germanium films," IEEE Trans. Electron Devices, vol. 41, pp. 228-232, Feb. 1994.


AN MTCMOS TECHNIQUE FOR OPTIMIZING LOW


POWER FLIP-FLOP DESIGNS
S. ROJA 1, C. VIJAYA BHASKAR 2
1 P.G. Student in VLSI, Department of E.C.E, SIETK, Puttur.
2 Assistant Professor, Department of E.C.E, SIETK, Puttur.
E-Mail: roja.samala9@gmail.com, vijayas4u@gmail.com

Abstract
In any integrated circuit, power consumption is an important parameter for the semiconductor industry. Normally the flip-flops and the clock distribution network consume a large amount of power, as they make the maximum number of internal transitions. In this paper, various flip-flops are designed for reducing flip-flop power as well as clock power. Among these techniques, the clocked pair shared flip-flop (CPSFF) consumes the least power compared with the conditional data mapping flip-flop (CDMFF) and the conditional discharge flip-flop (CDFF). In modern high performance

integrated circuits, more than 40% of the total active-mode energy can be dissipated by leakage currents. Multi-Threshold Voltage CMOS (MTCMOS) is one of the most widely accepted circuit techniques for reducing the leakage current: a sleep transistor is used to achieve high performance by reducing the leakage current and increasing the circuit's speed. The proposed MT-CPSFF circuit consumes 66.3% less power than the conventional CPSFF.
Index Terms - Low power flip-flops, MT-CMOS.

I. INTRODUCTION
Power consumption is the major problem in achieving high performance and is listed as one of the top three challenges in the electronics industry. The clock system, which consists of the clock distribution network together with the flip-flops and latches, is one of the most power-consuming components in a VLSI system, accounting for 30% to 60% of the total power dissipation. As a result, reducing the power consumed by flip-flops has a deep impact on the total power consumed, since a large portion of the on-chip power is consumed by the clock circuits.
Power consumption is determined by several factors, including the frequency f, the supply voltage V, the data activity α, the capacitance C, leakage and the short-circuit current:

P = Pdynamic + Pshort-circuit + Pleakage

In the above equation, the dynamic power Pdynamic, also called the switching power, is Pdynamic = α·C·V²·f. Pshort-circuit = Ishort-circuit·Vdd is the short-circuit power, caused by the finite rise and fall times of the input signals, which result in both the pull-up and pull-down networks being ON for a short while. Pleakage = Ileakage·Vdd is the leakage power. With the supply voltage scaling down, the threshold voltage also decreases to maintain performance; however, this leads to exponential growth of the sub-threshold leakage current, which is now the dominant leakage component.

A flip-flop is an electronic circuit that stores the logical state of one or more data input signals in response to a clock pulse. Flip-flops are often used in computational circuits to operate in selected sequences during recurring clock intervals, receiving and maintaining data for a limited time period sufficient for other circuits within a system to further process the data. At each rising or falling edge of the clock signal, the data stored in a set of flip-flops is readily available so that it can be applied as inputs to other combinational or sequential circuitry. Flip-flops that store data on both the leading and trailing edges of a clock pulse are referred to as double-edge-triggered flip-flops; otherwise they are called single-edge-triggered flip-flops.
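As a purely behavioural illustration of the double-edge-triggered idea (not a model of any of the transistor-level designs discussed in this paper), the following sketch updates its stored value on both clock edges:

class DoubleEdgeDFF:
    """Behavioural sketch of a double-edge-triggered D flip-flop:
    the stored value is updated on both rising and falling clock edges."""
    def __init__(self):
        self.q = 0
        self._last_clk = 0

    def tick(self, clk: int, d: int) -> int:
        if clk != self._last_clk:      # any edge, rising or falling
            self.q = d
        self._last_clk = clk
        return self.q

if __name__ == "__main__":
    ff = DoubleEdgeDFF()
    stimulus = [(0, 1), (1, 1), (1, 0), (0, 0), (1, 1)]   # (clk, d) pairs
    print([ff.tick(clk, d) for clk, d in stimulus])       # q follows d on each edge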
In digital CMOS circuits there are three sources of power dissipation: the first is due to signal transitions, the second comes from the short-circuit current that flows directly from the supply to the ground terminal, and the last is due to leakage currents. As technology scales down, the short-circuit power becomes comparable to the dynamic power dissipation, and the leakage power also becomes highly significant. High leakage current is becoming a significant contributor to the power dissipation of CMOS circuits as the threshold voltage, channel length and gate oxide thickness are reduced. Consequently, the identification and modeling of the different leakage components is very important for the estimation and reduction of leakage power, especially for high-speed and low-power applications. Multi-threshold voltage CMOS (MTCMOS) and voltage scaling are two of the low power techniques used to reduce power.
The rest of this paper is organized as follows: Section II describes the sources of power dissipation, Section III surveys various low power flip-flops and the proposed MTCMOS technique, Section IV presents the MT-CPSFF, Section V presents the simulation results and Section VI concludes the paper.



II. SOURCES OF POWER DISSIPATION
Power dissipation in digital CMOS circuits is caused by the sources discussed below.
Low Power Design Space:
There are three degrees of freedom inherent in the low-power design space: voltage, physical capacitance and data activity. Optimizing for power entails an attempt to reduce one or more of these factors.
1) Voltage: Voltage reduction offers the most effective means of minimizing power consumption, with the useful range of Vdd extending down to a minimum of about two to three times Vt. One approach to reducing the supply voltage without loss in throughput is to modify the Vt of the devices: reducing Vt allows the supply voltage to be scaled down without loss in speed.
2) Switching Activity: If there is no switching in a circuit, then no dynamic power is consumed. There are two components to switching activity: fclk, which specifies the average periodicity of data arrivals, and E(sw), which determines how many transitions each arrival will generate.
3) Physical Capacitance: Minimizing capacitance offers another technique for minimizing power consumption. In order to consider this possibility we must first understand what factors contribute to the physical capacitance of a circuit; power dissipation depends on the physical capacitances seen by the individual gates in the circuit.
The leakage current, which is primarily determined by the fabrication technology, consists of the following components:
1) the reverse-bias current in the parasitic diodes formed between the source and drain diffusions and the bulk region of a MOS transistor;
2) the sub-threshold current that arises from the inversion charge that exists at gate voltages below the threshold voltage;
3) the standby current, which is the DC current drawn continuously from Vdd to ground;
4) the short-circuit current, which is due to the DC path between the supply rails during output transitions;
5) the capacitive current, which flows to charge and discharge capacitive loads during logic changes.

This paper surveys various low power flip-flops is described


in Section II. Section III gives the proposed MTCMOS
technique. Section IV represents MT-CPSFF and section V
presents simulation results. Section VI concludes this paper.

FIG 3.1: CDFF


The schematic diagram of the conditional discharge flip-flop (CDFF) is shown in Fig. 3.1. It uses a pulse generator that is suitable for double-edge sampling. The flip-flop is made up of two stages. Stage one is responsible for capturing the LOW-to-HIGH transition. If the input D is HIGH in the sampling window, the internal node X is discharged, assuming (q, qb) were initially (LOW, HIGH) so that the discharge path is enabled. As a result, the output node is charged to HIGH through P2 in the second stage. Stage two captures the HIGH-to-LOW input transition. If the input D was LOW during the sampling period, then the first stage is disabled and node X retains its precharged state, whereas node Y will be HIGH and the discharge path in the second stage will be enabled in the sampling period, allowing the output node to discharge and correctly capture the input data. The CDFF uses 13 clocked transistors, resulting in a reduction of the power consumption.
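As a behavioral illustration only (not the transistor-level CDFF above), a double-edge-triggered D flip-flop can be sketched in a few lines of Python; the class and signal names are hypothetical:

    # Behavioral sketch of a double-edge-triggered D flip-flop: the output is
    # updated on every clock transition (rising or falling edge).
    class DualEdgeDFF:
        def __init__(self):
            self.prev_clk = 0
            self.q = 0

        def tick(self, clk, d):
            # Capture D whenever the clock changes level, i.e. on both edges.
            if clk != self.prev_clk:
                self.q = d
            self.prev_clk = clk
            return self.q

    ff = DualEdgeDFF()
    clock = [0, 1, 0, 1, 0]
    data  = [1, 1, 0, 1, 0]
    print([ff.tick(c, d) for c, d in zip(clock, data)])  # Q updates on each edge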
2) Conditional data mapping flip-flop (CDMFF):
The conditional data mapping flip-flop (CDMFF) uses only seven clocked transistors, resulting in about a 50% reduction in the number of clocked transistors; hence the CDMFF uses less power than the CDFF. This shows the effectiveness of reducing the number of clocked transistors to achieve low power. The conditional data mapping methodology exploits a property of the flip-flop by providing it with a stage that maps its inputs to (0, 0) if a redundant event is predicted, so that the outputs remain unchanged when the clock signal is triggered.



A conditional data mapper is deployed in the circuit to map
the inputs by using outputs as control signals.
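The mapping idea can be illustrated with a small, hypothetical Python sketch (the (0, 0) mapping follows the description above; the set/reset-style encoding of the non-redundant case is an assumption made for illustration):

    # Illustration of conditional data mapping: if the incoming data equals the
    # stored output (a "redundant event"), the mapper forces the internal inputs
    # to (0, 0) so the clock edge causes no internal transitions.
    def conditional_data_map(d, q):
        """Return hypothetical internal inputs for the next clock edge."""
        if d == q:           # redundant event predicted: output would not change
            return (0, 0)    # mapped inputs -> no internal switching on the edge
        return (d, 1 - d)    # otherwise drive the stage that flips the output

    for d, q in [(0, 0), (1, 0), (1, 1), (0, 1)]:
        print(f"D={d}, Q={q} -> mapped inputs {conditional_data_map(d, q)}")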


FIG 3.2: CDMFF


3) Clock pair shared flip-flop (CPSFF):
This low-power flip-flop is an improved version of the conditional data mapping flip-flop (CDMFF). It has a total of 19 transistors, including 4 clocked transistors, as shown in Fig. 3.3. Transistors N3 and N4 form the clocked pair, which is shared by the first and second stages. The floating problem is avoided by the transistor P1 (always ON), which is used to charge the internal node X. This flip-flop operates when clk and clkdb are at logic 1. When D=1, Q=0, Qb_kpr=1, N5=OFF and N1=ON, the ground voltage passes through N3, N4 and N1 and switches on P2; that is, the Q output is pulled up through P2. When D=0, Q=1, Qb_kpr=0, N5=ON, N1=OFF, Y=1 and N2=ON, the Q output is pulled down to zero through N2, N3 and N4.
The flip-flop output depends on the previous outputs Q and Qb_kpr in addition to the clock and data inputs. So the initial condition should be such that when D=1 the previous state of Q is 0 and Qb_kpr is 1; similarly, when D=0 the previous state of Q is 1 and Qb_kpr is 0. Whenever D=1 the transistor N5 is idle, and whenever D=0 the input transmission gate is idle.
In high-frequency operation the input transmission gate and N5 will acquire incorrect initial conditions due to the feedback from the output. Noise coupling occurs at the Q output due to continuous switching at high frequency, and glitches appear at the Q output; these propagate to the next stage, which makes the system more vulnerable to noise. In order to avoid the above drawbacks and reduce the power consumption, the proposed flip-flop makes the output independent of the previous state, i.e., it requires no initial conditions and removes the noise-coupling transistors. In addition, double-edge triggering can easily be applied to the proposed flip-flop for further power reduction, so that it consumes less power than the other flip-flops.

FIG 3.3: CPSFF


IV. MTCMOS TECHNIQUE
Multi-threshold voltage CMOS (MTCMOS) is one of the most widely accepted circuit techniques for reducing leakage current. For efficient power management in MTCMOS technology, the circuit works in two operational modes, "active" and "sleep". A conventional circuit works on a single threshold voltage (Vt), while a circuit employing the MTCMOS technique works with two different threshold voltages, low Vt and high Vt. The circuit comprises two different sets of transistors: those with high Vt are termed "sleep" transistors, while the low-Vt transistors form the logic circuit. The sleep transistors are used to achieve high performance by reducing the leakage current, while the low-Vt transistors enhance the circuit's speed.
The power gating technique using MTCMOS is shown in Fig. 4.1. The diagram consists of two sleep transistors, S1 and S2, with higher Vt. The logic circuit between S1 and S2 is not directly connected to the real supply lines Vdd and Gnd; instead it is connected to the virtual power supply lines Vddv and Gndv, and uses low Vt. Both sleep transistors are given complementary inputs S and SBAR. The circuit operates in two modes: active mode and standby mode.
In active mode, S=0 and SBAR=1, so that S1 and S2 are ON and the virtual supply lines Vddv and Gndv work as real supply lines; therefore the logic circuit operates normally and

at a higher speed. In sleep mode, S=1 and SBAR=0, so that S1 and S2 are OFF; this causes the virtual power supply lines to float, and the large leakage current present in the circuit is suppressed by the sleep transistors S1 and S2, resulting in lower leakage current and thus reduced power consumption.
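The reason a high-Vt sleep transistor suppresses standby leakage so strongly is the exponential dependence of the subthreshold current on the threshold voltage; in the standard first-order model (I_0, the slope factor n and the thermal voltage V_T are generic symbols, not defined in this paper):

    I_{sub} \approx I_{0}\, e^{(V_{GS}-V_{t})/(n V_{T})} \left( 1 - e^{-V_{DS}/V_{T}} \right)

so raising Vt by a few hundred millivolts cuts the subthreshold leakage by orders of magnitude, at the cost of slower switching, which is why the high-Vt devices are used only as sleep switches.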

FIG 4.1: MTCMOS TECHNIQUE

V. PROPOSED LOW POWER FLIP-FLOP DESIGNS USING MT-CMOS
To reduce standby leakage power consumption and to ensure efficient implementation of sequential elements, we propose a clocked pair shared flip-flop using the MTCMOS technique. The circuit is designed keeping the number of clocked transistors the same as in the original circuit. The schematic of the MT-CPSFF is shown in Fig. 5.1.
In this proposed clocked pair shared flip-flop, a high threshold voltage NMOS transistor is provided with a sleep signal S, which is high in the active mode and low during the standby mode. Here, the first and second stages share the same clocked pair (M5 and M6). Furthermore, the pMOS M1 is always turned on and is connected to the power supply Vdd, thus charging the internal node X all the time. This reduces the floating of node X and enhances the noise robustness.
The flip-flop works when both clk and clkdb are at logic 1. The pseudo-nMOS and conditional mapping techniques are combined in this scheme. The nMOS M3 is controlled by a feedback signal. For input D=1 and S=1, Q will be high, switching ON transistor M8 and turning OFF M3, thus preventing redundant switching activity and the flow of short-circuit current at node X. When D transits to 1 the output Q is pulled up by pMOS M2, whereas M4 is used to pull Q down when D=0 and Y=1 at the arrival of the clock pulse. When the input D transits from 0 to 1, a short circuit occurs only once even though M1 is always ON, because the feedback signal turns off M3 after a two-gate delay and disconnects the discharge path. There will be no short circuit even if the input D stays high, as M3 disconnects the discharge path. The output of the flip-flop depends upon the state previously acquired by Q and QB along with the clock and data signal inputs provided.

T-FLIP-FLOP USING THE MTCMOS TECHNIQUE
The diagram below shows an extension of the MT-CPSFF: a T flip-flop, which is used to reduce the switching activity and hence the power consumption. This is another proposed MTCMOS-based design; its schematic is shown in Fig. 5.2.

Fig. 5.1: Schematic of the proposed CPSFF using MTCMOS

Fig. 5.2: Schematic of the proposed T-CPSFF using MTCMOS


VI. SIMULATION RESULTS
The simulation results for all the existing and proposed flip-flops were obtained in a 90 nm CMOS technology at room temperature using Tanner EDA Tools 13.0 over various supply voltages and frequencies. Table I shows the power comparison results for the CDFF, CDMFF, CPSFF and the proposed MT-CPSFF at 1.5 V and 3 V supply voltage (Vdd), over 750 MHz and 500 MHz clock frequencies. Table I shows that the CDFF has 38.39% less power consumption than the conventional DEFF at a 750 MHz clock frequency and 1.5 Vdd. Similarly, at 500 MHz and 3 Vdd the CDFF consumes 74.02% less power than the conventional DEFF. With the reduction in the number of clocked transistors in the CDMFF as compared to the CDFF, the power consumption of the CDMFF at 750 MHz and 1.5 Vdd is 81.02% less than that of the CDFF. Although the CDMFF reduces the power consumption considerably, it is susceptible to redundant clocking in addition to a floating node. The CPSFF overcomes this drawback by reducing the number of clocking transistors. At 500 MHz and 3 Vdd the CPSFF consumes 53.80% less power than the CDMFF; similarly, at 750 MHz and 1.5 Vdd the CPSFF consumes 9.74% less power than the CDMFF. The comparison shows that reducing the number of clocked transistors has a major effect on reducing the total power consumption of the design.

Fig. 6.1: Output waveform for the D flip-flop

The proposed MT-CPSFF, which makes use of the MTCMOS technique, shows higher performance as well as a smaller standby leakage current. The low-Vt MOSFETs enhance the speed, while the higher-Vt MOSFETs reduce the standby leakage current. Table I shows that at 500 MHz and 3 Vdd the proposed circuit consumes 66.3% less power than the conventional CPSFF. Similarly, at 750 MHz and 1.5 Vdd the MT-CPSFF consumes 15.2% less power than the conventional CPSFF.

Fig. 6.2: Output waveform for the T flip-flop


TABLE I: Power comparison of the flip-flop designs at supply voltages (Vdd) of 1.5 V and 3 V, for 500 MHz and 750 MHz clock frequencies.

Design         Area   Switching     Power consumption     Power consumption
                      transistors   at 500 MHz            at 750 MHz
                                    1.5 V     3 V         1.5 V     3 V
CDFF           26     13            9.7       54.8        14.6      82.0
CDMFF          20     -             1.9       49.6        2.7       53.7
CPSFF          17     -             1.3       22.9        2.5       32.8
MT-CPSFF (D)   21     -             1.2       7.7         2.12      11.6
MT-CPSFF (T)   27     -             1.1       7.5         2.00      10.5
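As a quick sanity check, the relative savings quoted in the text can be recomputed from the Table I entries tabulated above; a small Python script (illustrative only):

    # Recompute the relative power savings from the Table I values above.
    # A saving of X% means the second design uses X% less power than the first.
    def saving(reference, proposed):
        return 100.0 * (1.0 - proposed / reference)

    # 500 MHz, 3 V: CPSFF (22.9) vs CDMFF (49.6)   -> ~53.8% (text: 53.80%)
    print(round(saving(49.6, 22.9), 1))
    # 500 MHz, 3 V: MT-CPSFF (7.7) vs CPSFF (22.9) -> ~66.4% (text: 66.3%)
    print(round(saving(22.9, 7.7), 1))
    # 750 MHz, 1.5 V: MT-CPSFF (2.12) vs CPSFF (2.5) -> ~15.2% (text: 15.2%)
    print(round(saving(2.5, 2.12), 1))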

VII. CONCLUSION
In this paper, a new design for D and T flip-flops is introduced to reduce the internal switching activity of nodes and the standby leakage power; along with this, a variety of design techniques for low-power clocking systems are reviewed. The proposed flip-flop reduces the local clock transistor count and the power consumption as well. The proposed MT-CPSFF outperforms the previously existing CDFF, CDMFF and CPSFF in terms of power by approximately 20% to 85%, while giving a good output response. Furthermore, several low-power techniques, including low swing and double-edge clocking, can be explored for incorporation into the new flip-flop to build complete systems.

VIII. REFERENCES
[1] M. Pedram, "Power minimization in IC design: principles and applications," ACM Transactions on Design Automation of Electronic Systems, vol. 1, pp. 3-56, Jan. 1996.
[2] S. M. Kang and Y. Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, 3rd ed., TMH, 2003.
[3] A. Keshavarzi, K. Roy, and C. F. Hawkins, "Intrinsic leakage in low power deep submicron CMOS ICs," in Proc. Int. Test Conf., pp. 146-155, 1997.
[4] Z. Peiyi, M. Jason, K. Weidong, W. Nan, and W. Zhongfeng, "Design of sequential elements for low power clocking system," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, July 2010.
[5] N. Weste and D. Harris, CMOS VLSI Design. Reading, MA: Addison Wesley, 2004.
[6] M. Pedram, Q. Wu, and X. Wu, "A new design for double edge triggered flip-flops," Jan. 2002.
[7] P. Zhao, T. K. Darwish, and M. A. Bayoumi, "High-performance and low-power conditional discharge flip-flop," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 12, no. 5, May 2004.
[8] P. Zhao, T. K. Darwish, and M. A. Bayoumi, "High-performance and low-power conditional discharge flip-flop," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 12, no. 5, May 2004.
[9] T. Kavitha and V. Sumalatha, "A new reduced clock power flip-flop for future SOC applications," International Journal of Computer Trends and Technology, vol. 3, issue 4, 2012.
[10] C. K. Teh, M. Hamada, T. Fujita, H. Hara, N. Ikumi, and Y. Oowaki, "Conditional data mapping flip-flops for low-power and high-performance systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 14, no. 12, Dec. 2006.
[11] F. Mohammad, L. A. Abhilas, and P. Srinivas, "A new parallel counter architecture with reduced transistor count for power and area optimization," in Proc. Int. Conf. on Electrical and Electronics Engineering, Sept. 2012.
[12] Bhuvana S and Sangeetha R, "A survey on sequential elements for low power clocking system," Journal of Computer Applications, ISSN: 0974-1925, vol. 5, issue EICA2012-3, Feb. 10, 2012.
[13] B. Kousalya, "Low power sequential elements for multimedia and wireless communication applications," July 2012.
[14] S. Mutoh et al., "1-V power supply high-speed digital circuit technology with multithreshold-voltage CMOS," IEEE J. Solid-State Circuits, vol. 30, pp. 847-854, Aug. 1995.
[15] S. Hemantha, A. Dhawan, and H. Kar, "Multi-threshold CMOS design for low power digital circuits," in TENCON 2008 - IEEE Region 10 Conference, pp. 1-5, 2008.
[16] Q. Zhou, X. Zhao, Y. Cai, and X. Hong, "An MTCMOS technology for low-power physical design," Integration, the VLSI Journal, 2008.


Security And Privacy Enhancement Using Jammer In Downlink Cellular Networks

S. Shafil Mohammad (1), T. Prasad (2)
(1) P.G. Student in VLSI, Department of E.C.E, SIETK, Tirupati.
(2) Assistant Professor, Department of E.C.E, SIETK, Tirupati.
E-Mail: shafil.mohammad71@gmail.com, prasadgmt@gmail.com
Abstract: In this paper, a novel transmission scheme is proposed to improve the security of the downlink cellular network. The confidential message intended for one of K mobile users (MUs) should be securely kept from the undesired recipients. In this work, the K-1 remaining users are regarded as potential eavesdroppers and are called internal eavesdroppers. For the security enhancement, we propose the adaptation of a single cooperative jammer (CJ) to increase the ambiguity at all malicious users by distracting them with artificial interference. With the help of the CJ, we derive the optimal joint transmission scheme through a beamforming solution to maximize the secrecy rate of the intended user. Numerical results show the performance improvement of the proposed scheme.
Index Terms: Physical layer security, downlink cellular network, cooperative jammer, beamforming.

1. INTRODUCTION

Wireless communication is vulnerable to eavesdropping due to the broadcast nature of the wireless medium. In a broadcast channel, where one transmitter disperses information to multiple users, the importance of security is even more pronounced. Wyner introduced the information-theoretic view of physical-layer security in [1]. From this perspective, effective transmission with multiple antennas and a cooperative jammer (CJ) to improve security was studied in [2]-[5].
Recently, with the ever-increasing demand for broadcast services (e.g., downlink communication in cellular networks and wireless local area networks (WLANs)), concern about broadcast channels with confidential messages has been growing. Especially in a multiuser system, since all users within the communication range can overhear the wireless signal, there is a possibility that the licensed users might themselves be malicious eavesdroppers. Thus, we should counteract two sorts of eavesdroppers: internal and external. If they are from the legitimate user set of the transmitter, we call them internal eavesdroppers, and otherwise external eavesdroppers. The authors of [6] and [7] investigated secure transmission against a single external eavesdropper. For coping with internal eavesdroppers, [8] proposed a transmission technique for the base station (BS) via semidefinite programming (SDP).
In this paper, we aim at designing the optimal transmission in a downlink cellular network against multiple internal eavesdroppers. In our scenario, the BS desires to send a private message to one of K mobile users (MUs) while keeping it private from the K-1 remaining MUs, i.e., the internal eavesdroppers. For the secrecy enhancement of the intended user, we design a novel secure transmission scheme which jointly optimizes both the BS and the CJ for the multiple-antenna link. This joint cooperation is robust in cases where the BS is in severe fading conditions or where an eavesdropper is close to the BS or the MUs. Additionally, the employment of the CJ can lessen the complexity burden compared to a beamforming solution solely at the BS with no help from a CJ, such as [8]. Unlike studies that enlist the aid of optimization software tools for the transmission design in the multiuser setting [8], [10]-[12], our strategy is grounded on an explicit parametrization for an arbitrary number of users and antennas within the constructed programming problem.
2. SYSTEM MODEL
The system model under consideration is illustrated in Fig.1.
The cellular network consists of a base station (BS), K mobile
users (MUs) and a single cooperative jammer (CJ).
All users can have two types of data traffic, private and open, but in this paper we suppose that each user has only a private message transmitted from the BS.

Fig. 1: System model

The BS wants to securely transmit the private message to each user over the same spectral band simultaneously. However, such public dispersion unavoidably brings information leakage to the unintended users over the associated cross-channels. We assume that the remaining users, other than the desired recipient, are considered potential eavesdroppers, and we refer to them as internal eavesdroppers. In this work, a CJ creating artificial noise toward the multiple eavesdroppers is applied to improve the secrecy of a desired point-to-point link. In doing so, we can strengthen the security level in channel conditions vulnerable to



eavesdropping, compared to a system that only controls the signal direction by beamforming at the BS. Also, with the help of the CJ, the BS can lighten the complexity cost for the secrecy enhancement. To accomplish this mission, we propose a joint transmission technique for both the BS and the CJ under a transmit power constraint. We assume that the BS and the CJ possess Nb and Nc transmit antennas, respectively, while all MUs are single-antenna nodes. The transmission is composed of scalar coding followed by beamforming, and all propagation channels are assumed to be frequency-flat.
With these assumptions, the received signal at MUk can be modeled as
yk = hk^H wb sb + gk^H wc sc + nk,
where sb and sc are the symbols transmitted by the BS and the CJ, hk and gk are the (complex-valued) Nb × 1 channel vector between the BS and MUk and the Nc × 1 channel vector between the CJ and MUk, respectively, wb and wc are the transmit beamforming vectors used by the BS and the CJ, and the variable nk models complex Gaussian noise with zero mean and variance σk^2. We assume that the BS and the CJ have full channel state information (CSI). The objective of the transmit beamformers wb and wc is to convey the symbol sb in a secure way.
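One standard way to write the objective of maximizing the secrecy rate of the intended user against the internal eavesdroppers, under the signal model above (the SINR notation and this exact form are assumed here, not taken from the text), is:

    \mathrm{SINR}_j = \frac{|h_j^H w_b|^2}{|g_j^H w_c|^2 + \sigma_j^2},
    \qquad
    R_{\mathrm{sec}} = \Big[ \log_2\!\big(1+\mathrm{SINR}_k\big) - \max_{j \neq k} \log_2\!\big(1+\mathrm{SINR}_j\big) \Big]^{+}

where [x]^+ = max(x, 0), so that a negative difference yields zero secrecy rate.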
3. MODELING ASSUMPTIONS
3.1 Sensor Network Model
We consider a wireless sensor network deployed over a large area and operating under a single-carrier slotted Aloha type random access protocol. We assume symmetric transmission and reception in the sense that a node i can receive a signal from node j if and only if node j can receive a signal from node i. Time is divided into time slots and the slot size is equal to the size of a packet. All nodes are assumed to be synchronized when transmitting with respect to time slot boundaries. Each node transmits at a fixed power level P with an omnidirectional antenna, and its transmission range R and sensing range Rs are circular with a sharp boundary. Transmission and sensing ranges are defined by two thresholds of received signal strength. A node within the transmission range of node i can correctly decode messages transmitted by i, while a node within the sensing range can just sense activity, due to a signal strength higher than noise, but cannot decode the transmitted message. Typically, Rs is a small multiple of R, ranging from 2 to 3.
A node within distance R of a node i (excluding node i itself) is called a neighbor of i. The neighborhood of i, N_i, is the set of all neighbors of i, with n_i = |N_i| being the size of i's neighborhood. Transmissions from node i are received by all its neighbors. The sensor network is represented by an undirected graph G = (S, E), where S is the set of sensor nodes and E is the set of edges; an edge (i, j) denotes that sensors i and j are within transmission range of each other. Sensor nodes are uniformly distributed in the area with a given spatial density of nodes per unit area, and the topology is static, i.e., we assume no mobility. Each node has an initial amount of energy E. We do not consider the energy consumed in reception. Each node is equipped with a single transceiver, so that it cannot transmit and receive simultaneously. All nodes are assumed to be continuously backlogged, so that there are always packets in each node's buffer in each slot. Packets can be generated by higher layers of a node, or they may come from other nodes

and need to be forwarded or they may be previously sent and


collided packets to be retransmitted.
A transmission on edge (i, j) is successful if and only if no node in (N_j ∪ {j}) \ {i} transmits during that transmission. In this work, we consider the class of slotted Aloha type random access protocols that are characterized by a common channel access probability for all network nodes in each slot. This provides us with a straightforward means to quantify the network effort to withstand and confront the attack by regulating the amount of transmitted traffic and essentially exposing the attacker to the detection system, as will become clear in the sequel.
Provided that it remains silent in a slot, a receiver node j
experiences collision if at least two nodes in its neighborhood
transmit simultaneously, regardless of whether the
transmitted packets are destined for node j or for other nodes.
Thus, the probability of collision at node j in a slot is:
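Assuming each node transmits independently in a slot with the common channel access probability, denoted here by p (a symbol the text does not introduce), this probability takes the standard slotted-Aloha form:

    P_{\mathrm{col}}(j) = 1 - (1-p)^{n_j} - n_j\, p\, (1-p)^{n_j - 1}

the two subtracted terms being the probabilities that none, or exactly one, of node j's n_j neighbors transmits in the slot.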

If node j attempts to transmit at a slot while it


receives a message, a collision occurs as well. In that case, the receiver is not in a position to tell whether the collision is due to its own transmission or whether it would have occurred anyway. In the sequel, we will use the term collision for the event in which multiple simultaneous transmissions are received by (though not necessarily intended for) a node while that node makes no transmission attempt of its own. Note that, if we include the possibility that the receiver also attempts to access the channel, the probability of collision is the same as the one above with n_j substituted by n_j + 1. Whenever a collision occurs at a receiver, the packet is retransmitted in the next slot if the transmitter accesses the channel again. If a node does not have any neighbors (i.e., n_j = 0), then this node does not receive any packets and does not experience collisions.
3.2 Attacker Model
We consider one attacker, the jammer, in the sensor
network area. The jammer is neither authenticated nor
associated with the network. The objective of the
jammer is to corrupt legitimate transmissions of sensor
nodes by causing intentional packet collisions at
receivers. Intentional collision leads to retransmission,
which is translated into additional energy consumption
for a certain amount of attainable throughput or
equivalently reduced throughput for a given amount of
consumed energy. In this paper, we do not consider the
attacker that is capable of node capture.
The jammer may use its sensing ability in order to
sense ongoing activity in the network. Clearly, sensing
ongoing network activity prior to jamming is beneficial
for the attacker in the sense that its energy resources are
not aimlessly consumed and the jammer is not needlessly
exposed to the network. The jammer transmits a small
packet which collides with legitimate transmitted
packets at their intended receivers.


Fig. 2: Illustration of a jamming attack


As argued in, a beacon packet of a few bits suffices to disrupt
a transmitted packet in the network. The jammer is assumed
to have energy resources denoted by Em, yet the
corresponding energy constraint in the optimization problems
of the next section may be redundant if the jammer adheres to
the policy above.
The jammer uses an omnidirectional antenna with a circular sensing range Rms and an adaptable transmission range Rm that is realized by controlling the transmission power Pm, as illustrated in Fig. 1. The jammer also controls the probability q of jamming the area within its transmission range in a slot, thus controlling the aggressiveness of the attack.
The attack space is, therefore, specified by the set P × [0, 1], where P is the discrete set of employed power levels. The attacker attempts to strike a balance between short- and long-term benefits, as a more aggressive attack increases the instantaneous benefit but exposes the attacker to the detection system, while a milder attack may prolong the detection time.
If the jammer senses the channel prior to deciding
whether to jam or not, collision occurs at node j if the jammer
jams and at least one neighbor transmits. Thus, conditioned
on existence of a jammer, the probability of collision at node j
is

On the other hand, if jamming occurs without prior channel


sensing, the probability of collision is

Thus, the probability of collision is the same regardless of whether the channel is sensed prior to jamming. This implies that jamming can be viewed as a multiple-access situation between a network of legitimate nodes, each with the common access probability, and the jammer with access probability q. Nevertheless, by using sensing, the adversary does not waste energy on empty slots and conserves energy by a factor equal to the probability that at least one legitimate node in the jammer's sensing range transmits in a slot; for a large number of such nodes this factor approaches 1. Namely, for a dense sensor network, it is very likely that some transmission will always occur in the network and, therefore, it does not really make a difference whether the attacker senses the channel or not. In the sequel, we will not consider this energy saving factor. We will subsequently assume that the adversary possesses different amounts of knowledge about the network, ranging from full knowledge of network parameters, such as the access probability and the neighborhood of a monitor node, to no knowledge at all. The network's differing levels of knowledge about the attacker will be considered as well.
3.3 Attack Detection Model:
The network employs a mechanism for monitoring network
status and detecting potential malicious activity.
The monitoring mechanism consists of:
1) Determination of a subset of nodes M that act as monitors.
2) Employment of a detection algorithm at each monitor node.
The assignment of the role of monitor to a node is affected by potential existing energy consumption and node
performance specifications. In this work, we consider a fixed
set M, and formulate optimization problems for one or
several monitor nodes. We fix attention to a specific monitor
node and the detection scheme that it employs.
First, we need to define the quantity to be observed at each
monitor. In our case, the readily available metric is the
probability of collision that a monitor node experiences,
namely the percentage of packets that are erroneously
received.
During normal network operation and in the absence of a
jammer, we consider a large enough training period in which
the monitor node learns the percentage of collisions it
experiences as the long-term limit of the ratio of number of
slots where there was collision over total number of slots of
the training period. Now let the network operate in the open
after the training period has elapsed and fix attention to a time
window much smaller than the training period.
An increased percentage of collisions in the time window
compared to the learned long-term ratio may be an indication
of an ongoing jamming attack that causes additional
collisions. However, it may happen as well that the network
operates normally and there is just a temporary irregular
increase in the percentage of collisions compared to the
learned ratio for that specific interval.
A detection algorithm is part of the detection module at a
monitor node; it takes as input observation samples obtained
by the monitor node (i.e., collision/not collision) and decides
whether there is an attack or not.
On one hand, the observation window should be small
enough, such that the attack is detected in a timely manner
and appropriate countermeasures are initiated.
On the other hand, this window should be sufficiently
large, such that the chance of a false alarm notification is
reduced.
The sequential nature of observations at consecutive time
slots motivates the use of sequential detection techniques.
A sequential decision rule consists of:
1) A stopping time, indicating when to stop taking
observations.
2) A final decision rule that decides between the two
hypotheses.
A sequential decision rule is efficient if it can provide
a reliable decision as fast as possible. The probability of false
alarm PFA and probability of missed detection PM constitute
inherent trade-offs in a detection scheme in the sense that a
faster decision unavoidably leads to higher values of these



probabilities while lower values are attained at the expense of
detection delay.Forgiven values of PFA and PM, the
detection test that minimizes the average number of required
observations (and thus average delay) to reach a decision
among all sequential and non sequential tests for which PFA
and PM do not exceed the predefined values above is Walds
Sequential Probability Ratio Test (SPRT).
When SPRT is used for sequential testing between two
hypotheses concerning two probability distributions, SPRT is
optimal in that sense as well.
3.4 Notification Delay
Following detection of an attack, the network needs to be notified in order to launch appropriate countermeasures. The transfer of the notification message out of the jammed area is performed with multi-hop routing from the monitor node to a node outside the jammed region.
The same random access protocol with channel access
probability is employed by a node to forward the message to
the next node. Having assumed a single-channel sensor
network, we implicitly exclude the existence of a control
channel that is used for signaling attack notification
messages.
Hence, the transfer of the notification message out of the jammed area will take place in the same channel and will still undergo jamming. Clearly, the time that is needed for the
notification message to be passed out of the jammed area
depends on the jamming strategy as well as the network
channel access probability. For that reason, we use the sum of
detection and notification delay as a metric that captures the
objective of the attacker and the network. It is understood that
if there exists a control channel for signaling notification
messages that is not jammed, then only the detection delay is
needed as a performance objective. If this control channel is
jammed, then one needs to consider the notification time but
also assess the cost incurred by jamming an additional
channel. We discuss briefly this issue in the last section as
part of future work.
We now compute the average time needed for the notification message to be carried out of the jammed area. The probability of successful channel access for a node i along the route of the notification message in the presence of jamming determines the per-hop delay: the expected number of transmission attempts before a successful transmission, which is also the expected delay for node i, is the reciprocal of this success probability, measured in slots. In a
single-channel network, the adversary can cause additional
disruption to the network by jamming the alert message even
after being detected. In order to find the average delay for
transmitting an alert out of the jammed region, let us first
denote the average number of hops to deliver the alarm out of
jammed area Am by H.
Clearly, the expected notification delay depends on the
expected number of hops it takes for the notification message
to leave the jammed area which in turn depends on the
position of the monitor node. We assume dense sensor
deployment and, thus, roughly approximate the route
followed by the notification message with an almost straight
line. This means that H = Rm/(2R); namely, H is equal to the average distance of a monitor from the boundary of the jammed area (Rm/2) divided by the node transmission range R.
We adhere to this approximation since the exact
expression for the distribution of H depends on knowledge
about the network topology and the location of the monitor.
Such knowledge is rather unrealistic to assume for the
attacker and even for the network itself.
The average time needed for the alarm to propagate out of the jammed area is referred to as the notification delay.
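A rough first-order sketch of this quantity, under the assumptions above (a hop succeeds when the forwarding node transmits with probability p while its n interfering neighbors stay silent and the jammer, jamming with probability q, also stays silent; n is an assumed stand-in for the relevant neighborhood size, and this is an approximation rather than an exact expression):

    P_{\mathrm{succ}} \approx p\,(1-p)^{n}\,(1-q),
    \qquad
    \mathbb{E}[D_{\mathrm{notify}}] \approx \frac{H}{P_{\mathrm{succ}}} = \frac{R_m/(2R)}{p\,(1-p)^{n}\,(1-q)} \ \text{slots}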

4. BLOCK DIAGRAM

Fig. 3: Block diagram of the mobile jamming circuit.

With the help of this circuit, the speaker, keypad and mic are blocked.
5. RANDOM MODEL WITH ONE TRANSMITTER AND ONE JAMMER
We first consider one transmitter node (node 1) and one jammer node (node 2) at a single channel access point, as shown in Fig. 4. Packets arrive randomly at node 1's queue at a given rate (packets per time slot) and are transmitted over a single channel. Node 2 does not have its own traffic and jams node 1's transmissions. We assume that each transmission (packet or jamming signal) consumes one unit of energy. We consider a synchronous slotted system, in which each packet transmission (or jamming attempt) takes one time slot. Hence, the jammer cannot wait to detect the start of a transmission before jamming. Later, in Section VI, we will consider the effects of channel sensing and allow the jammer to detect any transmission before starting a denial of service attack.

Fig. 4: Single channel access point with one transmitter and one jammer



As shown in Fig. 4 above, a single channel access point with one transmitter and one jammer is used for the prototype model.
6. CONCLUSIONS
We have considered secure communication in the K-user downlink cellular system. For the secrecy improvement of the intended user, we developed a joint transmission strategy with the assistance of a CJ, based on the framework of the power gain region. The numerical results verify the performance improvement of the proposed scheme.
REFERENCES
[1] A. D. Wyner, "The wire-tap channel," The Bell Sys. Tech. J., vol. 54, no. 8, pp. 1355-1387, Oct. 1975.
[2] X. Tang, R. Liu, P. Spasojevic, and H. V. Poor, "Interference assisted secret communication," to appear in IEEE Trans. Inf. Theory. Preprint available on arXiv:0908.2397, Aug. 2009.
[3] S. Goel and R. Negi, "Guaranteeing secrecy using artificial noise," IEEE Trans. Wireless Commun., vol. 7, no. 6, pp. 2180-2189, June 2008.
[4] A. Khisti and G. Wornell, "Secure transmission with multiple antennas I: the MISOME wiretap channel," IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3088-3104, Jul. 2010.
[5] E. A. Jorswieck, "Secrecy capacity of single- and multi-antenna channels with simple helpers," in Proc. 2010 Int. ITG Conf. on Source and Channel Coding, pp. 1-6.
[6] A. Khisti, A. Tchamkerten, and G. Wornell, "Secure broadcasting over fading channels," IEEE Trans. Inf. Theory, vol. 54, no. 6, pp. 1-7, Jun. 2008.
[7] A. Mukherjee and A. L. Swindlehurst, "Utility of beamforming strategies for secrecy in multiuser MIMO wiretap channels," in Proc. 2009 Allerton Conf. on Comm., Control and Comp., pp. 1134-1141.
[8] M. Vázquez, A. Pérez-Neira, and M. Lagunas, "Confidential communication in downlink beamforming," in Proc. 2012 IEEE Workshop on Sign. Proc. Adv. in Wireless Comm., pp. 349-353.
[9] S. Jeong, K. Lee, J. Kang, Y. Baek, and B. Koo, "Cooperative jammer design in cellular network with internal eavesdroppers," in Proc. 2012 IEEE Mil. Comm. Conf., pp. 1-5.
[10] H. D. Ly, T. Liu, and Y. Liang, "Multiple-input multiple-output Gaussian broadcast channels with common and confidential messages," IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5477-5487, Nov. 2010.
[11] Q. Li and W. K. Ma, "Optimal and robust transmit designs for MISO channel secrecy by semidefinite programming," IEEE Trans. Signal Process., vol. 59, no. 8, pp. 3799-3812, Aug. 2011.
[12] Y. Yang, W.-K. Ma, J. Ge, and P. C. Ching, "Cooperative secure beamforming for AF relay networks with multiple eavesdroppers," IEEE Signal Process. Lett., vol. 20, no. 1, pp. 35-38, Jan. 2013.
[13] E. G. Larsson and E. A. Jorswieck, "Competition versus cooperation on the MISO interference channel," IEEE J. Sel. Areas Commun., vol. 26, no. 7, pp. 1059-1069, Sept. 2008.
[14] Y. Liang, G. Kramer, H. V. Poor, and S. Shamai, "Compound wire-tap channels," EURASIP J. Wireless Commun. and Networking, vol. 2009, no. 5, Mar. 2009.
[15] R. Mochaourab and E. A. Jorswieck, "Optimal beamforming in interference networks with perfect local channel information," submitted to IEEE Trans. Signal Process. Preprint available on arXiv:1004.4492, Oct.


LITERATURE REVIEW ON TRAFFIC SIGNAL CONTROL SYSTEM BASED ON WIRELESS TECHNOLOGY

A. Rajani, Assistant Professor, Department of ECE (rajanirevanth446@gmail.com)
P. Rajesh, Associate Professor, Department of ECE (rajeshpatur.aits@gmail.com)
G. Krishnaiah, M.Tech (DSCE) Student (sreekrishna.g@gmail.com)
Annamacharya Institute of Technology and Sciences, Tirupati, India - 517520

Abstract: This paper presents a literature review of different types of techniques used in traffic signal control systems. It is a known fact that one of the main issues to be addressed by today's traffic management schemes is traffic congestion on city road networks. Traffic congestion on city roads often delays our work. All this heavy traffic nowadays is due to the fact that the number of vehicles is increasing exponentially, while the technology used for traffic control on roads is limited. Traffic police at crossroads and automatic traffic signal schemes are the most used traffic control schemes in India. In contrast to the regular schemes, intelligent traffic management schemes based on image processing and wireless sensor networks have been used to control traffic over the last five years. However, besides their advantages, these schemes have some limitations. It is found that using a wireless system along with an embedded system has benefits over the existing techniques.
Key words: Automatic traffic signals, intelligent traffic management schemes, image processing, wireless sensor networks, wireless communication technology.

I. INTRODUCTION
The continuous increase in the congestion level on public roads, especially at rush hours, is a critical problem in many countries and is becoming a major concern to transportation specialists and decision makers. The existing methods for traffic management, surveillance and control are not adequately efficient in terms of performance, cost, and the effort needed for maintenance and support.
Many techniques have been used, including ground-level sensors such as video image processing, microwave radar, laser radar, passive infrared, ultrasonic, and passive acoustic arrays. However, these systems have a high equipment cost and their accuracy depends on environmental conditions [1].
Another widely-used technique in conventional traffic surveillance systems is based on intrusive and non-intrusive sensors, such as inductive loop detectors, in addition to video cameras, for the efficient management of public roads [2][3]. Among them, intrusive sensors may cause disruption of traffic upon installation and repair, and may result in high installation and maintenance costs. Non-intrusive sensors, on the other hand, tend to be large, power hungry, and affected by road and weather conditions, thus resulting in degraded efficiency in controlling the traffic flow. The main problem occurs when traffic congestion costs someone's life, for example when an emergency vehicle such as an ambulance is stuck in traffic.
This paper gives a brief review of a few techniques that have been implemented in one or more countries around the world. Traditional traffic management schemes are discussed first; the sub-section below explains the traditional methods used in traffic control systems.
1.1 Traditional Traffic Control System
Different traditional traffic control schemes are used around the world. A traffic police officer standing at a junction and, later, automatic traffic control signals are among them. Each of these is explained below.
A. Traffic police standing at junction roads
A traffic police officer standing at junctions, or at crossroads, is the simplest and oldest method used for traffic management. It includes a human in the traffic system. A traffic officer is placed at each and every road intersection and manually controls the traffic. A police officer standing in the middle of the road and monitoring the flow of traffic is shown in Fig. 1. The police officer signals the vehicle drivers whether to stop or start, always monitors every road, and decides which lane has to be given first priority. Based on his own

knowledge, he takes the decision as to which lane to allow and which one to stop.


Fig. 2: Automatic traffic signals


Fig. 1: A traffic police officer standing in the middle of the road and controlling the traffic.
This method is the most efficient among all the other systems if the traffic police officer monitors the traffic without error. As it includes a human as part of the system, the efficiency depends on that particular officer. So, this might not be good for heavy traffic conditions and for the whole day, as humans always make mistakes.
B. Automatic traffic signals
The drawback of the above system is removed with the automatic traffic signal system. As we see every day, the automatic traffic signal system includes simple three-color traffic signals. Normally the green light is on for 120 seconds for each lane. A yellow light flashes for 20 seconds before the green light, signaling the vehicle owners to start their vehicles and be ready to go. When the green light is on in one lane, all other lanes display a red light. The automatic traffic signaling system is shown in Fig. 2, where the red signal indicates stop, the yellow signal indicates ready to go, and the green signal indicates go. The major problem with this system is that it cannot identify the amount of traffic in one particular lane, so there is a chance of a traffic jam.
II. RELATED WORK
2.1 Existing Traffic Control Schemes
A. Controlling traffic lights by an image processing scheme
In this scheme the number of vehicles is detected by the system through images rather than using electronic sensors. Alongside the traffic lights, cameras are fixed; these cameras capture images of the vehicles. It is shown that this can avoid the time wasted by a green light on an empty road, and so decreases traffic congestion.
In the image-processing-based intelligent traffic controller of Vikramaditya Dangi, S. S. Rathode and Amol Parab [4], a camera is fixed on tall poles to monitor the traffic. Images extracted from the cameras are analysed to detect the number of vehicles on each lane of the road, and depending on this the signal cycle time is allotted to each lane.
This traffic control signaling system using image processing requires MATLAB to perform the operations on the images captured by the cameras


Fig. 3: Block diagram of traffic control using image processing.

fixed on the tall poles. Image acquisition, edge detection and image enhancement are the major steps in this scheme. The procedure of the image processing scheme can be clearly understood from Fig. 3. The following four steps are the main blocks involved in the process of traffic control:
1. Image acquisition
2. RGB to gray conversion
3. Image enhancement
4. Edge detection (image matching)

Image acquisition is primarily done with the help of a web camera. Initially an image of the road is captured when there is no traffic, and that empty-road image is saved as the reference image in the specified program. RGB to gray conversion is performed on the reference image. Then images of the road with traffic are captured; on these captured images, image acquisition and RGB to gray conversion are performed as well. Then, after the edge detection step, the reference image and the original (traffic on road) image are matched. Based on the percentage of matching, the time duration of the green light is allocated: the green light is on for 90 seconds if the matching is between 0 and 10%, for 60 seconds if the matching is between 10 and 50%, for 30 seconds if the matching is between 50 and 70%, and for 20 seconds if the matching is between 70 and 90%. The red light is on for 60 seconds if the matching is between 90 and 100%.
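A minimal sketch of this flow, assuming OpenCV and NumPy are available; the file names, Canny thresholds and the matching metric (fraction of identical edge pixels) are illustrative choices rather than the exact procedure of [4]:

    import cv2
    import numpy as np

    def edge_map(path):
        # Image acquisition, RGB-to-gray conversion and edge detection (Canny).
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        return cv2.Canny(gray, 50, 150)

    def signal_time(match_percent):
        # Timing rules exactly as listed above; returns (signal, seconds).
        if match_percent <= 10:  return ("green", 90)
        if match_percent <= 50:  return ("green", 60)
        if match_percent <= 70:  return ("green", 30)
        if match_percent <= 90:  return ("green", 20)
        return ("red", 60)

    reference = edge_map("empty_road.jpg")    # captured once, with no traffic
    current   = edge_map("current_road.jpg")  # captured every signal cycle
    match_percent = 100.0 * np.mean(reference == current)
    print(signal_time(match_percent))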

B. Wireless Sensor Networks


Many studies have suggested the use of WSN [5] technology for traffic control [6, 7, 8, 9]. In [7], a dynamic vehicle detection method and a signal control algorithm to control the state of the signal light at a road intersection using WSN technology were proposed. In [9], energy-efficient protocols that can be used to improve traffic safety using WSNs were proposed and used to implement an intelligent traffic management system. In [10], an inter-vehicle


communication scheme between neighbouring vehicles, operating in the absence of a central base station (BS), was proposed.
In this paper, an intelligent and novel traffic light
control system based on WSN is presented. The
system has the potential to revolutionize traffic
surveillance and control technology because of its
low cost and potential for large scale deployment.
The proposed system consists of two parts: WSN and
a control box (e.g. base-station) running control
algorithms. The WSN, which consists of a group of
traffic sensor nodes (TSNs), is designed to provide
the traffic communication infrastructure and to
facilitate easy and large deployment of traffic
systems.


In this section, we present the system model


including some definitions and assumptions. We
assume a single intersection at urban areas with each
side having two legs. A configuration example for the
system is given in Fig. 4 for an urban intersection.
Vehicles arrive at the traffic light intersection (TLI) according to a certain random distribution and depart
after waiting for some time, which also follows a
certain random distribution. For simplicity, and
without loss of generality, we assume that each side
of the TLI is modeled as M/M/1 queue.
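For reference, the standard M/M/1 results that such a model yields for each side of the TLI (with arrival rate lambda and service rate mu; these symbols are assumed here, not defined in the text):

    \rho = \frac{\lambda}{\mu}, \qquad
    L = \frac{\rho}{1-\rho}, \qquad
    W = \frac{1}{\mu - \lambda} \quad (\rho < 1)

so the average delay grows sharply as the utilization rho approaches 1, which is what the adaptive green-time allocation tries to avoid.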
Fig. 5: The in-house built traffic sensor node
The vehicle detection system requires four
components: a sensor to sense the signals generated
by vehicles, a processor to process the sensed data, a
communication unit to transfer the processed data to
the BS for further processing, and an energy source.
We adopt a simple time division multiple access (TDMA) scheme at the MAC layer, since it is more power efficient: it allows the nodes in the network to enter inactive states until their allocated time slots. The scheme embodies a simple scheduling algorithm that minimizes the time needed for collecting data from all nodes back at the BS. The algorithm assigns a group of non-conflicting nodes to transmit in each time slot, in such a way that the data packets generated at each node reach the BS by the end of the scheduling frame.
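A minimal sketch of the kind of slot grouping described above (a greedy assignment of non-conflicting nodes to the earliest possible slot; the conflict sets and node names are illustrative, and this is not the paper's TSCA/TSTMA algorithm):

    # Greedy TDMA slot assignment: put each node into the earliest slot that
    # contains no conflicting node. 'conflicts' maps a node to the set of nodes
    # it must not share a slot with (e.g., neighbors or nodes sharing a receiver).
    def assign_slots(nodes, conflicts):
        slots = []  # each slot is a set of mutually non-conflicting nodes
        for node in nodes:
            for slot in slots:
                if not (conflicts.get(node, set()) & slot):
                    slot.add(node)
                    break
            else:
                slots.append({node})
        return slots

    conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}, "D": set()}
    print(assign_slots(["A", "B", "C", "D"], conflicts))
    # e.g. [{'A', 'C', 'D'}, {'B'}]: A, C and D can share a slot, B cannot.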

Fig. 4: Single intersection configuration example of the WSN

To streamline our presentation, we introduce some useful notations and definitions that will be used throughout the paper, presented in the following bullet points:

Traffic Phase: defined as the group of directions


that allow waiting vehicles to pass the intersection at
the same time without any conflict.


Traffic Phase Plan: defined as the sequence of traffic phases in time.
Traffic Cycle: defined as one complete series of a traffic phase plan executed in a round-robin fashion.
Traffic Cycle Duration (T): the time of one traffic cycle, needed to set the green and red time durations for each traffic signal.

For urban areas with multiple intersections, we assume a mesh network of intersections with a rectilinear topology. An open queuing network is used to model the traffic flow between these multiple intersections. In the mesh topology, the intersections at the boundary are called edge intersections, while the remaining intersections are called receiving and forwarding intersections. The average speeds for all intersections are assumed to be constant. All queue lengths for all active directions are initialized to zero. The distances (horizontal or vertical) between any pair of intersections are assumed fixed and equal to a predefined base distance (d).

We now describe the design of the WSN that is used as the communication infrastructure in the proposed traffic light controller. We have designed, built, and implemented a complete functional WSN and used it to validate our proposed algorithms. Fig. 5 shows the final product of the in-house developed TSN. The functional TSN was built using some available off-the-shelf components (N.B. commercial sensor nodes like MICA motes were not available). The entire TSN is encased in such a manner that it can be placed on pavement on the testing roads. For the system components to be able to communicate (e.g., the traffic control box and the BS), traffic WSN communication and vehicle detection algorithms were devised. To be specific, two algorithms are developed, namely the traffic system communication algorithm (TSCA), which is presented in this section, and the traffic signals time manipulation algorithm (TSTMA), which is presented in the next section. These algorithms interact with each other and with other system components for the successful operation of the control system. The process starts from the traffic WSN (which includes the TSNs and the traffic BS), goes through the TSCA and the TSTMA, and ends by applying the efficient time settings for the traffic light durations on the traffic signals. The TSCA is developed to find and control the communication routes between all nodes.
The power restrictions of sensor nodes arise due to their small physical size and lack of wires. Since the absence of wires results in the lack of a constant power supply, not many power options exist. Power limitations greatly affect security, since encryption algorithms introduce a communication overhead between the nodes.
III. CONCLUSION
In this paper, different existing traffic control schemes (using image processing and using wireless sensor networks) are discussed.
A new traffic control system based on wireless communication has been proposed. The urban road traffic control system generally includes a signal control machine, traffic lights, variable message signs (VMS) and other detectors. The wireless traffic signal control system is composed of a master node and slave nodes. The master node is the center of the system; it is a signal control machine and provides control signals to the slave nodes. The slave nodes are the end-point devices of the system; each receives and executes instructions from the master node and then returns a report. All of these devices communicate by wireless communication modules.
The performance of the proposed traffic control scheme has benefits over all the other existing schemes.

REFERENCES
1. The Vehicle Detector Clearinghouse, "A summary of vehicle detection and surveillance technologies used in intelligent transportation systems," Southwest Technology Development Institute, 2000.
2. Minnesota Department of Transportation, "Portable non-intrusive traffic detection system," http://www3.dot.state.mn.us/guidestar/pdf/pnitds/techmemo-axlebased.pdf.
3. S. Coleri, S. Y. Cheung, and P. Varaiya, "Sensor networks for monitoring traffic," in Proceedings of the 42nd Annual Allerton Conference on Communication, Control, and Computing, 2004, pp. 32-40.
4. Vikramaditya Dangi, Amol Parab, and S. S. Rathode, "Image processing based intelligent traffic controller," Undergraduate Academic Research Journal, ISSN: 2278-1129, vol. 1, issue 1, 2012.
5. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, 2002, pp. 102-114.
6. A. N. Knaian, "A wireless sensor network for smart roadbeds and intelligent transportation systems," Technical Report, Electrical Science and Engineering, Massachusetts Institute of Technology, June 2000.
7. W. J. Chen, L. F. Chen, Z. L. Chen, and S. L. Tu, "A realtime dynamic traffic control system based on wireless sensor network," in Proceedings of the 2005 International Conference on Parallel Processing Workshops, vol. 14, 2005, pp. 258-264.
8. Y. Lai, Y. Zheng, and J. Cao, "Protocols for traffic safety using wireless sensor network," Lecture Notes in Computer Science, vol. 4494, 2007, pp. 37-48.
9. J. S. Lee, "System and method for intelligent traffic control using wireless sensor and actuator networks," Patent # 20080238720, 2008.
10. Z. Iqbal, "Self-organizing wireless sensor networks for inter-vehicle communication," Master Thesis, Department of Computer and Electrical Engineering, Halmstad University, 2006.


Bluetooth Based Smart Sensor Networks

1. Raja Sekar. S (3rd Year ECE)
2. Paduchuri Harish (3rd Year ECE)
Introduction:
The communications capability of devices and continuous, transparent information routes are indispensable components of future-oriented automation concepts. Communication is increasing rapidly in the industrial environment, even at the field level.
In any industry the process can be realized through sensors and controlled through actuators. In Distributed Control Systems (DCS), the process is monitored from the central control room by getting signals through a pair of wires from each field device. With the advent of networking, the cost of wiring is saved by networking the field devices. The latest trend, however, is the elimination of wires, i.e., wireless networks.
Wireless sensor networks are networks of small devices equipped with sensors, a microprocessor and a wireless communication interface.
In 1994, Ericsson Mobile Communications, the global telecommunication company based in Sweden, initiated a study to investigate the feasibility of a low-power, low-cost radio interface, and to find a way to eliminate cables between devices. Finally, the engineers at Ericsson named the new wireless technology Bluetooth, in honour of the 10th-century king of Denmark, Harald Bluetooth (940 to 985 A.D.).
The goals of Bluetooth are unification and harmony as well, specifically enabling different devices to communicate through a commonly accepted standard for wireless connectivity.
Bluetooth:
Bluetooth operates in the unlicensed ISM band at 2.4 GHz and uses a frequency-hopping spread-spectrum technique. A typical Bluetooth device has a range of about 10 meters, which can be extended to 100 meters. The communication channel supports a total bandwidth of 1 Mb/sec. A single connection supports a maximum asymmetric data transfer rate of 721 kbps or a maximum of three channels.
Bluetooth Networks:
In Bluetooth, a piconet is a collection of up to 8 devices that frequency-hop together. Each piconet has one master, usually the device that

initiated the establishment of the piconet, and up to 7 slave devices. The master's Bluetooth address is used to define the frequency-hopping sequence. Slave devices use the master's clock to synchronize their clocks so that they can hop simultaneously.

Wireless
Connectivity
Over Bluetooth:

Network Arrangement:
Bluetooth network arrangements can be either point-to-point or point-to-multipoint. The various network arrangements regarding Bluetooth are:
a) Single-slave
b) Multi-slave (up to 7 slaves on one master)
c) Scatternet
A Piconet:
When a device wants to establish a piconet it has to perform an inquiry to discover other Bluetooth devices in range. The inquiry procedure is defined in such a way as to ensure that two devices will, after some time, visit the same frequency at the same time; when that happens, the required information is exchanged and the devices can use the paging procedure to establish a connection.

The basic function is to provide


a standard wireless technology to
replace the multitude of propriety
cables currently linking computer
devices. Better than the image of the
spaghetti-free computer system is the
ability of the radio technology to the
network when away from traditional
networking structures such as
business
intranet.for,
example
imagine being on a business trip with
a laptop and a phone. The Bluetooth
technology allows interfacing the two.
Then picture meeting a client and
transferring files without cabling or
worrying about protocols. That is
what the Bluetooth will do.

SLA
VE 3

MAS
TER

SLA
VE 1

SLA
VE 2


When more than 7 devices need to communicate, there are two options. The first one is to put one or more devices into the park state; Bluetooth defines three low-power modes: sniff, hold and park. When a device is in the park mode it disassociates from the piconet, but still maintains timing synchronization with it. The master of the piconet periodically broadcasts beacons to invite the slave to rejoin the piconet or to allow the slave to request to rejoin. The slave can rejoin the piconet only if there are fewer than seven slaves already active in the piconet. If not, the master has to park one of the active slaves first. All these actions cause delay, and for some applications this can be unacceptable, e.g., process control applications that require an immediate response from the command centre (central control room).
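The bookkeeping behind the seven-active-slave limit and the park mode can be sketched as follows. The class and parking policy below are hypothetical simplifications (no radio, clocks or beacons), meant only to show why admitting an eighth device forces the master to park an active slave first.

MAX_ACTIVE_SLAVES = 7  # a piconet allows one master and up to 7 active slaves

class Piconet:
    def __init__(self, master_address):
        self.master = master_address
        self.active = []   # active member addresses
        self.parked = []   # parked members keep only timing synchronization

    def join(self, address):
        if len(self.active) < MAX_ACTIVE_SLAVES:
            self.active.append(address)
        else:
            # No free active slot: park the oldest active slave first,
            # then admit the new device (simplified policy).
            self.parked.append(self.active.pop(0))
            self.active.append(address)

    def rejoin_from_park(self, address):
        # A parked slave may rejoin only if fewer than 7 slaves are active.
        if address in self.parked and len(self.active) < MAX_ACTIVE_SLAVES:
            self.parked.remove(address)
            self.active.append(address)
            return True
        return False

net = Piconet("00:11:22:33:44:55")
for i in range(8):            # the 8th device triggers parking
    net.join(f"slave-{i}")
print(len(net.active), len(net.parked))   # -> 7 1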
A scatternet consists of several piconets connected by devices participating in multiple piconets. These devices can be slaves in all piconets, or a master in one piconet and a slave in another. Using scatternets, higher throughput is available and multi-hop connections between devices in different piconets are possible. A unit can communicate in only one piconet at a time, so it jumps from one piconet to another depending upon the channel parameters.

Figure: A scatternet.

Frequency Hopping in Bluetooth:
Bluetooth has been designed to operate in a noisy radio frequency environment, and uses a fast acknowledgement and frequency-hopping scheme to make the link robust, communication-wise. Bluetooth radio modules avoid interference from other signals by hopping to a new frequency after transmitting or receiving a packet. Thus, Bluetooth modules use Frequency-Hopping Spread Spectrum (FHSS) techniques for voice and data transmission. Compared with other systems operating in the same frequency band, the Bluetooth radio typically hops faster and uses shorter packets.



FHSS uses packet-switching to send


data from the transmitter of one
Bluetooth module to the receiver of
another. Unlike circuit-switching
which establishes a communication
link on a certain frequency (channel),
FHSS breaks the data down into small
packets and transfers it on a wide
range of frequencies across the
available frequency band. Bluetooth
transceivers switch or hop among
79 hop frequencies in the 2.4 GHz
band at a rate of 1,600 frequency hops
per second.
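To make these numbers concrete, the toy sketch below derives a repeatable hop sequence over the 79 channels from a master address and clock value, and shows the 625 microsecond dwell time implied by 1,600 hops per second. A hash stands in for the real hop-selection kernel defined in the Bluetooth specification, so this is an illustration only.

import hashlib

NUM_CHANNELS = 79           # 1 MHz channels in the 2.4 GHz ISM band
HOPS_PER_SECOND = 1600
DWELL_TIME_US = 1_000_000 // HOPS_PER_SECOND   # 625 microseconds per hop

def hop_channel(master_address: str, clock: int) -> int:
    """Map (master address, clock tick) to a channel index 0..78.
    Illustrative only: a SHA-256 digest stands in for the real selection kernel."""
    digest = hashlib.sha256(f"{master_address}:{clock}".encode()).digest()
    return digest[0] % NUM_CHANNELS

sequence = [hop_channel("00:11:22:33:44:55", tick) for tick in range(10)]
print(sequence)               # every slave that knows the master address and clock
print(DWELL_TIME_US, "us per hop")   # computes the same sequence and hops in step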
Bluetooth: Simply the Best!
Bluetooth competes with existing technologies like IrDA, WLAN and HomeRF. IrDA is not omnidirectional and is a line-of-sight technology. WLANs are essentially ordinary LAN protocols modulated on carrier waves, while Bluetooth is more complex. HomeRF is a voice and data home networking technology which operates at a low speed. Bluetooth hops very fast (1600 hops/second) between frequencies, which does not allow for long data blocks. These problems are overcome in Bluetooth, and that is why Bluetooth is considered simply the best.

Bluetooth Based Sensor Networks:
The main challenge in front of Bluetooth developers now is to prove interoperability between different manufacturers' devices and to provide numerous interesting applications. One such application is a wireless sensor network.
Wireless sensor networks comprise a number of small devices equipped with a sensing unit, a microprocessor, a wireless communication interface and a power source. An important feature of wireless sensor networks is the collaboration of network nodes during task execution. Another specific characteristic of wireless sensor networks is their data-centric nature. As the deployment of smart sensor nodes is not planned in advance and the positions of nodes in the field are not determined, it can happen that some sensor nodes end up in positions where they either cannot perform the required measurement or the error probability is high. For that reason a redundant number of smart nodes is deployed in the field. These nodes then communicate, collaborate and share data, thus ensuring better results. Smart sensor nodes scattered in the field collect data and send it to users via a gateway using multi-hop routes.


Figure: Wireless sensor network architecture: smart sensor nodes with short-range wireless interfaces connect to a gateway (gateway logic and a public network interface) that links the network to its users.
The main functions of a gateway are:
Communication with the sensor network: Short-range wireless communication is used. The gateway provides functions such as discovery of smart sensor nodes, generic methods of sending and receiving data to and from sensors, and routing.
Gateway logic: It controls the gateway interfaces and the data flow to and from the sensor network. It provides an abstraction level that describes the existing sensors and their characteristics, and it provides functions for uniform access to sensors regardless of their type, location or network topology, to inject queries and tasks and to collect replies.
Communication with users: The gateway communicates with users or other sensor networks over the Internet, a WAN, satellite or some short-range communication technology.
From the user's point of view, querying and tasking are the two main services provided by wireless sensor networks. Queries are used when the user requires only the current value of the observed phenomenon. Tasking is a more complex operation and is used when a phenomenon has to be observed over a long period of time. Both queries and tasks are passed to the network by the gateway, which also collects replies and forwards them to users.
SMART SENSOR IMPLEMENTATION:
The main goal of the implementation was to build a hardware platform and generic software solutions that can serve as the basis and a test bed for research on wireless sensor network protocols. The implemented sensor network consists of several smart sensor nodes and a gateway. Each smart node can have several sensors and is equipped with a micro-controller and a Bluetooth radio module. The gateway and the smart nodes are members of a piconet, and hence a maximum of seven smart nodes can exist simultaneously in the network. For example, a pressure sensor is implemented as a Bluetooth node in the following way.


The sensor is connected to the Bluetooth node and consists of the pressure-sensing element, smart signal-conditioning circuitry including calibration and temperature compensation, and the Transducer Electronic Data Sheet (TEDS). These features are built directly into the sensor microcontroller used for node communication control, plus memory for the TEDS configuration information.
Smart Sensor Node Architecture:
The architecture shown in the figure can easily be developed for specific sensor configurations such as thermocouples, strain gauges, and other sensor technologies, and can include sensor signal conditioning as well as communications functions. The conditioned analog sensor signal is digitized and the digital data is

then processed using stored TEDS data. The pressure sensor node collects data from multiple sensors and transmits the data via Bluetooth wireless communication in the 2.4 GHz band to a network hub or another internet appliance such as a computer. The node can supply excitation to each sensor, or external sensor power can be supplied. Up to eight channels are available on each node for analog inputs as well as digital output. The sensor signal is digitized with 16-bit A/D resolution for transmission along with the TEDS for each sensor. This allows each channel to identify itself to the host system. The node can operate from either an external power supply or an attached battery. The maximum transmission distance is 10 meters, with an optional capability of up to 100 meters.

The IEEE 1451 family of standards is used to define the functional boundaries and interfaces that are necessary to enable smart transducers to be easily connected to a variety of networks. The standards define the protocol and functions that give the transducer interchangeability in networked systems; with this information a host microcomputer recognizes a pressure sensor, a temperature sensor, or another sensor type, along with the measurement range and scaling information, based


on the information contained in the TEDS data.
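The way a channel "identifies itself to the host" can be pictured with a minimal TEDS-like record. The field names below are illustrative only and do not follow the binary TEDS templates defined by IEEE 1451.

from dataclasses import dataclass

@dataclass
class TedsRecord:
    # Illustrative fields a host could use to interpret raw samples.
    sensor_type: str        # e.g. "pressure", "temperature"
    units: str              # engineering units of the scaled value
    min_value: float        # measurement range
    max_value: float
    adc_bits: int = 16      # samples are digitized with 16-bit resolution

    def scale(self, raw_sample: int) -> float:
        """Convert a raw ADC code into engineering units."""
        full_scale = (1 << self.adc_bits) - 1
        span = self.max_value - self.min_value
        return self.min_value + (raw_sample / full_scale) * span

pressure_teds = TedsRecord("pressure", "kPa", 0.0, 1000.0)
print(pressure_teds.scale(32768))   # a mid-scale reading maps to roughly 500 kPa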
With Bluetooth technology, small transceiver modules can be built into a wide range of products including sensor systems, allowing fast and secure transmission of data within a given radius (usually up to 10 m).
A Bluetooth module consists primarily of three functional blocks: an analog 2.4 GHz Bluetooth RF transceiver unit, a baseband link controller unit, and a support unit for link management and host controller interface functions.
The host controller has a hardware digital signal processing part, the Link Controller (LC), a CPU core, and interfaces to the host environment. The link controller consists of hardware and software parts that perform Bluetooth baseband processing and physical-layer protocols. The link controller performs low-level digital signal processing to establish connections, assemble or disassemble packets, control frequency hopping, correct errors and encrypt data.

Bluetooth Module Hardware Architecture:
The CPU core allows the Bluetooth module to handle inquiries and filter page requests without involving the host device. The host controller can be programmed to answer certain page messages and authenticate remote links. The link manager (LM) software runs on the CPU core. The LM discovers other remote LMs and communicates with them via the Link Manager Protocol (LMP) to perform its service-provider role using the services of the underlying LC. The link manager is a software function that uses the services of the link controller to perform link setup, authentication, link configuration, and other protocols. Depending on the implementation, the link controller and link manager functions may not reside in the same processor.
Another functional component is, of course, the antenna, which may be integrated on the PCB or come as a standalone item. A fully implemented Bluetooth module also incorporates higher-level software protocols, which govern the functionality and interoperability with other modules.
The gateway plays the role of the piconet's master in the sensor network. It controls the establishment of the network, gathers information about the existing smart sensor nodes and the sensors attached to them, and provides access to them.
Discovery of the Smart Sensor Nodes:


Smart sensor node discovery is the first procedure executed upon gateway installation. Its goal is to discover all sensor nodes in the area and to build a list of sensor characteristics and the network topology. Afterwards, it is executed periodically to facilitate the addition of new sensors or the removal of existing ones. The following algorithm is proposed. When the gateway is initialized, it performs the Bluetooth inquiry procedure. When a Bluetooth device is discovered, its major and minor device classes are checked. These parameters are set by each smart node to define the type of the device and the type of the attached sensors. The service class field can be used to give some additional description of the offered services. If the discovered device is not a smart node it is discarded. Otherwise the service database of the discovered smart node is searched for sensor services. As there is currently no specific sensor profile, the database is searched for the serial port profile connection parameters. Once the connection string is obtained from the device, a Bluetooth link is established and data exchange with the smart node can start.
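The discovery procedure above can be summarised in a short sketch. The inquiry results, device-class constant and service records are mocked, since a real gateway would obtain them from the Bluetooth stack's inquiry and service-discovery calls.

SMART_NODE_MAJOR_CLASS = 0x1F   # hypothetical class code marking a smart sensor node

# Mocked inquiry results: (address, major_class, service_records)
DISCOVERED = [
    ("00:11:22:33:44:01", SMART_NODE_MAJOR_CLASS,
     {"serial_port": "btspp://00:11:22:33:44:01:1"}),
    ("00:11:22:33:44:02", 0x02, {}),              # a phone - not a smart node
]

def discover_smart_nodes(inquiry_results):
    nodes = []
    for address, major_class, services in inquiry_results:
        if major_class != SMART_NODE_MAJOR_CLASS:
            continue                      # discard devices that are not smart nodes
        # No dedicated sensor profile exists, so look for the serial port profile.
        connection_string = services.get("serial_port")
        if connection_string:
            nodes.append({"address": address, "connect": connection_string})
    return nodes

print(discover_smart_nodes(DISCOVERED))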
FUTURE TASKS:
Future work is aimed at developing and designing a Bluetooth-enabled data concentrator for data acquisition and analysis.

Conclusion:
Bluetooth represents a great chance for sensor-network architectures. This architecture heralds a wireless future for the home and also for industrial implementation. With a Bluetooth RF link, users only need to bring the devices within range, and the devices will automatically link up and exchange information.
Thus the implementation of Bluetooth technology for sensor networks not only cuts wiring cost but also integrates the industrial environment into a smarter environment.
Today, with broader specifications and a renewed concentration on interoperability, manufacturers are ready to forge ahead and take Bluetooth products to the market place. Embedded designs can incorporate Bluetooth wireless technology into a range of new products to meet the growing demand for connected information appliances.


IMPLEMENTATION OF SECURE AUDIT PROCESS FOR UNTRUSTED STORAGE SYSTEMS
SHAIK ISMAIL
M.Tech 2nd Year, Dept. of CSE, ASCET, Gudur, India
Ismailcse17@gmail.com
_____________________________________________________________________________

Abstract: Cloud computing is the vast computing utility, where users can remotely store their data into the cloud so as to have the benefit of the on-demand availability of huge and different applications and services from a shared pool of configurable computing resources. Public auditing schemes allow users to verify their outsourced data storage without having to retrieve the whole data. However, existing data auditing techniques suffer from efficiency and security problems. First, they introduce a very large communication cost for dynamic data verification, because the verification process for each dynamic operation needs O(log n) communication complexity and must update all replicas. Second, to the best of our knowledge, no existing integrity auditing scheme can provide public verification and authentication of block indices at the same time. To address these problems, in this paper we present a new public auditing scheme named MuR-DPA. The new scheme is integrated with a new authenticated data structure based on the Merkle hash tree, which is named MR-MHT. For the support of dynamic data operations, authentication of block indices and efficient auditing of updates for multiple replicas at a time, the level values of nodes in MR-MHT are generated in a top-down order and all replica blocks of each data block are organized into the same replica sub-tree.

Index Terms: Auditing, Replica Management, Data privacy, Cloud computing.

I. INTRODUCTION
In recent years, the emerging cloud computing is rapidly gaining thrust as an alternative to traditional computing systems. Cloud computing provides a scalable environment for growing amounts of data and processes that work on various applications and services by means of on-demand self-services. Seeing the popularity of cloud computing services, their fast development and advanced technologies, anyone can vote it as the future of the computing world. The cloud stores the information that is locally stored on the computer; it contains spreadsheets, presentations, audio, photos, word-processing documents, videos and records. But for sensitive and confidential data there should be some security mechanism, so as to provide protection for private data.

In many applications data updates are very common, such as in social networks and business transactions. Therefore, a cloud security mechanism is very important.
The three main dimensions of security are confidentiality, integrity and availability. For integrity assurance, public auditing of cloud data has been a widely investigated research problem in recent years. As user data outsourced to cloud servers are out of the user's control, auditing by the client herself or a third-party auditor (TPA) is a common request, no matter how powerful the server-side mechanisms claim to be. With provable data possession (PDP) and proofs of retrievability (POR), the data owner or a TPA can verify the integrity of their data without having to retrieve it. In these schemes, a small piece of metadata called a homomorphic authenticator is stored along with each data block, and data verification is done by the client or a TPA by verifying the proof with the public keys.
Existing public auditing schemes can already support public auditing and various kinds of fully dynamic data updates at the same time. However, there are a few problems that we aim to work on. First, they introduce a very large communication cost for dynamic data verification, because the verification process for each dynamic operation needs O(log n) communication complexity and must update all replicas. Second, to the best of our knowledge, no existing integrity auditing scheme can provide public verification and authentication of block indices at the same time. In this paper, we present a multi-replica dynamic public auditing scheme that can bridge the gaps mentioned above through a newly designed authenticated data structure.

II. Existing System and Challenges
A. Multiple Replicas
For availability, storing multiple replicas is a default setting for cloud service providers. Storing replicas at various servers and/or locations makes user data easily recoverable from service failures. A straightforward way for users to verify the integrity of multiple replicas is to store them as separate files and verify them one by one. More importantly, an update to each data block will require an update of the matching block in each replica. Currently, the most common technique used to support dynamic data is the authenticated data structure (ADS). If all replicas are indexed in their own different ADS, the client must verify these updates one by one to maintain verifiability. The proof of update for each block contains log(n) hash values as supplementary authentication information. Therefore, the communication cost of update verification easily becomes a disaster for users whose cloud data are highly dynamic. In previous schemes, researchers have considered support for public auditing, data dynamics and multiple replicas, but none has considered them all together.
In prior work, the authors proposed a multi-replica verification scheme with great efficiency by associating only one authenticator (HLA) with each block and all its replica blocks. The verification requires the privately kept padding random. Combining our considerations, the preferred properties of a multi-replica verification scheme should include the following:


1. Public auditability and support for dynamic data: enables a TPA to do the regular verification for the client and allows the client to verify data updates.
2. All-round auditing: enables efficient verification of all replicas at a time so that the verifier gets better assurance.
3. Single-replica auditing: enables auditing of an arbitrary replica for some specific blocks; this is useful because the verifier may only want to know whether at least one replica is intact for less important data.

B. Secure Dynamic Public Auditing
As demonstrated in Fig. 1, the three parties in the public auditing game (the client, the cloud service provider and the TPA) do not fully trust each other. An ADS such as an MHT or RASL can enable other parties to verify the content and updates of data blocks. The authentication of a block is accomplished with the data node itself and its auxiliary authentication information (AAI), which is constructed from node values on or near its path. Without verification of block indices, a dishonest server can easily take another intact block and its AAI to fake a proof that could pass authentication.

Figure 1: The relationships between the participating parties.

III. Proposed System and Implementation
a) Preliminaries
i) Bilinear Pairing
Assume G is a gap Diffie-Hellman (GDH) group with prime order p, and G_T is a multiplicative cyclic group with the same prime order. A bilinear map is a map e: G × G → G_T. A useful bilinear map should have the following properties:
1. Bilinearity: e(u^a, v^b) = e(u, v)^(ab) for all u, v ∈ G and a, b ∈ Z_p;
2. Non-degeneracy: e(g, g) ≠ 1 for a generator g of G;
3. Computability: e should be efficiently computable.

b) Merkle Hash Tree
The Merkle Hash Tree (MHT) has been intensively studied in the past. Similarly to a binary tree, each node N has a maximum of 2 child nodes. In fact, according to the update algorithm, every non-leaf node will always have two child nodes. The information contained in one node N of an MHT T is constructed from its children.

c) MuR-DPA: Multi-Replica Dynamic Public Auditing
A multi-replica Merkle hash tree (MR-MHT) is a new authenticated data structure designed for efficient auditing of data updates as well as authentication of block indices. Each MR-MHT is constructed based on not only the logically segmented file but also all its replicas, as well as a pre-defined cryptographic hash function H. The MR-MHT constructed from a file with 4 blocks and 3 replicas is shown in Fig. 3.
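Section b) above describes how each internal MHT node hashes its two children. The minimal sketch below (a plain single-replica MHT, with SHA-256 standing in for H) shows how a block's auxiliary authentication information is produced and how a verifier recomputes the root from it. It is only an illustration of the generic structure, not the MR-MHT with level values l(v) and rank values r(v) proposed in the paper.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    """Return all levels of the tree, leaves first (assumes a power-of-two block count)."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes from leaf to root: the AAI for block `index`."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])   # sibling of the current node
        index //= 2
    return path

def verify(block, index, path, root):
    node = h(block)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

blocks = [b"b0", b"b1", b"b2", b"b3"]
levels = build_levels(blocks)
root = levels[-1][0]
aai = auth_path(levels, 2)
print(verify(b"b2", 2, aai, root))      # True
print(verify(b"bogus", 2, aai, root))   # False - a substituted block fails the check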


Figure 3: An example of MR-MHT.

The differences from the MHT are as follows:
1. The values stored in the leaf nodes are hash values of stored replica blocks. In MR-MHT, leaf nodes represent replica blocks, namely the jth replica of the ith file block.
2. The value stored in a node v at a non-leaf level is computed from the hash values of its child nodes and two other indices l(v) and r(v), where l(v) is the level of node v and r(v) is the maximum number of nodes in the leaf (bottom) level that can be reached from v.

IV. CONCLUSION
In this paper, we presented a novel public auditing scheme named MuR-DPA. The new scheme incorporates a novel authenticated data structure based on the Merkle hash tree, which we name MR-MHT. To support fully dynamic data updates, authentication of block indices and efficient verification of updates for multiple replicas at the same time, the level values of nodes in MR-MHT are generated in a top-down order, and all replica blocks of each data block are organized into the same replica sub-tree.


AUTHORS
Mr. SHAIK ISMAIL received the B.Tech degree in computer science & engineering from AVS College of Engineering & Technology, Nellore (Jawaharlal Nehru Technological University Anantapur) in 2012, and the M.Tech degree in computer science engineering from Audisankara College of Engineering and Technology, Nellore (Jawaharlal Nehru Technological University Anantapur) in 2014. He has participated in national-level paper symposiums at different colleges. His interests include Computer Networks, Mobile Computing, Network Programming, and System Hardware. He is a member of the IEEE.


SECURE DATA DISSEMINATION BASED ON


MERKLE HASH TREE FOR WIRELESS
SENSOR NETWORKS
Udatha Hariprasad 1, K Riyazuddin 2
M.Tech Student 1, Asst. Professor2
Department Of Electronics & Communication Engineering
Annamacharya Institute Of Technology And Sciences, Rajampet
udathahariprasad@gmail.com1, riyazoo2002@yahoo.co.in2

Abstract: Wireless sensor networks (WSNs) are widely applicable in the monitoring and control of environmental parameters. It may be necessary to disseminate data through wireless links after they are deployed, in order to adjust configuration parameters of sensors or to distribute management commands and queries to sensors. Several approaches have been proposed recently for data discovery and dissemination in WSNs. However, they all concentrate on how to ensure reliability and usually ignore security vulnerabilities. This paper identifies the security vulnerabilities in data discovery and dissemination when used in WSNs. Such vulnerabilities allow an attacker to update a network with unwanted values, erase critical variables, or launch denial-of-service (DoS) attacks. To address these vulnerabilities, this paper presents the design, implementation, and evaluation of a secure, lightweight, and DoS-resistant data discovery and dissemination protocol named Se-Drip for WSNs.
Keywords: Network Security, Mobile, Wireless Networks.

I. INTRODUCTION
Multi-hop wireless sensor networks (WSNs) have been attracting great interest in many applications related to the monitoring and control of environmental and physical conditions, such as industrial monitoring and military operations. After a WSN is deployed in the field, it may be necessary to update the installed applications or the stored parameters in the sensor nodes. This is achieved by dissemination services, which ensure that new applications or parameter values are propagated throughout the WSN so that all nodes hold a consistent copy. Normally, a new program is on the order of kilobytes, while a parameter is only a few bytes long. Due to such a vast imbalance between their sizes, the design considerations of their dissemination protocols differ.

Code dissemination (also known as data dissemination or reprogramming) protocols are designed to correctly distribute long messages into a network, enabling complete system reprogramming. On the other hand, data discovery and dissemination protocols are used to deliver short messages, such as several two-byte configuration parameters, within a WSN. Common uses of this type of protocol include injecting small programs, commands and queries, in addition to configuration details.
Recently, several data discovery and dissemination protocols have been proposed. Among them, Drip, DIP and DHV are well known and are included in the TinyOS distributions. However, to the best of our knowledge, all existing data discovery and dissemination protocols only target reliable data transmission and provide no security mechanism. Certainly, this is a critical issue which should be addressed. Otherwise, adversaries could, for example, distribute viral or fake data in order to cripple a WSN deployed in a battlefield.
In the proposed system we first investigate the security problems in the data discovery and dissemination process of WSNs and show that the lack of authentication of the disseminated data introduces a vulnerability allowing the update of arbitrary data in a WSN. We then develop a secure, lightweight, and Denial-of-Service (DoS)-resistant data discovery and dissemination protocol named Se-Drip for WSNs, which is a secure extension of Drip. To gain DoS-attack resilience and permit immediate verification of any received packet, Se-Drip is based on a signed Merkle hash tree. In this way the base station of a WSN must sign only the root of the tree. Furthermore, Se-Drip can tolerate the compromise of some sensor nodes. To boost stability and efficiency, some extra mechanisms such as a message-specific puzzle technique are incorporated in the design of Se-Drip. We also implement the proposed protocol in networks of MicaZ and TelosB motes, respectively. Experimental results demonstrate its high efficiency in practice. To the best of our knowledge, this is also the first implemented secure data discovery and dissemination protocol for WSNs.

II. PREVIOUS WORK
Among these protocols, Deluge is included in the TinyOS distributions. However, since the design of Deluge did not take security into consideration, there have been several extensions to Deluge to provide security protection for code dissemination. Among them, Seluge enjoys both strong security and high efficiency. However, all these code dissemination protocols are based on the centralized approach, which assumes the existence of a base station and that only the base station has the authority to reprogram sensor nodes. As shown in the figure below, when the base station wants to disseminate a new code image, it broadcasts the signed code image and each sensor node only accepts code images signed by it.



Unfortunately, there are WSNs having no base station at all.
Examples of such networks include a military WSN in a
battlefield to monitor enemy activity (e.g., troop movements), a
WSN deployed along an international border to monitor
weapons smuggling or human trafficking, and a WSN situated
in a remote area of a national park monitoring illegal activities
(e.g., firearm discharge, illicit crop cultivation). Having a base
station in these WSNs introduces a single point of failure and a
very attractive attack target. Obviously, the centralized
approach is not applicable to such WSNs.
Trust Model
The network owner delegates his/her code dissemination
privilege to the network users who are willing to register. We
assume the special modules (e.g., authentication module for
each new program image proposed in this paper, the user
access log module) reside in the boot loader section of the
program flash on each sensor node which cannot be
overwritten by anyone except the network owner. To achieve
this goal, some existing approaches can be employed such as
hardware-based approaches (e.g., security chips) and software
based approaches (e.g., program code analysis).
Threat Model We assume that an adversary can launch both
outsider and insider attacks. In outsider attacks, the adversary
does not control any valid nodes in the WSN. The adversary
may eavesdrop, copy or replay the transmitted messages in the
WSN. He/she may also inject false messages or forge non-existing links in the network by launching a wormhole attack. With insider attacks, the adversary can compromise
some users (or sensor nodes) and then inject forged code
dissemination packets, or exploit specific weakness of the
secure protocol architecture.[1]
Experience with wireless sensor network deployments
across application domains has shown that sensor node tasks
typically change over time, for instance, to vary sensed
parameters, node duty cycles, or support debugging. Such
reprogramming
is
accomplished
through
wireless
communication using reprogrammable devices. The goal of
network reprogramming is to not only reprogram individual
sensors but to also ensure that all network sensors agree on the
task to be performed. Network reprogramming is typically
implemented on top of data dissemination protocols. For
reprogramming, the data can be configuration parameters,
code capsules, or binary images. We will refer to this data as a
code item. A node must detect if there is a different code item
in the network, identify if it is newer, and update its code with
minimal reprogramming cost, in terms of convergence speed
and energy.

Early attempts tried to adapt epidemic algorithms to disseminate code updates during specific reprogramming periods. But there is no way for new nodes to discover past updates. If a node is not updated during the reprogramming period, it will never get updated. To discover whether a node needs an update, a natural approach is to query or advertise its information periodically.
The network as a whole may transmit an excessive and unnecessary number of query and advertisement messages. To address this problem, Levis et al. developed the Trickle protocol to allow nodes to suppress unnecessary transmissions. In Trickle, a node periodically broadcasts its versions but politely keeps quiet and increases the period if it hears several messages containing the same information as it has. When a difference is detected, the node resets the period to the lowest preset interval. Trickle scales well with the number of nodes and has successfully reduced the number of messages in the network.
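The suppression and back-off behaviour just described can be captured in a few lines. The sketch below is a simplified model with illustrative constants (minimum/maximum intervals and redundancy threshold k); it is not the parameterisation used by Drip, DIP or DHV.

class Trickle:
    def __init__(self, i_min=1.0, i_max=64.0, k=2):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self.consistent_heard = 0

    def on_interval_end(self):
        # Broadcast our version only if we heard fewer than k consistent messages.
        transmit = self.consistent_heard < self.k
        self.interval = min(self.interval * 2, self.i_max)   # back off politely
        self.consistent_heard = 0
        return transmit

    def on_hear(self, same_version: bool):
        if same_version:
            self.consistent_heard += 1          # suppression counter
        else:
            self.interval = self.i_min          # inconsistency: reset to the fastest rate

t = Trickle()
t.on_hear(True); t.on_hear(True)
print(t.on_interval_end())   # False - suppressed, neighbours already advertised this version
t.on_hear(False)
print(t.interval)            # 1.0 - reset after hearing a different version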
Bit-level identification: Previous CCMPs have transmitted
the complete version number for a code item. We observe that
it may not always be necessary to do so. We treat the version
number as a bit array, with the versions of all the code items
representing a two dimensional bit array. DHV uses bit slicing
to quickly zero in on the out of date code segment, resulting in
fewer bits transmitted in the network.
Statelessness: Keeping state in the network, particularly with
mobility, is not scalable. DHV messages do not contain any
state and usually small in size. Preference of a large message
over multiple small messages: To reduce energy consumption,
it is better to transmit as much information possible in a single
maximum length message rather than transmit multiple small
messages. Sensor nodes turn off the radio when they are idle to
conserve energy. Radio start-up and turn-off times (300
microseconds) are much longer than the time used to transmit
one byte (30 microseconds). A long packet may affect the
collision rate and packet loss. However, that effect only
becomes noticeable under bursty data traffic conditions. [3]
This idea seems quite attractive at first. However, it has
several shortcomings. This work points to these shortcomings
and proposes methods to overcome them. Our description is
based mostly on TESLA, although the improvements apply to
the other schemes as well. We sketch some of these points:
1. In TESLA the receiver has to buffer packets, until the
sender discloses the corresponding key, and until the receiver
authenticates the packets. This may delay delivering the
information to the application, may cause storage problems,
and also generates vulnerability to denial-of-service (DoS)
attacks on the receiver (by flooding it with bogus packets). We
propose a method that allows receivers to authenticate most
packets immediately upon arrival, thus reducing the need for
buffering at the receiver side and in particular reduces the
susceptibility to this type of DoS attacks. This improvement
comes at the price of one extra hash per packet, plus some
buffering at the sender side. We believe that buffering at the
sender side is often more reasonable and acceptable than
buffering at the receiver side. In particular, it is not susceptible
to this type of DoS attacks. We also propose other methods to
alleviate this type of DoS attacks. These methods work even
when the receiver buffers packets as in TESLA.


2. When operating in an environment with heterogeneous


network delay times for different receivers, TESLA
authenticates each packet using multiple keys, where the
different keys have different disclosure delay times. This
results in larger overhead, both in processing time and in
bandwidth. We propose a method for achieving the same
functionality (i.e., different receivers can authenticate the
packets at different delays) with a more moderate increase in
the overhead per packet.

3. In TESLA the sender needs to perform authenticated time


synchronization individually with each receiver. This may not
scale well, especially in cases where many receivers wish to
join the multicast group and synchronize with the sender at the
same time. This is so, since each synchronization involves a
costly public-key operation. We propose a method that uses



only a single public-key operation per time-unit, regardless of
the number of time synchronizations performed during this
time unit. This reduces the cost of synchronizing with a
receiver to practically the cost of setting up a simple,
unauthenticated connection.
We also explore time synchronization issues in greater depth
and describe direct and indirect time synchronization. For the
former method, the receiver synchronizes its time directly with
the sender, in the latter method both the sender and receiver
synchronize their time with a time synchronization server. For
both cases, we give a detailed analysis on how to choose the
key disclosure delay, a crucial parameter for TESLA.
TESLA assumes that all members have joined the group and
have synchronized with the sender before any transmission
starts. In reality, receivers may wish to join after the
transmission has started; furthermore, receivers may wish to
receive the transmission immediately, and perform the time
synchronization only later. We propose methods that enable
both functionalities. That is, our methods allow a receiver to
join in on the fly to an ongoing session; they also allow
receivers to synchronize at a later time and authenticate packets
only then. [4]

III. PROPOSED SYSTEM
A. Initialization
Compared with the traditional approaches, elliptic curve cryptography (ECC) is a better approach to public-key cryptography in terms of key size, computational efficiency, and communication efficiency. However, while ECC is feasible on resource-limited sensor motes, heavily involving ECC-based authentication is still not practical. SeDrip combines the ECC public-key algorithm and a Merkle hash tree to avoid frequent public-key operations and achieve strong robustness against various malicious attacks. Also, SeDrip inherits robustness to packet loss from the underlying Trickle algorithm, because Trickle uses periodic retransmissions to ensure eventual delivery of the message to every node in the network.
SeDrip consists of three phases: system initialization, packet pre-processing, and packet verification. The system initialization phase is carried out before network deployment. In this phase, the base station creates its public and private keys and loads the public parameters on each sensor node. Then, before disseminating data, the base station executes the packet pre-processing phase, in which packets and their corresponding Merkle hash tree are constructed from data items. Finally, in the packet verification phase, a node verifies each received packet. If the result is positive, it updates the data according to the received packet.
In SeDrip, we extend the 3-tuple (key, version, data) of Drip into a 4-tuple (round, key, version, data) to represent a data item, where round refers to which round of data dissemination this data item belongs to (the higher the round, the newer the data dissemination), and the other three elements bear the same meaning as in existing protocols. As in the Drip implementation, key and version are 2 bytes and 4 bytes long, respectively. The round field can be as short as 4 bits because we can allow a wrap-around in the number space to take place. This is possible based on two characteristics of the dissemination process. First, the configuration of a WSN is not expected to change frequently and hence the dissemination rate would be low. Second, only a small amount of data is disseminated in each round, so the time required to complete one round of dissemination should be very short. As a result, each sensor node would not experience any ambiguity in determining which round number is the latest, even if there is a wrap-around in the round number.
B. Packet Pre-processing Phase
After system initialization, if the base station wants to
disseminate n data items: di = {round, keyi, versioni, datai}, i =
1, 2, . . ., n, it uses the Merkle hash tree method to construct the
packets of the respective data as follows. Merkle hash tree is a
tree of hashes, where the leaves in the tree are hashes of the
authentic packets Pi, i = 1, 2, . . ., n. Here the hash function is
calculated over packet header and data item di(= {round, keyi,
versioni, datai}). Nodes further up in the tree are the hashes of
their respective children. More exactly, the base station
computes ei = H(Pi)(i = 1, 2, 3, 4), and builds a binary tree by
computing internal nodes from adjacent children nodes. Each
internal node of the tree is the hash value of the two children
nodes. Subsequently, the base station constructs n packets
based on this Merkle hash tree. For packet Pi, it consists of the
packet header, the data item di and the values in its
authentication path (i.e., the siblings of the nodes in the path
from Pi to the root) in the Merkle hash tree.
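A minimal sketch of this pre-processing step is given below, assuming four data items as in the example, SHA-256 standing in for the hash H, and a placeholder in place of the base station's ECDSA signature over the root; the real Se-Drip packet header layout is not reproduced.

import hashlib, json

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_packets(round_no, items):
    """items: list of (key, version, data); the number of items must be a power of two."""
    payloads = [json.dumps({"round": round_no, "key": k, "version": v, "data": d}).encode()
                for (k, v, d) in items]
    # Leaves are hashes of the authentic packets; each parent hashes its two children.
    levels = [[H(p) for p in payloads]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    root = levels[-1][0]
    packets = []
    for i, payload in enumerate(payloads):
        # Authentication path: sibling hashes on the way from leaf i to the root.
        path, idx = [], i
        for level in levels[:-1]:
            path.append(level[idx ^ 1])
            idx //= 2
        packets.append({"payload": payload, "index": i, "auth_path": path})
    # The base station signs only (round, root); a real deployment uses ECDSA here.
    signature_packet = {"round": round_no, "root": root}
    return signature_packet, packets

sig_packet, packets = make_packets(1, [("k1", 1, "on"), ("k2", 1, "off"),
                                       ("k3", 2, "5s"), ("k4", 1, "0")])
print(len(packets[0]["auth_path"]))   # 2 sibling hashes for a 4-leaf tree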


C. Packet Verification Phase
Upon receiving a packet (from any one-hop neighbouring node or the base station), each sensor node, say Si, first checks the key field of the packet. If this is a signature packet P0, node Si runs the following operations. If this is a new round (i.e., the round included in this packet is newer than that of its stored <round, root>), node Si uses the public key PK of the base station to run an ECDSA verify operation to authenticate the received signature packet. If this verification passes, node Si accepts the root of the Merkle hash tree and then updates its stored <round, root> with the corresponding values in packet P0; otherwise, node Si simply drops the signature packet P0. If node Si has recently heard an identical signature packet (i.e., the round included in this packet is the same as that of its stored <round, root>), it increases the broadcast interval of this packet through the Trickle algorithm, thereby limiting energy costs when the network is consistent. If this is an old round (i.e., the round included in this packet is older than that of its stored <round, root>; that is, the signature packet distributed by its one-hop neighbouring node is old), node Si broadcasts its stored signature packet.
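Verification of an ordinary data packet then reduces to recomputing the Merkle root from the packet hash and its authentication path and comparing it with the stored <round, root>. The sketch below illustrates only that hash-path check; the field names are hypothetical and the ECDSA check of the signature packet itself is omitted.

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_packet(payload: bytes, index: int, auth_path, stored_root: bytes) -> bool:
    """Recompute the Merkle root from a received packet and its authentication path."""
    node = H(payload)
    for sibling in auth_path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == stored_root   # accept and apply the update only if the roots match

# Tiny two-packet example: the stored root was accepted earlier from a verified
# signature packet (the ECDSA verification step is not shown here).
p0, p1 = b"round=1;key=k1;data=on", b"round=1;key=k2;data=off"
stored_root = H(H(p0) + H(p1))
print(verify_packet(p0, 0, [H(p1)], stored_root))        # True
print(verify_packet(b"forged", 0, [H(p1)], stored_root))  # False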

IV. RESULTS
The concept of this paper has been implemented and the different results are shown below. The proposed scheme is implemented in NS 2.34 on Linux Fedora 10. The proposed concepts show efficient results and have been tested on different datasets. The figures below show the real-time results compared.



Fig. 1 Packet Delivery Fraction vs. Pause Time
Fig. 2 Packet Delivery Fraction vs. Pause Time
Fig. 3 Average End-to-End Delay vs. Pause Time
Fig. 4 Average End-to-End Delay vs. Pause Time
Fig. 5 Routing Overhead vs. Pause Time
Fig. 6 Routing Overhead vs. Pause Time

V. CONCLUSION
In this paper, we have identified the security vulnerabilities in data discovery and dissemination for WSNs. We then developed a lightweight protocol named Se-Drip to allow efficient authentication of the disseminated data items by employing an efficient Merkle tree algorithm. Se-Drip was designed to work within the computation, memory and energy limits of inexpensive sensor motes. In addition to analyzing the security of Se-Drip, this paper has reported the evaluation results of Se-Drip on an experimental network of resource-limited sensor nodes, which demonstrate that Se-Drip is efficient and feasible in practice.

REFERENCES
[1] D. He, C. Chen, S. Chan, and J. Bu, "DiCode: DoS-resistant and distributed code dissemination in wireless sensor networks," IEEE Trans. Wireless Commun., vol. 11, no. 5, pp. 1946-1956, May 2012.
[2] K. Lin and P. Levis, "Data discovery and dissemination with DIP," in Proc. 2008 ACM/IEEE IPSN, pp. 433-444.
[3] T. Dang, N. Bulusu, W. Feng, and S. Park, "DHV: a code consistency maintenance protocol for multi-hop wireless sensor networks," in Proc. 2009 EWSN, pp. 327-342.
[4] A. Perrig, R. Canetti, D. Song, and J. Tygar, "Efficient and secure source authentication for multicast," in Proc. 2001 NDSS, pp. 35-46.
[5] P. Levis, N. Patel, D. Culler, and S. Shenker, "Trickle: a self-regulating algorithm for code maintenance and propagation in wireless sensor networks," in Proc. 2004 NSDI, pp. 15-28.
[6] S. Hyun, P. Ning, A. Liu, and W. Du, "Seluge: secure and DoS-resistant code dissemination in wireless sensor networks," in Proc. 2008 ACM/IEEE IPSN, pp. 445-456.
[7] A. Liu and P. Ning, "TinyECC: a configurable library for elliptic curve cryptography in wireless sensor networks," in Proc. 2008 ACM/IEEE IPSN, pp. 245-256.
[8] J. Lee, K. Kapitanova, and S. Son, "The price of security in wireless sensor networks," Comput. Netw., vol. 54, no. 17, pp. 2967-2978, Dec. 2010.


LOW POWER PULSE TRIGGERED FLIP-FLOP WITH CONDITIONAL PULSE ENHANCEMENT SCHEME
S. Sruthi 1, V. Viswanath 2
1 P.G Student in VLSI, Department of E.C.E, SIETK, Tirupati.
2 Assistant Professor, Department of E.C.E, SIETK, Tirupati.
E-Mail: ssruthiece@gmail.com, raj_vlsi@yahoo.co.in

Abstract: In this paper, a novel low-power pulse-triggered flip-flop (FF) design is presented. First, the pulse generation control logic, an AND function, is removed from the critical path to facilitate a faster discharge operation. A simple two-transistor AND gate design is used to reduce the circuit complexity. Second, a conditional pulse-enhancement technique is devised to speed up the discharge along the critical path only when needed. As a result, transistor sizes in the delay inverter and the pulse-generation circuit can be reduced for power saving. Various post-layout simulation results based on UMC CMOS 90-nm technology reveal that the proposed design features the best power-delay-product performance among the seven FF designs under comparison. Its maximum power saving against rival designs is up to 38.4%. Compared with the conventional transmission-gate-based FF design, the average leakage power consumption is also reduced by a factor of 3.52.
Index Terms: Flip-flop, Low power, Pulse-triggered.

1. INTRODUCTION

Flip-flops (FFs) are the basic storage elements used extensively in all kinds of digital designs. In particular, digital designs nowadays often adopt intensive pipelining techniques and employ many FF-rich modules. It is also estimated that the power consumption of the clock system, which consists of clock distribution networks and storage elements, is as high as 20%-45% of the total system power [1].
The pulse-triggered FF (P-FF) has been considered a popular alternative to the conventional master-slave-based FF in applications of high-speed operation [2]-[5]. Besides the speed advantage, its circuit simplicity is also beneficial to lowering the power consumption of the clock tree system. A P-FF consists of a pulse generator for generating strobe signals and a latch for data storage. Since the triggering pulses generated on the transition edges of the clock signal are very narrow in pulse width, the latch acts like an edge-triggered FF. The circuit complexity of a P-FF is simplified since only one latch, as opposed to the two used in the conventional master-slave configuration, is needed. P-FFs also allow time borrowing across clock-cycle boundaries and feature a zero or even negative setup time. P-FFs are thus less sensitive to clock jitter. Despite these advantages, pulse generation circuitry

requires delicate pulse-width control in the face of process variation and the configuration of the pulse clock distribution network [4]. Depending on the method of pulse generation, P-FF designs can be classified as implicit or explicit [6]. In an implicit-type P-FF, the pulse generator is a built-in logic of the latch design, and no explicit pulse signals are generated. In an explicit-type P-FF, the designs of the pulse generator and the latch are separate. Implicit pulse generation is often considered to be more power efficient than explicit pulse generation. This is because the former merely controls the discharging path while the latter needs to physically generate a pulse train. Implicit-type designs, however, face a lengthened discharging path in the latch design, which leads to inferior timing characteristics. The situation deteriorates further when low-power techniques such as conditional capture, conditional precharge, conditional discharge, or conditional data mapping are applied [7]-[10]. As a consequence, the transistors of the pulse generation logic are often enlarged to assure that the generated pulses are sufficiently wide to trigger the data capturing of the latch. Explicit-type P-FF designs face a similar pulse-width control issue, but the problem is further complicated in the presence of a large capacitive load, e.g., when one pulse generator is shared among several latches. In this paper, we present a novel low-power implicit-type P-FF design featuring a conditional pulse-enhancement scheme. Three additional transistors are employed to support this feature. In spite of a slight increase in total transistor count, the transistors of the pulse generation logic benefit from significant size reductions and the overall layout area is even slightly reduced. This gives rise to competitive power and power-delay-product performances against other P-FF designs.
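Purely as a behavioural illustration of the implicit pulse-triggered principle (and not of the transistor-level circuits analysed below), the sketch models the transparent window as the AND of the clock with a delayed, inverted copy of itself, so the latch samples D only in a narrow window after each rising edge. The window width and time step are arbitrary.

def pulse_window(clk_samples, delay=3):
    """1 only for `delay` samples after each rising clock edge:
    pulse = clk AND (inverted clk delayed by `delay`)."""
    delayed_inv = [1] * delay + [1 - c for c in clk_samples[:-delay]]
    return [c & d for c, d in zip(clk_samples, delayed_inv)]

def pff(clk_samples, d_samples, delay=3):
    """Behavioural pulse-triggered FF: the latch is transparent only inside the pulse."""
    q, out = 0, []
    for pulse, d in zip(pulse_window(clk_samples, delay), d_samples):
        if pulse:            # narrow window -> behaves like an edge-triggered FF
            q = d
        out.append(q)
    return out

clk = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
d   = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
print(pff(clk, d))   # Q changes only in the short window after each rising clock edge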


Fig. 1: Conventional pulse-triggered FF designs: (a) ip-DCO, (b) MHLLF, (c) SCCER.

2. PROPOSED IMPLICIT-TYPE P-FF DESIGN


WITH PULSE CONTROL SCHEME
A. Conventional Implicit-Type P-FF Designs
Some conventional implicit-type P-FF designs, which are used as the reference designs in later performance comparisons, are first reviewed. A state-of-the-art P-FF design, named ip-DCO, is given in Fig. 1(a) [6]. It contains an AND-logic-based pulse generator and a semi-dynamic structured latch design. Inverters I5 and I6 are used to latch data, and inverters I7 and I8 are used to hold the internal node. The pulse generator takes complementary and delay-skewed clock signals to generate a transparent window equal in size to the delay of inverters I1-I3. Two practical problems exist in this design. First, during the rising edge, nMOS transistors N2 and N3 are turned on. If data remains high, the internal node will be discharged on every rising edge of the clock, which leads to large switching power. The other problem is that this node controls two larger MOS transistors (P2 and N5). The large capacitive load on the node causes speed and power performance degradation. Fig. 1(b) shows an improved P-FF design, named MHLLF, employing a static latch structure presented in [11]. Node X is no longer precharged periodically by the clock signal. A weak pull-up transistor P1 controlled by the FF output signal Q is used to maintain the node level at high when Q is zero. This design eliminates the unnecessary discharging problem at node X. However, it encounters a longer data-to-Q (D-to-Q) delay during "0" to "1" transitions because node X is not pre-discharged. Larger transistors N3 and N4 are required to enhance the discharging capability. Another drawback of this design is that node X becomes floating when output Q and input Data both equal "1". Extra DC power emerges if node X drifts from an intact "1". Fig. 1(c) shows a refined
low-power P-FF design, named SCCER, using a conditional discharge technique [9], [12]. In this design, the keeper logic (back-to-back inverters I7 and I8 in Fig. 1(a)) is replaced by a weak pull-up transistor P1 in conjunction with an inverter I2 to reduce the load capacitance of the node [12]. The discharge path contains nMOS transistors N2 and N1 connected in series. In order to eliminate superfluous switching at the node, an extra nMOS transistor N3 is employed. Since N3 is controlled by Q_fdbk, no discharge occurs if input data remains high. The worst-case timing of this design occurs when input data is "1" and the node is discharged through four transistors in series, i.e., N1 through N4, while combating the pull-up transistor P1. A powerful pull-down circuitry is thus needed to ensure the node can be properly discharged. This implies wider N1 and N2 transistors and a longer delay from the delay inverter I1 to widen the discharge pulse width.
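As a back-of-the-envelope relation implied by the descriptions above (not an equation given in the paper), the transparent window of these implicit designs is set by the delay chain of the pulse generator; for ip-DCO, for example,

\[ t_{pulse} \approx t_d(I1) + t_d(I2) + t_d(I3), \]

which is why widening the discharge pulse in SCCER requires a longer delay from the delay inverter I1 and hence larger pulse-generation transistors.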
B. Proposed P-FF Design
The proposed design, as shown in Fig. 2, adopts two measures to overcome the problems associated with existing P-FF designs. The first one is reducing the number of nMOS transistors stacked in the discharging path. The second one is supporting a mechanism to conditionally enhance the pull-down strength when input data is "1". Referring to Fig. 2, the upper part of the latch design is similar to the one employed in the SCCER design [12]. As opposed to the transistor stacking design in Fig. 1(a) and (c), transistor N2 is removed from the discharging path. Transistor N2, in conjunction with an additional transistor N3, forms a two-input pass-transistor-logic (PTL)-based AND gate [13], [14] to control the discharge of transistor N1. Since the two inputs to the AND logic are mostly complementary (except during the transition edges of the clock), the output node is kept at zero most of the time. When both input signals equal "0" (during the falling edges of the clock), temporary floating at the node is basically harmless. At the rising


edges of the clock, both transistors N2 and N3 are turned on and collaborate to pass a weak logic high to node Z, which then turns on transistor N1 for a time span defined by the delay inverter I1. The switching power at node Z can be reduced due to a diminished voltage swing. Unlike the MHLLF design [11], where the discharge control signal is driven by a single transistor, parallel conduction of the two nMOS transistors (N2 and N3) speeds up the operation of the pulse generation.

Fig. 2: Schematic of the proposed P-FF design.

With this design measure, the number of stacked transistors along the discharging path is reduced and the sizes of transistors N1-N5 can also be reduced. In this design, the longest discharging path is formed when input data is "1" while the Qbar output is "1". To enhance the discharging under this condition, transistor P3 is added. Transistor P3 is normally turned off because the node is pulled high most of the time. It steps in when the node is discharged to |Vtp| below VDD. This provides an additional boost to node Z (from VDD - VTH to VDD). The generated pulse is taller, which enhances the pull-down strength of transistor N1. After the rising edge of the clock, the delay inverter I1 drives node Z back to zero through transistor N3 to shut down the discharging path. The voltage level of the node rises and eventually turns off transistor P3. With the intervention of P3, the width of the generated discharging pulse is stretched out. This means that, to create a pulse with sufficient width for correct data capturing, a bulky delay inverter design, which constitutes most of the power consumption in the pulse generation logic, is no longer needed. It should be noted that this conditional pulse-enhancement technique takes effect only when the FF output is subject to a data change from 0 to 1. This leads to better power performance than schemes using an indiscriminate pulsewidth enhancement approach. Another benefit of this conditional pulse-enhancement scheme is the reduction in leakage power due to shrunken transistors in the critical discharging path and in the delay inverter.
3. SIMULATION RESULTS
To demonstrate the superiority of the proposed design, post-layout simulations on various P-FF designs were conducted to obtain their performance figures. These designs include the three P-FF designs shown in Fig. 1 (ip-DCO [6], MHLLF [11], SCCER [12]), another P-FF design called the conditional capture FF (CCFF) [7], and two other non-pulse-triggered FF designs, i.e., a sense-amplifier-based FF (SAFF) [2] and a conventional transmission gate-based FF (TGFF). The target


technology is the UMC 90-nm CMOS process. The operating condition used in the simulations is 500 MHz/1.0 V. Since the pulsewidth design is crucial to the correctness of data capturing as well as to the power consumption, the pulse generator logic in all designs is first sized to function properly across process variation. All designs are further optimized subject to the tradeoff between power and D-to-Q delay, i.e., minimizing the product of the two terms. Fig. 3 shows the simulation setup model. To mimic the signal rise and fall time delays, input signals are generated through buffers. Considering the loading effect of the FF on the previous stage and the clock tree, the power consumption of the clock and data buffers is also included. The output of the FF is loaded with a 20-fF capacitor. An extra capacitance of 3 fF is also placed after the clock buffer. To illustrate the merits of the presented work, Fig. 4 shows the simulation waveforms of the proposed P-FF design against the MHLLF design.

Fig. 3: Simulation setup model.

In the proposed design, pulses at node Z are generated on every rising edge of the clock. Due to the extra voltage boost from transistor P3, the pulses generated to capture input data "1" are significantly enhanced in height and width compared with the pulses generated for capturing data "0" (0.84 V versus 0.65 V in height and 141 ps versus 84 ps in width). In the MHLLF design, there is no such differentiation in the pulse generation. In addition, no signal degradation occurs in the internal node of the proposed design. In contrast, the internal node in the MHLLF design is degraded when Q equals "0" and data equals "1". Node Q thus deviates slightly from an intact value "0" and causes DC power consumption at the output stage. From Fig. 4, the height of its pulses at node Z is around 0.68 V. Furthermore, the node is floating when the clock equals "0" and its value drifts gradually. To elaborate the power consumption behavior of these FF designs, five test patterns, each exhibiting a different data switching probability, are applied, including deterministic patterns with 0% (all-zero or all-one), 25%, 50%, and 100% data transition probabilities. The power consumption results are summarized in Table I.


Due to a shorter discharging path and the employment of a conditional pulse-enhancement scheme, the power consumption of the proposed design is the lowest for all test patterns. Taking the test pattern with 50% data transition probability as an example, the power saving of the proposed design ranges from 38.4% (against the ip-DCO design) to 5.6% (against the TGFF design). The savings are even more pronounced when operating at lower data switching activities, where the power consumption of the pulse generation circuitry dominates. Because of a redundant switching power consumption problem at an internal node, the ip-DCO design has the largest power consumption when the data switching activity is 0% (all "1"). Fig. 5 shows the curves of the power-delay product (delay from D to Q) versus setup time (for 50% data switching activity). The values of the proposed design are the smallest of all designs when the setup times are greater than 60 ps. Its minimum value occurs when the setup time is 53.9 ps, and the corresponding D-to-Q delay is 116.9 ps. The CCFF design is ranked in second place in this evaluation, with its optimal setup time at 67 ps.
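For reference, the quantity being minimized here is the D-to-Q power-delay product (a standard definition consistent with the optimization criterion stated earlier, not a formula reproduced from the paper):

\[ PDP_{D\text{-}to\text{-}Q} = P_{avg} \times t_{D\text{-}to\text{-}Q}, \]

so the optimal setup time is simply the one at which this product reaches its minimum (53.9 ps for the proposed design).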


The setup time of the conventional TGFF design is always positive. Comparing the optimal D-to-Q PDP performance of each design under different data switching activities, the proposed design takes the lead in all types of data switching activity. The SCCER and the CCFF designs almost tie for second place. Fig. 6(b) shows the D-to-Q PDP performance of these designs at different process corners under the condition of 50% data switching activity. The performance edge of the proposed design is maintained as well. Notably, the MHLLF design has the worst D-to-Q PDP performance, especially at the SS process corner, due to a large D-to-Q delay and the poor driving capability of its pulse generation circuit.
Table I also summarizes some important performance indexes of these P-FF designs. These include transistor count, layout area, setup time, hold time, minimum D-to-Q delay, optimal PDP, and clock tree power. Although the transistor count of the proposed design is not the lowest one, its actual layout area is smaller than all but the TGFF design. The MHLLF design exhibits the largest layout area because of an oversized pulse generation circuit. Following the measurement methods in [15], curves of D-to-Q delay versus setup time and D-to-Q delay versus hold time are simulated first. Setup time is defined as the point on the curve where the D-to-Q delay is minimum. Hold time is measured at the point where the slope of the curve equals -1. The proposed design features the shortest minimum D-to-Q delay. Its hold time is longer than in other designs because the transistor (P3) for the pulse enhancement requires prolonged availability of the data input. The power drawn from the clock tree is calculated to evaluate the impact of FF loading on the clock jitter. Although the proposed FF design requires the clock signal to be connected to the drain of transistor N2, the drawn current is not significant. Due to the complementary switching behavior of N2 and N3, there exists no conducting path from the clock entry, and the clock tree is only liable for charging/discharging node Z. The optimal PDP value of the proposed design is also significantly better than that of the other designs. The simulation results show that the clock tree power of the proposed design is close to those of the two leading designs (MHLLF and CCFF) and outperforms ip-DCO, SCCER, TGFF, and SAFF, where the clock signals connect to transistor gates only. The setup time is measured as the point where the minimum PDP value occurs. The setup times of these designs vary from 67 to 47 ps. Note that although the optimal setup time of the proposed design is 53.9 ps, its PDP value is the lowest of all designs for any setup time greater than 60 ps. The D-to-Q

delay and the hold time are calculated subject to the


optimal setup time. The D-to-Q delay of the
proposed design is second to the SCCER design
only and outperforms the conventional TGFF
design by a margin of 44.7%. The hold time
requirement seems to be slightly larger due to a
negative setup time. This number reduces as the
setup time moves toward a positive value. Table II
gives the leakage power consumption comparison
of these FF designs in a standby mode (clock signal
is gated). For a fair comparison, we assume the
output Q as 0 when input data is 1 to exclude
the extra power consumption coming from the
discharging of the internal node. For different clock
and input data combinations, the proposed design
enjoys the minimum leakage power consumption,
which is mainly attributed to the reduction in the
transistor sizes along the discharging path. The
SAFF design experiences the worst leakage power
consumption when clock equals 0 because its
two precharge pMOS transistors are always turned
on. Compared to the conventional TGFF design,
the average leakage power is reduced by a factor of
3.52. Finally, to show the robustness of the
proposed design against the process variations,
Table III compiles the changes in the width and the
height of the generated discharge pulses under
different process corners. Although significant fluctuations in pulsewidth and height are observed,
the unique conditional pulse-enhancement scheme
works well in all cases.

CONCLUSION
In this paper, we devise a novel low-power pulse-triggered FF design by employing two new design measures. The first one successfully reduces the number of transistors stacked along the discharging path by incorporating PTL-based AND logic. The second one supports conditional enhancement of the height and width of the discharging pulse so that the size of the transistors in the pulse generation circuit can be kept minimal. Simulation results indicate that the proposed design excels rival designs in performance indexes such as power, D-to-Q delay, and PDP. Coupled with these design merits is a longer hold-time requirement inherent in pulse-triggered FF designs. However, hold-time violations are much easier to fix in circuit design compared with failures in speed or power.


REFERENCES
[1] H. Kawaguchi and T. Sakurai, A reduced clock-swing flip-flop (RCSFF) for 63% power reduction, IEEE J. Solid-State Circuits, vol. 33, no. 5, pp. 807-811, May 1998.
[2] A. G. M. Strollo, D. De Caro, E. Napoli, and N. Petra, A novel high speed sense-amplifier-based flip-flop, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 11, pp. 1266-1274, Nov. 2005.
[3] H. Partovi, R. Burd, U. Salim, F. Weber, L. DiGregorio, and D. Draper, Flow-through latch and edge-triggered flip-flop hybrid elements, in IEEE ISSCC Dig. Tech. Papers, 1996, pp. 138-139.
[4] F. Klass, C. Amir, A. Das, K. Aingaran, C. Truong, R. Wang, A. Mehta, R. Heald, and G. Yee, A new family of semi-dynamic and dynamic flip-flops with embedded logic for high-performance processors, IEEE J. Solid-State Circuits, vol. 34, no. 5, pp. 712-716, May 1999.
[5] S. D. Naffziger, G. Colon-Bonet, T. Fischer, R. Riedlinger, T. J. Sullivan, and T. Grutkowski, The implementation of the Itanium 2 microprocessor, IEEE J. Solid-State Circuits, vol. 37, no. 11, pp. 1448-1460, Nov. 2002.
[6] J. Tschanz, S. Narendra, Z. Chen, S. Borkar, M. Sachdev, and V. De, Comparative delay and energy of single edge-triggered and dual edge-triggered pulsed flip-flops for high-performance microprocessors, in Proc. ISLPED, 2001, pp. 207-212.
[7] B. Kong, S. Kim, and Y. Jun, Conditional-capture flip-flop for statistical power reduction, IEEE J. Solid-State Circuits, vol. 36, no. 8, pp. 1263-1271, Aug. 2001.
[8] N. Nedovic, M. Aleksic, and V. G. Oklobdzija, Conditional precharge techniques for power-efficient dual-edge clocking, in Proc. Int. Symp. Low-Power Electron. Design, Monterey, CA, Aug. 12-14, 2002, pp. 56-59.
[9] P. Zhao, T. Darwish, and M. Bayoumi, High-performance and low power conditional discharge flip-flop, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 12, no. 5, pp. 477-484, May 2004.
[10] C. K. Teh, M. Hamada, T. Fujita, H. Hara, N. Ikumi, and Y. Oowaki, Conditional data mapping flip-flops for low-power and high-performance systems, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, pp. 1379-1383, Dec. 2006.
[11] S. H. Rasouli, A. Khademzadeh, A. Afzali-Kusha, and M. Nourani, Low-power single- and double-edge-triggered flip-flops for high-speed applications, Proc. Inst. Electr. Eng.-Circuits Devices Syst., vol. 152, no. 2, pp. 118-122, Apr. 2005.
[12] H. Mahmoodi, V. Tirumalashetty, M. Cooke, and K. Roy, Ultra low power clocking scheme using energy recovery and clock gating, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 17, pp. 33-44, Jan. 2009.
[13] P. Zhao, J. McNeely, W. Kaung, N. Wang, and Z. Wang, Design of sequential elements for low power clocking system, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., to be published.
[14] Y.-H. Shu, S. Tenqchen, M.-C. Sun, and W.-S. Feng, XNOR-based double-edge-triggered flip-flop for two-phase pipelines, IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 53, no. 2, pp. 138-142, Feb. 2006.
[15] V. G. Oklobdzija, Clocking and clocked storage elements in a multi-giga-hertz environment, IBM J. Res. Devel., vol. 47, pp. 567-584, Sep. 2003.


Hierarchical Fuzzy Rule Based Classification Systems with Genetic Rule Selection to Filter Unwanted Messages
Shaik Masthan Baba (1), Shaik Aslam (2), K. Naresh Babu (3)
(1), (2), (3) Computer Science and Engineering
(1) masthan201@gmail.com, (2) aslambasha592@gmail.com, (3) naresh.kosuri@gmail.com

Abstract: Social networking sites that facilitate communication of information between users allow users to post messages as an important function. Unnecessary posts can spam a user's wall, the page where posts are displayed, thus preventing the user from viewing relevant messages. The aim of this paper is to improve the performance of fuzzy rule based classification systems on imbalanced domains by increasing the granularity of the fuzzy partitions in the boundary areas between the classes, in order to obtain better separability. We propose the use of a hierarchical fuzzy rule based classification system, together with a neural network learning model, to filter out unwanted messages from Online Social Network (OSN) user walls. The approach is based on the refinement of a simple linguistic fuzzy model by extending the structure of the knowledge base in a hierarchical way and using a genetic rule selection process in order to obtain a compact and accurate model.

Keywords: On-line Social Networks, Classification, Fuzzy rule based classification systems, Imbalanced data-sets, Genetic rule selection

1. INTRODUCTION
Online Social Networks (OSNs) are today one of the most popular interactive media to communicate, share, and disseminate a considerable amount of human life information. Daily and continuous communications imply the exchange of several types of content, including free text, image, audio, and video data. The huge and dynamic character of these data creates the premise for the employment of web content mining strategies aimed at automatically discovering useful information dormant within the data. They are instrumental in providing active support for complex and sophisticated tasks involved in OSN management, such as access control or information filtering. Information filtering has been greatly explored for textual documents and, more recently, for web content [1-3]. Information filtering can therefore be used to give users the ability to automatically control the messages written on their own walls, by filtering out unwanted messages. Indeed, today OSNs provide very little support to prevent unwanted messages on user walls [4-6]. No content-based preferences are supported, and therefore it is not possible to prevent undesired messages. Providing this service is not only a matter of using previously

defined web content mining techniques for a different application; rather, it requires the design of ad hoc classification strategies. The aim of the present work is therefore to propose and experimentally evaluate an automated system, called Filtered Wall (FW), able to filter unwanted messages from OSN user walls [7-9]. We exploit Machine Learning (ML) text categorization techniques to automatically assign to each short text message a set of categories based on its content. The major efforts in building a robust short text classifier (STC) are concentrated in the extraction and selection of a set of characterizing and discriminating features. From the solutions investigated we inherit the learning model and the elicitation procedure for generating pre-classified data. In particular, we base the overall short text classification strategy on Radial Basis Function Networks (RBFN) for their proven capabilities in acting as soft classifiers and in managing noisy data and intrinsically vague classes. We insert the neural model within a hierarchical two-level classification strategy [8-10]. In the first level, the RBFN categorizes short messages as Neutral and Non-neutral; in the second stage, Non-neutral messages are classified, producing gradual estimates of appropriateness to each of the considered categories. Besides classification facilities, the system provides a powerful rule layer exploiting a flexible language to specify Filtering Rules (FRs), by which users can state what contents should not be displayed on their walls. FRs can support a variety of different filtering criteria that can be combined and customized according to the user needs. In addition, the system provides support for user-defined Blacklists (BLs), that is, lists of users that are temporarily prevented from posting any kind of message on a user wall [11-13]. The experiments we have carried out show the effectiveness of the developed filtering techniques. In summary, we propose a system to automatically filter unwanted messages from OSN user walls on the basis of both the message content and the message creator's relationships and characteristics.
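To make the two-level strategy and the filtering-rule layer described above concrete, the following is a minimal sketch assuming pre-trained first-level and second-level classifiers; every class, function, category, and threshold name is illustrative, not part of the described system's actual interface.

# Minimal sketch of the two-level classification plus filtering-rule layer.
# The classifiers are assumed to be pre-trained (e.g., RBFN-based); all names and
# thresholds here are illustrative assumptions.

def classify_message(text, level1, level2):
    """Level 1 decides Neutral vs Non-neutral; level 2 grades Non-neutral messages per category."""
    if level1.predict(text) == "Neutral":
        return {}                          # nothing objectionable detected
    return level2.predict_scores(text)     # e.g., {"Vulgar": 0.8, "Violence": 0.1, ...}

def apply_filtering_rules(scores, rules):
    """A filtering rule blocks the message when its category score exceeds the user's threshold."""
    return any(scores.get(rule["category"], 0.0) >= rule["threshold"] for rule in rules)

# Example usage with stand-in classifiers:
class _Stub:
    def predict(self, text): return "Nonneutral"
    def predict_scores(self, text): return {"Vulgar": 0.9}

rules = [{"category": "Vulgar", "threshold": 0.5}]
scores = classify_message("some wall post", _Stub(), _Stub())
print("blocked" if apply_filtering_rules(scores, rules) else "shown")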

2. LITERATURE REVIEW
The main contribution of this paper is the design of a system providing customizable content-based message filtering for OSNs, based on ML techniques. As we have pointed out in the introduction, to the best of our knowledge we are the first to propose such an application for OSNs. However, our work relates both to the state of the art in content-based filtering and to the field of policy-based personalization for OSNs and, more generally, web content. Therefore, in what follows, we survey the literature in both these fields.
2.1 Content-based filtering:
Information filtering systems are designed to classify a stream of dynamically generated information, dispatched asynchronously by an information producer, and present to the user those items of information that are likely to satisfy his/her requirements [3]. In content-based filtering each user is
assumed to operate independently. As a
result, a content-based filtering system
selects information items based on the
correlation between the content of the items
and the user preferences as opposed to a
collaborative filtering system that chooses
items based on the correlation between
people with similar preferences. Documents
processed in content-based filtering are
mostly textual in nature and this makes
content-based filtering close to text
classification. The activity of filtering can be
modeled, in fact, as a case of single label,
binary classification, partitioning incoming


documents into relevant and non-relevant categories [4]. More complex filtering systems include multi-label text categorization, automatically labeling messages into partial thematic categories.
Content-based filtering is mainly based on
the use of the ML paradigm according to
which a classifier is automatically induced
by learning from a set of pre-classified
examples. A remarkable variety of related
work has recently appeared, which differ for
the adopted feature extraction methods,
model learning, and collection of samples
[5], [6], [7], [8],[9]. The feature extraction
procedure maps text into a compact
representation of its content and is uniformly
applied to training and generalization
phases. The application of content-based
filtering on messages posted on OSN user
walls poses additional challenges given the
short length of these messages other than the
wide range of topics that can be discussed.
Short text classification has so far received little attention in the scientific community. Recent work highlights difficulties in defining robust features, essentially because short texts are concise and contain many misspellings, non-standard terms, and noise. Focusing on the OSN domain, interest
in access control and privacy protection is
quite recent. As far as privacy is concerned,
current work is mainly focusing on privacy-preserving data mining techniques, that is,
protecting information related to the
network, i.e., relationships/nodes, while
performing social network analysis [5].
Works more related to our proposals are
those in the field of access control. In this
field, many different access control models
and related mechanisms have been proposed
so far (e.g., [6,2,10]), which mainly differ on
the expressivity of the access control policy
language and on the way access control is
enforced (e.g., centralized vs. decentralized).
Most of these models express access control

requirements in terms of relationships that


the requestor should have with the resource
owner. We use a similar idea to identify the
users to which a filtering rule applies.
However, the overall goal of our proposal is
completely different, since we mainly deal
with filtering of unwanted contents rather
than with access control. As such, one of the
key ingredients of our system is the
availability of a description for the message
contents to be exploited by the filtering
mechanism as well as by the language to
express filtering rules. In contrast, none of the access control models previously cited exploits the content of the resources to
enforce access control. We believe that this
is a fundamental difference. Moreover, the notion of blacklists and their management is not considered by any of these access control models.
2.2 Policy-based personalization of OSN
contents
Recently, there have been some proposals
exploiting classification mechanisms for
personalizing access in OSNs. For instance,
in [11] a classification method has been
proposed to categorize short text messages
in order to avoid overwhelming users of
microblogging services by raw data. The
system described in [11] focuses on Twitter
and associates a set of categories with each
tweet describing its content. The user can
then view only certain types of tweets based
on his/her interests. In contrast, Golbeck and
Kuter [12] propose an application, called
FilmTrust, that exploits OSN trust
relationships and provenance information to
personalize access to the website. However,
such systems do not provide a filtering policy layer by which the user can exploit the result of the classification process to decide how, and to what extent, to filter out unwanted information. In contrast, our
filtering policy language allows the setting


of FRs according to a variety of criteria that consider not only the results of the classification process but also the
relationships of the wall owner with other
OSN users as well as information on the
user profile. Moreover, our system is
complemented by a flexible mechanism for
BL management that provides a further
opportunity of customization to the filtering
procedure. The only social networking
service we are aware of providing filtering
abilities to its users is MyWOT, a social
networking service which gives its
subscribers the ability to: 1) rate resources
with respect to four criteria: trustworthiness,
vendor reliability, privacy, and child safety;
2) specify preferences determining whether
the browser should block access to a given
resource, or should simply return a warning
message on the basis of the specified rating.
Despite the existence of some similarities,
the approach adopted by MyWOT is quite
different from ours. In particular, it supports
filtering criteria which are far less flexible
than the ones of Filtered Wall since they are
only based on the four above-mentioned
criteria. Moreover, no automatic classification mechanism is provided to the
end user. Our work is also inspired by the
many access control models and related
policy
languages
and
enforcement
mechanisms that have been proposed so far
for OSNs, since filtering shares several
similarities with access control. Actually,
content filtering can be considered as an
extension of access control, since it can be
used both to protect objects from
unauthorized subjects, and subjects from
inappropriate objects. In the field of OSNs,
the majority of access control models
proposed so far enforce topology-based
access control, according to which access
control requirements are expressed in terms
of relationships that the requester should
have with the resource owner. We use a
similar idea to identify the users to which a

FR applies. However, our filtering policy


language extends the languages proposed for
access control policy specification in OSNs
to cope with the extended requirements of
the filtering domain. Indeed, since we are
dealing with filtering of unwanted contents
rather than with access control, one of the
key ingredients of our system is the
availability of a description for the message
contents to be exploited by the filtering
mechanism. In contrast, none of the access control models previously cited exploits the content of the resources to enforce access control. Moreover, the notion of BLs and their management is not considered by any of the above-mentioned access control models.
3. ANALYSIS OF PROBLEM:
The use of effective and appropriate methods in facilitating projects enhances their effectiveness and efficiency. The method will be applied following the system analysis and design approach, in which an existing system is studied to proffer better options for solving existing problems. Indeed, today OSNs provide very little support to prevent unwanted messages on user walls. For example, Facebook allows users to state who is allowed to insert messages in their walls (i.e., friends, friends of friends, or defined groups of friends). However, no content-based preferences are supported, and therefore it is not possible to prevent undesired messages, such as political or vulgar ones, no matter who posts them. Providing this service is not only a matter of using previously defined web content mining techniques for a different application; rather, it requires the design of ad hoc classification strategies. This is because wall messages are constituted by short text


for which traditional classification methods


have serious limitations since short texts do
not provide sufficient word occurrences.
4. IMBALANCED DATA-SETS IN CLASSIFICATION
In this section, we will first introduce the
problem of imbalanced data-sets. Then, we
will describe the preprocessing technique
that we have applied in order to deal with
the imbalanced data-sets: the SMOTE
algorithm [7]. Finally, we will present the
evaluation metrics for this kind of
classification problem.
4.1. The problem of imbalanced data-sets
Learning from imbalanced data is an important topic that has recently appeared in the machine learning community. When dealing with imbalanced data-sets, one or more classes might be represented by a large number of examples whereas the others are represented by only a few. We focus on binary-class imbalanced data-sets, where there is only one positive and one negative class. We consider the positive class to be the one with the lowest number of examples and the negative class the one with the highest number of examples. Furthermore, in this work we use the imbalance ratio (IR), defined as the ratio of the number of instances of the majority class to the number of instances of the minority class, to organize the different data-sets according to their IR. The problem of imbalanced data-sets is extremely significant because it is implicit in most real-world applications, such as fraud detection [16], text classification, risk management, or medical applications. In classification, this problem (also named the class imbalance problem) will cause a bias in the training of classifiers and will result in lower sensitivity in detecting the minority class examples. For this reason, a large number of approaches have been previously proposed to deal with the class imbalance problem. These approaches can be categorized into two groups: internal approaches, which create new algorithms or modify existing ones to take the class imbalance problem into consideration [3], and external approaches, which preprocess the data in order to diminish the effect caused by the class imbalance [4,15]. The internal approaches have the disadvantage of being algorithm-specific, whereas external approaches are independent of the classifier used and are, for this reason, more versatile. Furthermore, in our previous work on this topic [18] we analyzed the cooperation of some preprocessing methods with FRBCSs, showing good behaviour for the over-sampling methods, especially in the case of the SMOTE methodology. Accordingly, we will employ in this paper the SMOTE algorithm in order to deal with the problem of imbalanced data-sets. This method is detailed in the next subsection.
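For reference, the imbalance ratio just described can be written (a standard formulation consistent with the definition above, not an equation reproduced from the paper) as

\[ IR = \frac{N_{majority}}{N_{minority}}, \]

so, for instance, a data-set with 900 negative and 100 positive examples has IR = 9.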
4.2. Preprocessing imbalanced data-sets: the SMOTE algorithm
As mentioned before, applying a preprocessing step in order to balance the class distribution is a positive solution to the imbalanced data-set problem [4]. Specifically, in this work we have chosen an over-sampling method which is a reference in this area: the SMOTE algorithm [7]. In this approach the minority class is over-sampled by taking each minority class sample and introducing synthetic examples along the line segments joining any/all of the k minority class nearest neighbours. Depending upon the amount of over-sampling required, neighbours from the k nearest neighbours are randomly chosen. This process is illustrated in Fig. 1, where xi is the selected point, xi1 to xi4 are some selected nearest neighbours, and r1 to r4 are the synthetic data points created by the randomized interpolation. The implementation employed in this work uses only one nearest neighbour, using the Euclidean distance, and balances both classes to a 50% distribution. Synthetic samples are generated in the following way: take the difference between the feature vector (sample) under consideration and its nearest neighbour; multiply this difference by a random number between 0 and 1, and add it to the feature vector under consideration. This causes the selection of a random point along the line segment between two specific samples. This approach effectively forces the decision region of the minority class to become more general. An example is detailed in Fig. 2. In short, its main idea is to form new minority class examples by interpolating between several minority class examples that lie together. Thus, the overfitting problem is avoided and the decision boundaries for the minority class spread further into the majority class space.
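A minimal sketch of the interpolation step just described, assuming numeric feature vectors and the single-nearest-neighbour setting used here; the function and variable names are illustrative.

import random

def smote_sample(x, neighbour):
    """Create one synthetic minority example between x and one of its minority-class
    nearest neighbours: x + rand(0,1) * (neighbour - x)."""
    r = random.random()                                # random number in [0, 1)
    return [xi + r * (ni - xi) for xi, ni in zip(x, neighbour)]

# Example: a minority sample and its (pre-computed) nearest minority neighbour.
x = [1.0, 2.0]
neighbour = [2.0, 4.0]
print(smote_sample(x, neighbour))   # a point on the segment joining the two samples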

Fig. 1: An illustration of how to create the synthetic data points in the SMOTE algorithm.

Fig. 2: Example of SMOTE application.

4.3. Evaluation in imbalanced domains
The measures of the quality of classification are built from a confusion matrix (shown in Table 1), which records correctly and incorrectly recognized examples for each class. The most used empirical measure, accuracy (1), does not distinguish between the number of correct labels of different classes, which in the framework of imbalanced problems may lead to erroneous conclusions. For example, a classifier that obtains an accuracy of 90% on a data-set with an IR value of 9 might not be accurate if it does not correctly cover any minority class instance. Because of this, instead of using accuracy, more suitable metrics are considered. Two common measures, sensitivity and specificity (2, 3), approximate the probability of the positive (negative) label being true; in other words, they assess the effectiveness of the algorithm on a single class. The metric used in this work is the geometric mean of the true rates [3], defined in (4). This metric attempts to maximize the accuracy of each one of the two classes with a good balance. It is a performance metric that links both objectives.


Table 1. Confusion matrix for a two-class problem.

                    Positive prediction      Negative prediction
Positive class      True positive (TP)       False negative (FN)
Negative class      False positive (FP)      True negative (TN)
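The equations referenced above as (1)-(4) did not survive extraction; assuming the standard definitions built from the Table 1 quantities, they read

\[ Accuracy = \frac{TP + TN}{TP + FN + FP + TN} \quad (1) \]
\[ Sensitivity = \frac{TP}{TP + FN} \quad (2) \qquad Specificity = \frac{TN}{TN + FP} \quad (3) \]
\[ GM = \sqrt{Sensitivity \cdot Specificity} \quad (4) \]

Under these definitions, the 90%-accuracy classifier mentioned above that misses every minority instance has Sensitivity = 0 and therefore GM = 0.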

5. Hierarchical rule base genetic rule selection process
In the previous section we have mentioned that an excessive number of rules may not produce good performance and makes it difficult to understand the model behaviour. We may find different types of rules in a large fuzzy rule set: irrelevant rules, which do not contain significant information; redundant rules, whose actions are covered by other rules; erroneous rules, which are wrongly defined and distort the performance of the FRBCS; and conflicting rules, which perturb the performance of the FRBCS when they coexist with others. In this work, we consider the CHC genetic model [14] to perform the rule selection process, since it has achieved good results for binary selection problems [6]. In the following, the main characteristics of this genetic approach are presented.
1. Coding scheme and initial gene pool: It is based on a binary-coded GA where each gene indicates whether a rule is selected or not (alleles 1 or 0, respectively). Considering that N rules are contained in the preliminary/candidate rule set, the chromosome C = (c1, . . . , cN) represents a subset of rules composing the final hierarchical rule base (HRB), with Ri being the corresponding ith rule in the candidate rule set: Ri belongs to the final HRB exactly when ci = 1. The initial pool is obtained with one individual having all genes set to 1 and the remaining individuals generated at random in {0, 1}, so that the initial HRB is taken into account in the genetic selection process.
2. Chromosome evaluation: The fitness function must be in accordance with the framework of imbalanced data-sets. Thus, as presented in Section 4.3, we use the geometric mean of the true rates, defined in (4), as the fitness value.
3. Crossover operator: The half uniform crossover scheme (HUX) is employed. In this approach, the two parents are combined to produce two new offspring. The individual bits in the string are compared between the two parents and exactly half of the non-matching bits are swapped. Thus, the Hamming distance (the number of differing bits) is first calculated; this number is divided by two, and the result is how many of the bits that do not match between the two parents will be swapped (see the sketch at the end of this section).
4. Restarting approach: To get away from
local optima, this algorithm uses a restart
approach. In this case, the best chromosome
is maintained and the remaining are
generated at random in {1,0}. The restart
procedure is applied when a threshold value
is reached, which means that all the
individuals coexisting in the population are
very similar.
5. Evolutionary model: The CHC genetic model makes use of a population-based selection approach: the N parents and their corresponding offspring are combined, and the best N individuals are selected to form the next population. The CHC approach uses an incest prevention mechanism and a restarting process to provoke diversity in the population, instead of the well-known mutation operator.


This incest prevention mechanism is considered when applying the HUX operator, i.e., two parents are crossed only if their Hamming distance divided by 2 is higher than a predetermined threshold L. The threshold value is initialized as L = #Genes / 4. Following the original CHC scheme, L is decremented by one when the population does not change in one generation. The algorithm restarts when L is below zero. We stop the genetic process if more than 3 restarts are performed without including any new chromosome in the population.
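A minimal sketch of the HUX crossover and the incest-prevention test described above, assuming binary chromosomes represented as Python lists of 0/1; all names are illustrative.

import random

def hamming(a, b):
    """Number of differing genes between two binary chromosomes."""
    return sum(x != y for x, y in zip(a, b))

def hux_crossover(p1, p2, threshold):
    """Cross two parents only if they pass the incest test (Hamming distance / 2 > threshold);
    then swap exactly half of the non-matching bits, chosen at random."""
    diff_positions = [i for i in range(len(p1)) if p1[i] != p2[i]]
    if len(diff_positions) / 2 <= threshold:
        return None                                    # incest prevention: no crossover
    to_swap = random.sample(diff_positions, len(diff_positions) // 2)
    c1, c2 = list(p1), list(p2)
    for i in to_swap:
        c1[i], c2[i] = p2[i], p1[i]
    return c1, c2

# Example: 8-gene rule-selection chromosomes, initial threshold L = #Genes / 4 = 2.
p1 = [1, 1, 1, 1, 1, 1, 1, 1]
p2 = [0, 1, 0, 1, 0, 1, 0, 0]
print(hux_crossover(p1, p2, threshold=len(p1) / 4))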

6. CONCLUSION
In this paper, we have proposed an HFRBCS approach for classification with imbalanced data-sets. Our aim was to employ a hierarchical model to obtain a good balance among different granularity levels: a fine granularity is applied in the boundary areas, while a coarser granularity may be applied in the rest of the classification space, providing good generalization. Thus, this approach enhances the classification performance in the overlapping areas between the minority and majority classes. Furthermore, we have made use of the SMOTE algorithm in order to balance the training data before the rule learning generation phase. This preprocessing step enables obtaining better fuzzy rules than using the original data-sets and therefore improves the global performance of the fuzzy model used to filter out unwanted messages from Online Social Network (OSN) user walls.
7. REFERENCES
[1] R. Alcalá, J. Alcalá-Fdez, F. Herrera, J. Otero, Genetic learning of accurate and compact fuzzy rule based systems based on the 2-tuples linguistic representation, International Journal of Approximate Reasoning 44 (2007) 45-64.
[2] A. Asuncion, D. Newman, UCI machine learning repository, University of California, Irvine, School of Information and Computer Sciences, 2007. URL: <http://www.ics.uci.edu/~mlearn/MLRepository.html>.
[3] R. Barandela, J.S. Sánchez, V. García, E. Rangel, Strategies for learning in class imbalance problems, Pattern Recognition 36 (3) (2003) 849-851.
[4] G.E.A.P.A. Batista, R.C. Prati, M.C. Monard, A study of the behaviour of several methods for balancing machine learning training data, SIGKDD Explorations 6 (1) (2004) 20-29.
[5] P. Campadelli, E. Casiraghi, G. Valentini, Support vector machines for candidate nodules classification, Letters on Neurocomputing 68 (2005) 281-288.
[6] J.R. Cano, F. Herrera, M. Lozano, Using evolutionary algorithms as instance selection for data reduction in KDD: an experimental study, IEEE Transactions on Evolutionary Computation 7 (6) (2003) 561-575.
[7] N.V. Chawla, K.W. Bowyer, L.O. Hall, W.P. Kegelmeyer, SMOTE: synthetic minority over-sampling technique, Journal of Artificial Intelligence Research 16 (2002) 321-357.
[8] N.V. Chawla, N. Japkowicz, A. Kolcz, Editorial: special issue on learning from imbalanced data-sets, SIGKDD Explorations 6 (1) (2004) 1-6.
[9] Z. Chi, H. Yan, T. Pham, Fuzzy algorithms with applications to image processing and pattern recognition, World Scientific, 1996.
[10] J.-N. Choi, S.-K. Oh, W. Pedrycz, Structural and parametric design of fuzzy inference systems using hierarchical fair competition-based parallel genetic algorithms and information granulation, International Journal of Approximate Reasoning 49 (3) (2008) 631-648.
[11] O. Cordón, M.J. del Jesus, F. Herrera, A proposal on reasoning methods in fuzzy rule-based classification systems, International Journal of Approximate Reasoning 20 (1) (1999) 21-45.
[12] O. Cordón, F. Herrera, I. Zwir, Linguistic modeling by hierarchical systems of linguistic rules, IEEE Transactions on Fuzzy Systems 10 (1) (2002) 2-20.
[13] J. Demšar, Statistical comparisons of classifiers over multiple data-sets, Journal of Machine Learning Research 7 (2006) 1-30.
[14] L.J. Eshelman, The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination, in: Foundations of Genetic Algorithms, Morgan Kaufmann, 1991, pp. 265-283.
[15] A. Estabrooks, T. Jo, N. Japkowicz, A multiple resampling method for learning from imbalanced data-sets, Computational Intelligence 20 (1) (2004) 18-36.
[16] T. Fawcett, F.J. Provost, Adaptive fraud detection, Data Mining and Knowledge Discovery 1 (3) (1997) 291-316.
[17] A. Fernández, S. García, M.J. del Jesus, F. Herrera, An analysis of the rule weights and fuzzy reasoning methods for linguistic rule based classification systems applied to problems with highly imbalanced data-sets, in: International Workshop on Fuzzy Logic and Applications (WILF07), Lecture Notes in Computer Science, vol. 4578, Springer-Verlag, 2007, pp. 170-179.
[18] A. Fernández, S. García, M.J. del Jesus, F. Herrera, A study of the behaviour of linguistic fuzzy rule based classification systems in the framework of imbalanced data-sets, Fuzzy Sets and Systems 159 (18) (2008) 2378-2398.
[19] M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, Journal of the American Statistical Association 32 (1937) 675-701.
[20] S. García, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization, Journal of Heuristics, in press, doi: 10.1007/s10732-008-9080-4.


REDUCING SECURITY RISKS IN VIRTUAL NETWORKS BY USING SOFTWARE SWITCHING SOLUTION
B. Ashok
M.Tech 2nd year, Dept. of CSE, ASCET, Gudur, India
Email: ashok20.86@gmail.com

_____________________________________________________________________________
Cloud computing provides multiple services to cloud users; in particular, in Infrastructure as a Service (IaaS) clouds, users may install vulnerable software on their virtual machines. Attackers exploit these virtual machines to compromise them as zombies, and by using them an attacker can perform distributed denial of service (DDOS) attacks. DDOS attacks are caused by an extreme flow of requests from clients to the cloud server at the same time, and their incidence remains very high under existing intrusion detection systems. To overcome these problems, a modified approach called Effective Intrusion Detection and reducing Security risks in Virtual networks (EDSV) is proposed. It enhances intrusion detection by closely inspecting suspicious cloud traffic and determining the compromised machines. A novel attack-graph-based alert correlation algorithm is used to detect DDOS attacks, and the risk is reduced to a low level by incorporating access control and a software switching mechanism. It also reduces the infrastructure response time and CPU utilization.
Keywords: cloud computing, Network Security, DDOS, Intruder, Zombie detection

I. INTRODUCTION
Cloud computing is a model for facilitating convenient, on-demand network access to a shared pool of configurable computing resources. It supports three important models: platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS). In IaaS, the cloud provider supplies a set of virtualized infrastructural components such as virtual machines (VMs) and storage on which customers can build and run applications. These applications reside on their VM and the virtual operating system. Issues such as trusting the VM image, securing inter-host communication, and hardening hosts are critical in a cloud environment. However, customers are also very concerned about the risks of cloud computing if it is not properly secured, and about the loss of direct control over systems for which they are nonetheless accountable. The major threats for a cloud system include: Abuse and Nefarious Use of Cloud Computing, Insecure Application Programming Interfaces, Malicious Insiders, Shared Technology Vulnerabilities, and Data Loss/Leakage. This paper focuses on Abuse and Nefarious Use of Cloud Computing. An SLA is a service level agreement between


the service provider and the consumer. It consists of the common understanding about services, priorities, responsibilities, warranties, and guarantees. In a cloud computing environment, SLAs are necessary to control the use of computing resources. However, patching known security holes in cloud data centers, where cloud users have access to control the software installed on their virtual machines, may not work effectively and may violate service level agreements. Virtualization is considered to be one of the important technologies that help abstract infrastructure and resources to be made available to clients as isolated VMs. A hypervisor or VM monitor is a piece of platform-virtualization software that lets multiple operating systems run on a host computer simultaneously. This technology also allows generating virtualized resources for sharing, and it also increases the attack surface. We need a mechanism to isolate virtual machines and secure communication between them. This cloud computing is done with a flexible access control mechanism that governs the control and sharing capabilities between VMs within a host. Compromised machines are one of the major security threats over the internet. They are often used to launch a variety of security attacks such as distributed denial of service attacks (DDOS), spamming, and identity theft. Security issues over cloud computing are definitely one of the major concerns that prevent the rapid development of cloud computing.

II. EXISTING SYSTEM
In a cloud system, where the infrastructure is shared by potentially millions of users, attackers can explore the vulnerabilities of the cloud and use its resources to deploy attacks in more efficient ways. The existing system focuses on the detection of compromised machines that have been recruited to serve as spam zombies. DDOS attacks have been countermeasured by using approaches such as the Entropy Variation method and the Puzzle-based Game theoretic strategy. If the number of requests made by the attacker increases, the efficiency of the entire system will be reduced.

[Figure: time (ms) versus number of users]


The above fig's X-axis specifies the number of users and the Y-axis specifies the time in milliseconds. The fig shows that the attacker performs five distributed denial of service attacks within a few milliseconds. As a result of these attacks, a client may wait and expect a response from the server, but the server does not register the response according to the client request. This automatically increases the infrastructure response time, which also increases CPU utilization and the time taken to create virtual machines.

III. PROPOSED SYSTEM
In the proposed system, compromised virtual machines are avoided by using multiphase distributed vulnerability attack detection and measurement. An analytical attack graph model is used for attack detection and prevention by correlating attack behavior, and it suggests effective countermeasures. The attack graph is constructed by specifying that each node in the graph represents an exploit and each path from an initial node to a target node represents a successful attack. EDSV incorporates a software switching solution to isolate the suspicious virtual machine for further detailed investigation.

[Figure: time (ms) versus number of users]

The above fig's X-axis specifies the number of users and the Y-axis specifies the time in milliseconds. The fig shows that the number of users and the duration of using the system are the same as in the previous approach. In EDSV, DDOS attacks are effectively prevented before they incorporate into the cloud and cause further damage to the cloud system. The client waiting for the response from the server receives it in appropriate time, so the infrastructure response time is reduced, CPU utilization is reduced, and it also takes less time to create a virtual profile.

i. Authorization and Access control:
The cloud service provider allows authenticated users to access a server for storing and retrieving data. Virtual machines


allow storing the information about the client requests, such as port number, IP address, and MAC address. This intrusion detection management system detects the alert by maintaining five tables and a timer. The tables are: Account table T1, Intruders table T2, Authenticated client table T3, Unauthenticated client table T4, and client list T5. T1 is used to check the client by using the MAC address. T2 contains the addresses of already known intruders. T3 contains the MAC address, login time, and logout time of the clients who are in the communication process. T4 records the MAC address, login time, and logout time of clients. T5 contains the MAC address and login time of all clients.

Alg1: Authentication and Access control
1. Event type (login, logout)
2. If (event Request = login) then
3.   int_mac_a = get_Mac_Address()   //Get the MAC address of the client
4.   If (int_mac_a is in T2) then   //Check the intruder list
5.     (Ignore the request)
6.   else if (int_mac_a is in T3) then   //Check authenticated client list
7.     (Ignore login request) and (store int_mac_a in T2)
8.   else if (int_mac_a is in T5) then   //Check current clients list
9.     (Ignore the request)
10.  else
11.    (Accept the login request) and (Start communication)
12. end_if
13. end_if
14. end_if
15. end_if
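A minimal sketch of Alg1, assuming the tables are kept as in-memory sets of MAC addresses; the function name, table layout, and the recording of accepted clients in T5 are illustrative assumptions, not the paper's implementation.

def handle_login(mac, tables):
    """Decide whether a login request from `mac` is accepted.

    tables: dict with keys 'T2' (known intruders), 'T3' (authenticated clients),
            'T5' (current clients); each value is a set of MAC addresses.
    """
    if mac in tables['T2']:          # step 4: already a known intruder
        return 'ignored'
    if mac in tables['T3']:          # step 6: duplicate login from an authenticated client
        tables['T2'].add(mac)        # step 7: treat it as suspicious and record in T2
        return 'ignored'
    if mac in tables['T5']:          # step 8: already in the current clients list
        return 'ignored'
    tables['T5'].add(mac)            # step 11: accept and start communication
    return 'accepted'                #          (recording in T5 is an assumption)

# Example:
tables = {'T2': set(), 'T3': set(), 'T5': set()}
print(handle_login('00:1A:2B:3C:4D:5E', tables))   # -> 'accepted'
print(handle_login('00:1A:2B:3C:4D:5E', tables))   # -> 'ignored' (already a current client)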


providing threat information to network

Step 4: Alert Dependencies: Let Am is a

controller. When Attack analyser receives an

subset of A be the set of alerts that have been

alert it checks the alert already exist in the

matched to a node in an AG: The

attack graph and performs countermeasure

dependency graph DG is defined by the

selection. It notifies network controller to

matched and aggregated alerts Am as

deploy countermeasure actions or mitigating

vertices and the relations between these

the risk. If the alert is new then attack

alerts as edges.

analyser performs alert correlation and then

Step5: Searching: Each path in the alert

update SAG, ACG .If the alert is a new

dependency graph DG specifies a subset of

vulnerability and not present in attack graph

alerts that might be part of an attack scenario.

then attack analyser reconstructing the graph

Dependency Graph is used in the last step to

by adding it.

determine the most interesting subsets of

Algorithm2: Alert correlation algorithm

alerts and also the most interesting path in

step1: preparation- In the preparation phase,

the alert dependency graph.

all the system and network information is

iii Software Switching Solution

loaded,

The

the

database

with

alert

network

controller

is

major

classifications is imported, and the attack

component to supports the programmable

graph AG for the network is loaded.

networking capability to realize the virtual

Step 2: Mapping: The mapping function

network reconfiguration feature based on the

maps the matching alerts to specific nodes in

Open Flow protocol. In EDSV, each cloud

the AG. Alert mapping can be done by

server consists a software switch which is

determining source, destination of and

used as the edge switch for VMs to handle

classification of alerts.

traffic in and out from VMs. Conceptually

Step 3: Aggregation: Let alerts A is subset

switch function is divided into two pieces

of A be the set of alert that is supposed to be

such as control plane and data plane. The

aggregated. Let th be a threshold, The alert

control plane is the core part of switch

aggregation combines alerts that are similar

which handles the discovery, routing, path

but where created together in a short

communication and communication with

time ,i.e., the difference of the timestamps is

other switches. The control plane creates a

below a certain threshold th.
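A small sketch of the aggregation rule in Step 3 of Algorithm 2 is shown below; the alert records and the one-second threshold are hypothetical examples used only to illustrate grouping alerts of the same classification by timestamp difference.

# Illustrative sketch of Step 3 (alert aggregation): alerts with the same
# classification whose timestamps differ by less than th are merged into one group.
def aggregate_alerts(alerts, th):
    """alerts: list of (timestamp_seconds, classification) tuples; th: threshold in seconds."""
    groups = []
    for ts, cls in sorted(alerts):
        last = groups[-1] if groups else None
        if last and last["class"] == cls and ts - last["end"] < th:
            last["end"] = ts
            last["count"] += 1
        else:
            groups.append({"class": cls, "start": ts, "end": ts, "count": 1})
    return groups

alerts = [(0.0, "scan"), (0.4, "scan"), (0.7, "scan"), (5.0, "dos")]
print(aggregate_alerts(alerts, th=1.0))   # two groups: three 'scan' alerts, one 'dos'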

iii. Software Switching Solution
The network controller is the major component that supports the programmable networking capability needed to realize the virtual network reconfiguration feature based on the OpenFlow protocol. In EDSV, each cloud server contains a software switch which is used as the edge switch for the VMs to handle traffic into and out of the VMs. Conceptually, the switch function is divided into two parts, the control plane and the data plane. The control plane is the core part of the switch; it handles discovery, routing, path computation and communication with other switches. The control plane creates a flow table which is used by the data plane to process the incoming packets. The OpenFlow protocol lets us delegate the control plane of all the switches to a central controller and lets the central software define the behaviour of the network.

The network controller is responsible for collecting the network information of the current attack graphs; this includes the current data paths on each switch and the detailed flow information associated with these paths, such as the TCP/IP and MAC headers. The network controller automatically receives information about network flows and topology changes, after which it sends this information to the Attack analyser to reconstruct the attack graph. We integrate the control functions for both the OpenFlow switch and Open vSwitch so that the cloud system can set security and filtering rules in a secure and comprehensive manner. Based on the security index of a virtual machine and the severity of an alert, countermeasures are selected and executed by the network controller.

IV. CONCLUSIONS
In this paper, we proposed a solution to detect DDoS attacks early and to prevent the system from such attacks. We have used a novel alert correlation algorithm which creates alert correlations and suggests effective countermeasures. Software-switch-based solutions are used to improve the detection accuracy; as a result, there is an improvement in the performance of the cloud, with a reduction in CPU utilization, infrastructure response time and VM creation time. In order to improve the detection accuracy further, hybrid intrusion detection solutions need to be incorporated to cover the whole spectrum of IDS systems in the cloud system.

References
1. H. Takabi, J. B. Joshi, and G. Ahn, "Security and privacy challenges in cloud computing environments," IEEE Security & Privacy, vol. 8, no. 6, pp. 24-31, Dec. 2010.
2. B. Joshi, A. Vijayan, and B. Joshi, "Securing cloud computing environment against DDoS attacks," IEEE Int'l Conf. Computer Communication and Informatics (ICCCI '12), Jan. 2012.
3. Cloud Security Alliance, "Top threats to cloud computing v1.0," https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf, March 2010.
4. Z. Duan, P. Chen, F. Sanchez, Y. Dong, M. Stephenson, and J. Barker, "Detecting spam zombies by monitoring outgoing messages," IEEE Trans. Dependable and Secure Computing, vol. 9, no. 2, pp. 198-210, Apr. 2012.
5. R. Sadoddin and A. Ghorbani, "Alert correlation survey: framework and techniques," Proc. ACM Int'l Conf. on Privacy, Security and Trust: Bridge the Gap Between PST Technologies and Business Services (PST '06), pp. 37:1-37:10, 2006.
6. S. Roschke, F. Cheng, and C. Meinel, "A new alert correlation algorithm based on attack graph," Computational Intelligence in Security for Information Systems, LNCS, vol. 6694, pp. 58-67, Springer, 2011.
7. P. Ammann, D. Wijesekera, and S. Kaushik, "Scalable, graph based network vulnerability analysis," Proc. 9th ACM Conf. Computer and Comm. Security (CCS '02), pp. 217-224, 2002.
8. S. H. Ahmadinejad, S. Jalili, and M. Abadi, "A hybrid model for correlating alerts of known and unknown attack scenarios and updating attack graphs," Computer Networks, vol. 55, no. 9, pp. 2221-2240, Jun. 2011.
9. Open vSwitch project, http://openvswitch.org, May 2012.
10. L. Wang, A. Liu, and S. Jajodia, "Using attack graphs for correlating, hypothesizing, and predicting intrusion alerts," Computer Communications, vol. 29, no. 15, pp. 2917-2933, Sep. 2006.


Effective Method for Searching Substrings in Large Databases by Using a Query-Based Approach
J. RamNaresh Yadav
M.Tech 2nd year, Dept. of CSE, ASCET, Gudur, India
Email: ramnaresh167@gmail.com

Abstract: This paper deals with direct substring search in a long string, given a query point in the string, for searching large databases. Traditional approaches focus only on effective search using approximate string matching, but they do not consider how to answer an exact substring query over a large database. We address this problem in this paper and develop an effective algorithm for query answering. First, we develop an algorithm to answer smallest unique substring queries in O(n) time using a suffix tree index. Second, we also compute a smallest unique substring at every position of a given string. Once the smallest unique substrings are pre-computed, smallest unique substring queries can be answered online in constant time.

Index terms: substring queries, query answering, suffix tree.

I. INTRODUCTION
You are searching the Complete Works of William Shakespeare using the query term "king". The term "king" occurs 1,546 times in 1,392 speeches within 40 works, even without counting related words like "kings" and "king's". Using modern information retrieval techniques, such as an inverted index, one can find all occurrence positions of a query word easily. How can a search engine, however, present an informative list of the search results? Showing all occurrence positions alone is not informative. On the one hand, if the snippet length is short, some snippets may be identical and thus those occurrences still cannot be distinguished. A smarter way is to present, for each occurrence, a smallest snippet that contains the query term and is different from all other snippets of the query term. This simple yet effective application in document search introduces an interesting novel problem to be tackled in this paper. Given a (long) string S and a query point q in S, we want to conduct a smallest unique substring query that finds a smallest unique substring containing q.

Shortest unique substring queries have many potential applications. In addition to the above document search example, shortest unique substring queries can be used in bioinformatics. Finding shortest unique substrings on DNA sequences can help polymerase chain reaction (PCR) primer design in molecular biology. It can also help to identify unique DNA signatures of closely related species or organisms. The shortest unique substring of the event under investigation may serve as a concrete working base of the event context.

Answering shortest unique substring queries efficiently is far from trivial. A brute-force (heuristic) search may easily lead to a time cost quadratic in the length of the string, which is unacceptable in practice when the string is long and queries are expected to be answered online. In this paper, we address the problem of answering shortest unique substring queries from the algorithmic point of view and make several contributions. First, we model shortest unique substring queries and explore their properties thoroughly. These properties clearly distinguish shortest unique substring queries from existing related problems, such as computing global minimal unique substrings. Second, we present an algorithm to answer a shortest unique substring query in O(n) time using a suffix tree index, which can be constructed in O(n) time and space, where n is the length of the string S. Third, we show that, using O(n h) time and O(n) space, we can compute a shortest unique substring for every position in a given string, where h is a variable that is theoretically in O(n) but on real data sets is often much smaller than n and can be treated as a constant.

II. SMALLEST UNIQUE SUBSTRING QUERIES
In this section, we formulate the shortest unique substring queries and discuss the properties of several critical concepts.

A. Smallest Unique Substring Queries
Let S be a string of length n. Denote by S[i] the value at the i-th position of S and by S[i, j] = S[i] ... S[j] the substring starting at position i and ending at position j. Two strings X and Y are identical, denoted by X = Y, if they have the same length and for every 1 <= i <= |X|, X[i] = Y[i]. X is called a substring of Y if X occurs in Y; we call X a proper substring of Y if X is a substring of Y and X is not equal to Y.

Definition 1 (Minimal Unique Substring (MUS)): A substring S[i, j] is unique in S if there does not exist another substring S[i', j'] (i' different from i) such that S[i, j] = S[i', j']. S[i, j] is called a minimal unique substring if S[i, j] is unique and no proper substring of S[i, j] is also unique.

Definition 2 (Smallest Unique Substring (SUS)): Given a string S and a position p in S, a substring S[i, j] is a smallest unique substring at position p if S[i, j] is unique and contains p, and there does not exist another unique substring S[i', j'] such that S[i', j'] also contains p and j' - i' < j - i.

Definition 3 (Problem definition): Given a string S and a position p, the smallest unique substring query (SUSQ) is to find a SUS at position p. Any member of SUS(p) is a valid answer.

In our algorithm design, we often consider two types of unique substrings that may be candidates for SUSs. We give the definitions here and will pursue further discussion later.

Definition 4: Given a string S and a position p in S, a substring S[p, j] is called the left-bound SUS (LSUS) at position p, denoted by LSUS(p), if S[p, j] is unique and no other substring S[p, j'] with j' < j is also unique. Symmetrically, S[i, p] is called the right-bound SUS (RSUS) for position p, denoted by RSUS(p), if S[i, p] is unique and no other substring S[i', p] with i' > i is also unique. Moreover, we define the leftmost SUS to be the SUS whose starting point is smallest, denoted by leftmost-SUS(p).

Figure 1 shows the relationship among LSUS(p), RSUS(p) and the MUSs containing p. Figure 2 illustrates the concept of the leftmost SUS.

Figure 1. The relationship among the three cases.
Figure 2. The leftmost SUS at a position p.

It is easy to see the following property.
Property 1 (LSUS and RSUS): Given a string S, for every position p in S, LSUS(p) and RSUS(p), if they exist, are unique, respectively. In some cases, LSUSs or RSUSs may not exist. Moreover, for a position p in S, LSUSs or RSUSs may not be SUSs.
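The definitions above can be checked directly with a small brute-force sketch. The code below is only an illustration of Definitions 1, 2 and 4 on the example string used later in the paper (S = 11011001); it runs in roughly cubic time and is not the O(n) suffix-tree method of Section III.

# Brute-force illustration of unique substrings, LSUS and SUS (1-based positions).
def count_occurrences(S, sub):
    """Count (possibly overlapping) occurrences of sub in S."""
    return sum(1 for k in range(len(S) - len(sub) + 1) if S[k:k + len(sub)] == sub)

def is_unique(S, i, j):
    """True if S[i..j] occurs exactly once in S (1-based, inclusive)."""
    return count_occurrences(S, S[i - 1:j]) == 1

def lsus(S, p):
    """Shortest unique substring that starts exactly at position p, or None."""
    for j in range(p, len(S) + 1):
        if is_unique(S, p, j):
            return (p, j)
    return None

def sus(S, p):
    """A smallest unique substring containing position p (leftmost start on ties)."""
    best = None
    for i in range(1, p + 1):
        for j in range(p, len(S) + 1):
            if is_unique(S, i, j):
                if best is None or j - i < best[1] - best[0]:
                    best = (i, j)
                break                      # extending j further only gives longer substrings
    return best

S = "11011001"
print(lsus(S, 3), sus(S, 3))   # LSUS(3) -> (3, 5); a SUS containing 3 -> (2, 4)

The example also shows why LSUS(p) alone is not always a SUS: the unique substring "101" = S[2, 4] covering position 3 is leftmost among the shortest, even though it does not start at position 3.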

III. QUERY ANSWERING USING SUFFIX TREES
In this section, we first review the suffix tree and its construction. Then we discuss how to use a suffix tree as an index to answer smallest unique substring queries.

A. Suffix Trees and Construction
A suffix tree is a data structure that concisely records all suffixes of a given string and allows fast string search operations. A string S of length n has n suffixes S[i, n], i = 1, ..., n. In the suffix tree of S, each edge represents a substring of S, and a path from the root to a leaf node represents exactly one suffix of S. Ukkonen proposed a well-known suffix tree construction method that requires only linear time and space when the alphabet size of the string is a constant. Taking S = 11011001 as an example, we briefly show how to construct its suffix tree, as shown in Figure 3, using Ukkonen's algorithm. The construction procedure is illustrated in Figure 4.

Figure 3. The suffix tree of S = 11011001.
Figure 4. The construction of the suffix tree of S = 11011001.

To extend the suffix tree of S[1, i] to S[1, i+1], we need to extend S[j, i] for 1 <= j <= i with S[i+1]. There are three possible cases:
1) S[j, i] ends at a leaf node. Then we pad S[i+1] to the corresponding leaf edge.
2) S[j, i] does not end at a leaf node and is not followed by S[i+1]. Then we split the edge and create a new node.
3) S[j, i] does not end at a leaf node but is followed by S[i+1]. In this case, we do not need to do anything.

When we expand the tree from j = 1 to j = i during phase i+1, the occurrences of these three cases follow some properties. In particular, once case 2 or case 3 has happened, case 1 will never happen again. With these properties, once we meet case 3 at step j of phase i, we can immediately finish the current phase and start phase i+1 at step j. To ensure O(n) construction time, Ukkonen's algorithm uses suffix links and the skip/count technique during the tree construction. A suffix link is a directed link from an internal node associated with substring S[i, j] to the internal node associated with substring S[i+1, j], which allows a fast jump to the next extension point in the tree. The skip/count technique enables us to add the new character S[i+1] in phase i+1 quickly. To save more space, instead of storing copies of substrings, we label edges using start and end indexes. The end index of a leaf edge is omitted and denoted by ∞. Finally, an end symbol $ is padded at the end of each path as a leaf node. As a result, the space used for a suffix tree is reduced to O(n).
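As a concrete (but not linear-time) illustration of the structure described above, the sketch below builds a plain suffix trie for S = 11011001 with one node per character; it is only meant to make the notion of root-to-leaf suffix paths tangible and does not use Ukkonen's suffix links, skip/count technique or edge-interval compression.

# Naive suffix trie for illustration only (quadratic time and space, unlike Ukkonen's O(n) tree).
def build_suffix_trie(S):
    S = S + "$"                      # terminal symbol so every suffix ends at a leaf
    root = {}
    for i in range(len(S)):          # insert suffix S[i:]
        node = root
        for ch in S[i:]:
            node = node.setdefault(ch, {})
        node["leaf"] = i + 1         # record the 1-based start position of the suffix
    return root

def count_leaves(node):
    """Number of suffixes below this node = number of occurrences of its path label."""
    total = 1 if "leaf" in node else 0
    for key, child in node.items():
        if key != "leaf":
            total += count_leaves(child)
    return total

trie = build_suffix_trie("11011001")
# '101' is unique in S: exactly one suffix starts with it.
print(count_leaves(trie["1"]["0"]["1"]))   # 1

The connection to the paper's method is that a substring is unique exactly when only one suffix passes through its path, which is what the leaf-edge test in Algorithm 1 below exploits on the compressed suffix tree.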

B. Query Answering Using Suffix Trees
Given a string S, we first build its suffix tree in O(n) space and O(n) time using Ukkonen's algorithm. We further store all leaf nodes in an array so that we can access a specific leaf node Leaf(i), its edge Edge(Leaf(i)) and its associated string Sedge(Leaf(i)) in constant time. We can use the suffix tree to get LSUS(p) in constant time, as shown in Algorithm 1.

Algorithm 1: the LSUS finding algorithm
Input: a string S[1, n], a position p, and the suffix tree T of S
Output: LSUS(p)
1. Find the leaf node of S[p, n] in T; // the leaf node can be indexed during the construction of the suffix tree, so the access to the leaf node costs O(1) time
2. If the label of the leaf edge is $ then return null;
3. end_if
4. l <- the length of the label of the leaf edge; // the padded terminal character is not counted in the length of the leaf edge
5. Return S[p, n - l + 1]

We first target the leaf node corresponding to p in the suffix tree. Backtracking from the leaf node, we meet an internal node. Based on the property of the suffix tree, the string represented from the root to this internal node is a common prefix of different suffixes. With LSUS(p), we can now find a SUS (smallest unique substring) containing position p, as shown in Algorithm 2.

Algorithm 2: the baseline SUS finding algorithm
Input: a string S, a position p, and the suffix tree T of S
Output: the leftmost SUS containing position p
1. Find LSUS(p);
2. if LSUS(p) exists then
3. let S[i, j] be LSUS(p);
4. else
5. let S[i, j] be S[1, n];
6. end_if
7. for k <- p-1; k > 0; k <- k-1 do
8. if LSUS(k) is null then continue;
9. end_if
10. let S[i', j'] be LSUS(k);
11. if j' < p then continue;
12. end_if
13. if j' - i' < j - i then i <- i'; j <- j';
14. end_if
15. end_for
16. Return S[i, j];

IV. A CONSTANT TIME ONLINE QUERY ANSWERING ALGORITHM
In this section, we develop a method that pre-computes the leftmost SUS for every position using linear space. Then, online query answering can be conducted in constant time.

A. Ideas
We first observe that a smallest unique substring must fall into one of three cases: a MUS, an LSUS, or an RSUS. This can be established by using several theorems, which we briefly state here.

Theorem 1: Given a string S and a position p in S, if S[i, j] is a SUS at position p but not a MUS, then either i = p or j = p.

Theorem 2 indicates that, by one scan of S, obtaining the LSUS at each position, we can find all MUSs and all LSUSs that are not MUSs. To determine whether a SUS is an RSUS, we have the following result.

Theorem 3: Given a string S and a position p, a substring S[i, p] is in RSUS(p) if and only if S[i, p] contains only one MUS S[i', j'] with j' <= p.

B. The Framework
Given a string S, our algorithm first constructs a suffix tree. This takes O(|S|) time and O(|S|) space. For each position p, the algorithm maintains a currently shortest MUS that contains position p, denoted by p.cand. It also takes O(|S|) space to store the MUSs obtained at the last position. Therefore, our algorithm needs only O(|S|) space overall. Algorithm 3 shows the pseudo-code of our method.

Algorithm 3: the pre-computation algorithm
Input: a string S
Output: a SUS for each position 1 <= p <= |S|
1. Build a suffix tree for the string S;
2. Initialize p.cand to null for every position p;
3. Output LSUS(1), denoted by S[1, j], as the SUS at position 1; // use LSUS(1) to initialize the SUS at position 1
4. for p = 2 to |S| do
5. let S[i, j] be LSUS(p), obtained from the suffix tree;
6. let S[i, j] be the shortest substring among the candidates for position p, i.e., the LSUS obtained in step 5 and the candidate p.cand recorded at p; if more than one substring has the smallest length, pick the leftmost one; output S[i, j] as a SUS at position p;
7. if the substring found at position p is a MUS, propagate it to the following positions by calling PROPAGATE(S[i, j], p + 1);
8. end_for

C. MUS Propagation
At the beginning, we initialize p.cand to null for all positions p. Our algorithm scans the string S from the beginning to the end. At position 1, LSUS(1) is the only SUS containing position 1. At each position p (p > 1), we compute LSUS(p) using the suffix tree in constant time. Although we reserve space to record the smallest MUS for each position, we do not need to explicitly store one MUS at each p.cand. Instead, we only need to store candidates at those positions p where the smallest MUS may not be obtained from LSUS(p) and RSUS(p). Algorithm 4 gives the pseudo-code of the propagation procedure.

Algorithm 4: the propagation procedure
1: procedure PROPAGATE(MUS S[i, j], position k)
2: if k is not in the range [i, j] then
3: return
4: else if k.cand is null then
5: k.cand <- S[i, j]; return
6: end if
7: suppose k.cand = S[i', j'];
8: if k.cand is longer than S[i, j] then
9: k.cand <- S[i, j]
10: else if k.cand and S[i, j] have the same length and S[i, j] ends before k.cand then
11: k.cand <- S[i, j]
12: end if
13: call PROPAGATE(S[i, j], k + 1)
14: end procedure
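Since the precomputation in this section relies on having LSUS(p) available for every position, a compact way to see what that per-position table looks like is the suffix-array sketch below. It is a stand-in illustration using sorted suffixes and pairwise longest common prefixes, not the paper's suffix-tree and propagation implementation, and this naive form runs in roughly O(n^2 log n).

# Suffix-array sketch (not the paper's suffix-tree method): LSUS(p) for every p
# via longest common prefixes of lexicographically adjacent suffixes.
def lcp(a, b):
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def all_lsus(S):
    n = len(S)
    sa = sorted(range(n), key=lambda i: S[i:])          # naive suffix array
    max_lcp = [0] * n                                   # longest match of each suffix with any other
    for r in range(n - 1):
        l = lcp(S[sa[r]:], S[sa[r + 1]:])
        max_lcp[sa[r]] = max(max_lcp[sa[r]], l)
        max_lcp[sa[r + 1]] = max(max_lcp[sa[r + 1]], l)
    result = {}
    for p in range(n):                                  # 0-based start position
        length = max_lcp[p] + 1
        result[p + 1] = (p + 1, p + length) if p + length <= n else None
    return result

print(all_lsus("11011001"))   # e.g. LSUS(3) = (3, 5); positions with no LSUS map to None

Once such a per-position table of SUS answers has been stored, an online query at position p is just a constant-time lookup, which is the point of the pre-computation approach above.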

V. EXPECTED RESULTS
Suppose we want to conduct extensive experiments on three real data sets and a group of synthetic data sets to evaluate our methods. Mainly, three real data sets will be used in our experiments. The first data set is an introduction to the R language, part of the FAQ on the R project website (http://www.r-project.org/). The second real data set is the genome sequence of Mycoplasma genitalium, the pathogenic bacterium that has one of the smallest genomes known for any free-living organism. The third data set is the Bible.

For the R-sequence, we chose the positions where the language name "R" appeared as the query index. For a given sequence, we can compute one SUS at each position. We want to observe the distribution of the SUS counts over different SUS lengths on the R-sequence. In addition to the original R-sequence itself, we further generated three mutations with the same string length using the same alphabet set. For the original R-sequence, most SUSs are of length 3. As the length increases, the corresponding counts decrease.

VI. CONCLUSION
In this paper, we formulated a novel type of interesting queries, smallest unique substring queries, which have many applications. We developed efficient algorithms. Furthermore, our study leads to new directions on string queries. As future work, it is interesting to extend and generalize smallest unique substring queries.

REFERENCES
[1] B. Haubold, N. Pierstorff, F. Moller, and T. Wiehe, "Genome comparison without alignment using shortest unique substrings," BMC Bioinformatics, vol. 6, no. 123, May 2005.
[2] P. Weiner, "Linear pattern matching algorithms," in Proc. 14th Annual Symposium on Switching and Automata Theory (SWAT 1973), 1973, pp. 1-11.
[3] U. Manber and G. Myers, "Suffix arrays: a new method for on-line string searches," in Proc. First Annual ACM-SIAM Symposium on Discrete Algorithms, 1990, pp. 319-327.
[4] E. Ukkonen, "On-line construction of suffix trees," Algorithmica, vol. 14, pp. 249-260, 1995.
[5] M. Farach, "Optimal suffix tree construction with large alphabets," in Proc. 38th Annual Symposium on Foundations of Computer Science (FOCS '97), 1997.
[6] S. J. Puglisi, W. F. Smyth, and A. H. Turpin, "A taxonomy of suffix array construction algorithms," ACM Computing Surveys, vol. 39, no. 2, July 2007.
[7] G. Nong, S. Zhang, and W. H. Chan, "Linear time suffix array construction using d-critical substrings," in Proc. 20th Annual Symp. Combinatorial Pattern Matching, 2009, pp. 54-67.
[8] D. Gusfield, Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, 1997.
[9] L. Ilie and W. F. Smyth, "Minimum unique substrings and maximum repeats," Fundamenta Informaticae, vol. 110, no. 1-4, pp. 183-195, 2011.
[10] K. Ye, Z. Jia, Y. Wang, P. Flicek, and R. Apweiler, "Mining unique-m substrings from genomes," Journal of Proteomics and Bioinformatics, vol. 3, no. 3, pp. 99-100, 2010.



Significance of Stator Winding Insulation Systems of Low-Voltage Induction Machines, Focusing on Turn Insulation Problems: Testing and Monitoring
DEETI SREEKANTH, Fellow, IEEE

Abstract: A breakdown of the electrical insulation system causes ruinous failure of the electrical machine and brings large process downtime losses. Researchers have understood that fast rise-time voltage surges, such as those from a circuit breaker closing, can lead to an electrical breakdown of the turn insulation in motor stator windings [1]. To determine the condition of the stator insulation system of motor drive systems, various testing and monitoring methods have been developed. This paper presents a survey of testing and monitoring methods, cataloguing them into online and offline methods, each of which is further grouped into specific areas according to their physical nature. The main focus of this paper is techniques that diagnose the condition of the turn-to-turn insulation of low-voltage machines. To improve motor reliability and move to predictive maintenance for motors, tools are needed to assess the condition of the windings. There are several old and new test methods that have gained popularity with AC induction motor maintenance specialists. In order to give a compact overview, the results are summarized in two tables. In addition to monitoring methods for the turn-to-turn insulation, some of the most common methods to assess the stator's phase-to-ground and phase-to-phase insulation conditions are included in the tables as well.

Index Terms: winding insulation, induction motors, interturn shorts, stator faults, motor diagnostics, current signature analysis (CSA).
I. INTRODUCTION
Turn insulation failures are seen in the stator coils as melted copper conductors and holes in the main insulation due to earth faults. Induction motors have been widely used in many industrial applications because of their simplicity of control [1]. Owners of electrical machines expect high reliability and a long lifetime from their equipment. This can only be achieved by consistent quality assurance throughout the whole product life cycle, on both the product and the process level [2]. The process level includes research and development, continuous design improvement, quality control during production/commissioning, and online/offline testing in service. In the beginning, natural products such as silk, wool, cellulose and flax, together with natural varnishes and petroleum derivatives, were used for insulation. Due to optimization, these materials were displaced, or materials like asbestos, quartz or other minerals have been added. The main causes of winding insulation deterioration, as described in [2] and [3], are thermal, electrical, mechanical, or environmental stress. Moreover, the class of insulation and the application of the motor have a strong influence on the condition and aging of the insulation system. Many approaches have been proposed to detect faults and even the early deterioration of the primary insulation system (phase-to-ground or phase-to-phase) and the secondary insulation system (turn-to-turn). The testing and monitoring methods can generally be divided into two categories. The first one is offline testing [2], [4], [11]-[26], which requires the motor to be removed from service, while the second one is online monitoring [2], [7], [27]-[35], which can be performed while the machine is operating. An important aspect of each method is whether it is invasive or noninvasive to the machine's normal operation.

Nonintrusive methods are always preferred because they only use voltage and current measurements from the motor terminals and do not require additional sensors. Most insulation system faults are caused by the deterioration and failure of the turn-to-turn insulation [4], [5]. Therefore, the monitoring of the turn-to-turn insulation's condition is of special interest. For this reason, the main focus of this survey, in contrast to other surveys related to motor diagnostics [8], [9], is on methods that can be used to detect faults or deterioration in the turn-to-turn insulation of low-voltage machines. Some popular methods related to medium- and high-voltage machines are also briefly mentioned. The recent technology advances in sensors, integrated circuits, digital signal processing, and communications have enabled engineers to develop more advanced methods to test and monitor the condition of the machine [4]. The most common methods to test and monitor the ground-wall and phase-to-phase insulation are also included in this survey. First, the insulation failure mechanisms are analyzed briefly. Then, several offline tests are introduced. Finally, a general approach to developing online methods is discussed, and conclusions are drawn considering the need for future development in this area.

II. STATOR FAULTS AND THEIR ROOT CAUSE
As mentioned earlier, the main causes of stator failures can be divided into four groups [2], [3]: thermal, electrical, mechanical, and environmental stress. Before describing the different causes for the breakdown of the insulation system, a brief overview of the possible nature of the fault and a way to analyze it is given.


A. Analysis and Nature of the Stator Insulation Failure
Analyzing the mode and pattern of the fault helps to find its cause, as there are different failure modes and patterns associated with stator insulation failures [10]. The most severe failure mode is a phase-to-ground fault. Other modes are turn-to-turn, coil-to-coil and phase-to-phase short circuits, or an open circuit of the stator windings. Those faults can occur in a single phase, and can be symmetrical, nonsymmetrical with grounding, or nonsymmetrical without grounding. A great percentage of insulation failures start with a turn-to-turn insulation problem and subsequently develop into more severe insulation faults. In a turn-to-turn fault, two or more turns of a coil are short-circuited. The current in the shorted turns will be considerably higher than the operating current and therefore increases the winding temperature to a level where severe damage or even the complete failure of the insulation is the result.

One of the faults developing from a turn-to-turn fault might be a coil-to-coil short circuit, where coils from the same phase get shorted, or a phase-to-phase short circuit, where two or more of the different phases get shorted. These faults again can develop into phase-to-ground faults, which can cause large damage to the motor. A different kind of fault is the open circuit of a stator winding. Like the short-circuit faults, the open circuit introduces a strong asymmetry and, thus, malfunction of the motor. Compared to the short-circuit faults, this kind of fault rarely occurs. Aside from analyzing the mode and pattern of the failure, the examination of the appearance of the motor is helpful to identify the cause of the fault. This includes aspects like the cleanness of the motor, signs of moisture, minor, moderate and major inspections, and the condition of the rotor. The operating conditions under which the motor fails, as well as the general operating conditions, should also be taken into consideration. Furthermore, the maintenance history can be consulted to determine the problems that led to the failure. Considering all these aspects, a method can be developed to analyze and classify insulation failures [10].

B. Root Causes for the Failures of the Stator Insulation System
Aging mechanisms: Insulation in service is exposed to high temperature, high voltage, vibration and other mechanical forces, as well as some unfavorable environmental conditions. These various factors act together and individually to wear out or age the insulation.

1) Thermal Stress: One of the thermal stresses the insulation is subject to is the thermal aging process. An increase in temperature accelerates the aging process and thus reduces the lifetime of the insulation significantly. Under normal operating conditions, the aging process itself does not cause a failure but makes the insulation more susceptible to other stresses, which then produce the actual failure. In order to ensure a longer lifetime and reduce the influence of the aging process, one can either work at low operating temperatures or use an insulation of higher quality, i.e., use a higher insulation class. Another thermal stress that has a negative effect on the insulation lifetime is thermal overloading, which occurs due to voltage variations, unbalanced phase voltages, cycling, overloading, obstructed ventilation, or high ambient temperature. For example, even a small increase in the voltage unbalance has an enormous effect on the winding temperature. It should be ensured that the flow of air through the motor is not obstructed, since otherwise the heat cannot be dissipated and the winding temperature will increase. If this is not possible, this should be taken into account by upgrading the insulation system or derating the winding.

Thermal Aging: Thermal cycling, especially with respect to the main field windings of peaking units, can lead to significant physical deformation of the associated coils.

Within thermal aging, the insulation concerns are catalogued into two categories, core insulation and stator/rotor insulation:
Core Insulation: Inadequate cooling, General over-heating, Localized over-heating, Burnout at high temperature.
Stator/Rotor Insulation: Continuous high temperature, Differential expansion, Thermal cycling, Girth cracking, Scarf-joint issues, Loosening of end windings, Loosening of coils in slots.

2) Electrical Stress: There are different reasons why electrical stresses lead to failure of the stator insulation. These can usually be broken down into problems with the dielectric material, the phenomena of tracking and corona, and the transient voltages that a machine is exposed to. The type of dielectric material that is used for the phase-to-ground, phase-to-phase, and turn-to-turn insulation, as well as the voltage stresses applied to the insulating materials, influences the lifetime of the insulation significantly. Thus, the materials for the insulation have to be chosen adequately in order to assure flawless operation and the desired design life.


Tracking and corona are phenomena that only occur at operating voltages above 600 V and 5 kV, respectively. The negative influence of transient voltage conditions on the winding life has been observed in recent years. These transients, which either cause deterioration of the insulation or even turn-to-turn or turn-to-ground failures, can be caused by line-to-line, line-to-ground, or multiphase line-to-ground faults in the supply; repetitive restriking; current-limiting fuses; rapid bus transfer; opening and closing of circuit breakers; capacitor switching (power factor improvement); insulation failure in the power system; or lightning strikes. Variable frequency drives are subject to permanent voltage transients. In particular, during the starting and stopping process, high-voltage transients can occur.

Electrical Aging: A loose stator winding will vibrate within the stator slots. By fretting against the stator core iron, the corona suppression materials will abrade, increasing PD activity.
Core Insulation: Under-excitation, Over-excitation, Manufacturing defects, Ground faults in core slots.
Stator Insulation: Electrical discharges, Surface tracking, Moisture absorption, System surge voltages, Unbalanced supply voltages.
Rotor Insulation: Transient over-voltages, Static excitation transients, Surface tracking, Moisture absorption.

3) Mechanical Stress: The main causes of insulation failure due to mechanical stresses are coil movement and strikes from the rotor. The force on the winding coils is proportional to the square of the motor current and reaches its maximum value during the startup of the motor. This force causes the coils to move and vibrate. The movement of the coils in turn can cause severe damage to the coil insulation or the conductor. There are different reasons that will cause the rotor to strike the stator. The most common are bearing failures, shaft deflection, and rotor-to-stator misalignment. Sometimes the contact is only made during the start, but it can also happen that contact is made at full speed of the motor. Both contacts can result in a grounded coil. There are other mechanical stresses which the windings are exposed to, like loose rotor balancing weights, loose rotor fan blades, loose nuts or bolts striking the motor, or foreign particles that enter the motor.

Mechanical Aging:
Core Insulation: Core looseness and fretting, Back-iron over-heating.
Stator Insulation: 120 Hz bar vibration forces, Electromagnetic forces on end windings, Abrasive materials.
Rotor Insulation: Centrifugal forces, Abrasive materials, Operation on turning gear.

4) Environmental Stress: Stresses stemming from contamination, high humidity, aggressive chemicals, radiation in nuclear plants, or the salt level in seashore applications can be categorized as environmental or ambient stress [2]. For example, the presence of foreign material through contamination can lead to a reduction in heat dissipation, increasing the thermal deterioration. A thin layer of conducting material on the surface of the insulation is another possible result of contamination. Surface currents and electrical tracking can occur due to this layer, applying additional electrical stress. Aggressive chemicals can degrade the insulation and make it more vulnerable to mechanical stresses. If possible, the motor should be kept clean and dry internally, as well as externally, to avoid the influence of moisture, chemicals and foreign particles on the insulation condition. Radiation is a stress that only occurs in nuclear power plants or nuclear-powered ships; the aging process is comparable to thermal aging.

Environmental Aging: Oil is both a solvent as well as a lubricant. Internal oil contamination can break down insulation and loosen the frictional-force blocks, ties, and packing throughout a generator.
Core Insulation: Water absorption, Chemical contamination.
Stator/Rotor Insulation: Water absorption, Chemical contamination.


III. OFFLINE TESTING
The condition of the stator winding is critical for the overall motor wellness. To ensure the proper operation of a motor system, various offline tests can be executed. These tests allow the user to assess the condition of the motor under test. Offline methods are normally more direct and accurate, and the user does not need to be an expert in motor drives to perform the tests. However, most of these tests can only be applied to motors that are disconnected from service, i.e., offline tests are done during a shutdown. This is one of the main drawbacks compared to the online monitoring methods. An advantage over online monitoring is that meaningful tests can be performed right after production of the motor and that a test device can be used for several different machines, which saves costs. The offline tests are summarized in Table I [2], [4], [11]-[16]. Evaluating the table, it becomes obvious that there are not many offline techniques available to diagnose the turn insulation; the main ones are the surge test and the offline partial discharge (PD) test. Since the PD test is not applicable to low-voltage machines, it is not further described here. Common methods used to test the phase-to-ground insulation [2], [11]-[13] are the insulation resistance (IR) test, the polarization index test, the dc and ac high potential tests, and the dissipation factor test. Recently, a new method was developed to apply some of those offline tests (IR, dissipation factor, and capacitance tests) to inverter-fed machines while they are not operating [14], [15]. Since the tests can be conducted on a frequent basis without using additional equipment, ground-wall insulation problems can be diagnosed at an early stage.
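As a small numerical illustration of the insulation resistance and polarization index tests listed in Table I, the sketch below computes PI as the ratio of the 10-minute to the 1-minute IR reading. The sample readings and the acceptance threshold of 2.0 are assumptions for illustration (a value commonly associated with IEEE Std 43 practice), not values taken from this paper.

# Illustrative polarization index (PI) check from two insulation resistance readings.
# Readings and the 2.0 threshold are hypothetical example values.
def polarization_index(ir_1min_megohm, ir_10min_megohm):
    return ir_10min_megohm / ir_1min_megohm

pi = polarization_index(ir_1min_megohm=450.0, ir_10min_megohm=1350.0)
print(f"PI = {pi:.2f} ->", "acceptable" if pi >= 2.0 else "investigate winding condition")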
A. Surge Tests
If the turn insulation fails in a form-wound stator winding, the motor will likely fail in a few minutes. About 80% of all electrical failures in the stator originate from a weak turn-to-turn insulation [5]. Thus, the turn insulation is critical to the life of a motor. By applying a high voltage between the turns, the surge test is able to overcome this limitation; it is an overvoltage test for the turn insulation and may fail the insulation, requiring bypassing of the failed coil, replacement, or a rewind. Low-voltage tests on form-wound stators, such as inductance or inductive impedance tests, can detect whether the turn insulation is shorted, but not whether it is weakened. Only the surge voltage test is able to directly find stator windings with deteriorated turn insulation. The test is valid for any random-wound or multi-turn form-wound stator, and the test method for form-wound stators is described in IEEE 522.

The surge test duplicates the voltage surge created by switching on the motor. The surge test is a destructive go/no-go test. If the turn insulation fails, then the assumption is that the stator would have failed in service due to motor switch-on, PWM inverter voltage surges or transients caused by power system faults. If the winding does not puncture, then the assumption is that the turn insulation will survive any likely surge occurring in service over the next few years.

The first generation of surge test sets was called surge comparison testers. They consisted of two energy storage capacitors, which were connected to two phases. The waveform from each phase is monitored on an analog oscilloscope. The assumption is that the waveform is identical for the two phases. As the voltage is increased, if one of the waveforms changes (increases in frequency), then turn-to-turn puncture occurred in the phase that changed. This approach has lost favor now, since it is possible for two phases to have slightly different inductances due to different circuit ring bus lengths, mid-winding equalizer connections or even due to rotor position (since it affects the permeability). Modern surge testers use a digitizer to capture the surge voltage waveform on a phase as the voltage is gradually raised. Digital analysis then provides an alarm when the waveform changes at a high voltage due to a turn fault [6]. The test voltages are described in IEEE 522.

There have been many controversies about the risk of surge testing [21]-[23]. A comprehensive study about this issue disproves the statement that surge testing significantly reduces the lifetime of a machine [45], [46]. The effect of the surge rise time is also a topic that has been widely discussed [24].

B. Signature Analysis After Switch-Off
A technique that uses the signature analysis of the motor terminal voltage immediately after switch-off to diagnose turn faults is introduced in [15]. The advantage, compared to online techniques using current signature analysis, is that the voltage unbalance of the source does not influence the result, since the supply is off. The faulty machine model used for simulation is also included in the paper.

IV. ONLINE MONITORING
Online monitoring is performed during normal operation of the motor. Various monitoring methods have been developed using different physical quantities to detect the health condition of the stator insulation system [2], [3]. These methods utilize different motor parameters like magnetic flux, temperature, stator current, or input power for the monitoring purpose. The induction motor model with a turn-to-turn fault, introduced in [36]-[40], is required for some of the methods. Online condition monitoring is usually preferred in applications which have a continuous process, such as petrochemical plants, water treatment, material handling, etc. The major advantage is that the machine does not have to be taken out of service; as a result, the health condition can be assessed while the motor is operating. Predictive maintenance is made easier because the machine is under constant monitoring: an incipient failure can immediately be detected, and actions can be scheduled to avoid more severe process downtime.

TABLE I
DIFFERENT METHODS TO TEST THE STATOR INSULATION SYSTEM OF ELECTRICAL DRIVES
Columns: Method | Reference | Insulation tested and diagnostic value | Positive features | Negative features

Insulation Resistance (IR) / Megohm test | [2], [11]-[15] | Finds contamination and defects in the phase-to-ground insulation | Easy to perform; applicable to all windings except the rotor of a squirrel cage IM | Results are strongly temperature dependent
Winding resistance / DC conductivity test | [2], [8], [11] | Detects shorted turns, no predictive value | Easy to perform | Only detects faults
Polarization index (PI) | [2], [11]-[13] | Finds contamination and defects in the phase-to-ground insulation | Easy to perform; less sensitive to temperature than the IR test | ----
DC high potential test (DC HIPOT) | [6], [11]-[13] | Finds contamination and defects in the phase-to-ground insulation | Easy to perform; if the test does not fail, the insulation is likely to work flawlessly until the next maintenance period (more predictive character than PI and IR) | In case of failure, repair is required (destructive)
AC high potential test (AC HIPOT) | [2], [12], [13] | Finds contamination and defects in the phase-to-ground insulation | More effective than the DC HIPOT | Not as easy to perform as the DC HIPOT
Growler | [16] | Detects shorted coils, no predictive value | ---- | Specifically applied to armatures and rotors
Surge test | [2], [4], [11] | Detects worsening of the turn-to-turn insulation | Applicable to both low-voltage and high-voltage machines; the only offline test that measures the integrity of the turn insulation | Test can be destructive
Signature analysis of the terminal voltage after switch-off | [16] | Detects turn-to-turn insulation problems | Signature analysis without the influence of supply voltage unbalance | Test can only be conducted directly after switch-off
Partial discharge (offline) | [2], [1] | Detects deterioration of the turn-to-turn and phase-to-ground insulation | Good practical results | Not applicable to low-voltage machines; difficulty in interpretation of the data
Dissipation factor | [2], [14], [15] | Detects deterioration of the phase-to-ground and phase-to-phase insulation | Able to determine the cause of deterioration | Measurements on a regular basis are required in order to trend the data over time
Inductive impedance | [2] | Detects shorted turns, no predictive value | ---- | Not as easy to perform as the winding resistance test

A disadvantage is that the online monitoring techniques often require the installation of additional equipment, which has to be installed on every machine. Compared to the offline tests, it is more difficult or even impossible to detect some failure processes [6]. However, many sensorless and nonintrusive methods have recently been developed using the electrical signatures, e.g., current and voltage, such that the monitoring algorithm can reside in the motor control center or even inside motor control devices, such as the drives [58]. Therefore, the online monitoring can become nonintrusive, without the need for additional sensors and installations. The online monitoring techniques described in this survey are summarized in Table II [27]-[35].

A. HF Impedance / Turn-to-Turn Capacitance
A nonintrusive condition monitoring system using the high-frequency (HF) response of the motor is introduced in [34]. It is able to observe the aging and, thus, the deterioration of the turn-to-turn insulation by detecting small changes in the stator winding's turn-to-turn capacitance. It is shown that the turn-to-turn capacitance of the stator winding and, thus, its impedance spectrum change under the influence of different aging processes. Since it is not possible to use an impedance analyzer for the purpose of an online test, it is suggested to inject a small HF signal into the stator winding. Its frequency has to be close to the series resonance frequency of the system. The flux of the machine caused by the injected HF signal can be measured by a magnetic probe in the vicinity of the machine. The change in the phase lag between the injected signal and the measured flux is used as an indicator of a change in the resonance frequency and, thus, in the turn-to-turn capacitance, which is caused by the deterioration of the insulation. If some prior knowledge or data of the system is available, it can even be deduced how likely a failure of the insulation system is in the near future. A similar technique is introduced in two different patents [35], [36]. Two different methods to determine the insulation condition and how close it is to failure are listed. The first one requires the comparison of the impedance response to a response that is recorded after the fabrication of the motor, which can be called its birth certificate. Another method is to calculate the power that is dissipated in the insulation by either measuring the current or the voltage across the winding and using the broadband impedance response. This power is then compared to a target value which can be determined from historical data of similar motors. In contrast to the claim in [34], the use of an impedance meter is suggested in [35]. However, in [36], the measurement of the broadband impedance is accomplished by measuring voltage and current at the machine's terminals and by using Ohm's law.
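A toy version of the "birth certificate" comparison described above is sketched below: a stored baseline frequency response is compared with a new measurement and a deviation metric is flagged. The frequency grid, the magnitude values and the 10% threshold are all hypothetical illustration values, not data or thresholds from the cited patents.

# Illustrative comparison of a measured HF impedance response against a stored
# baseline ("birth certificate"). All numbers below are made-up example values.
def max_relative_deviation(baseline, measured):
    return max(abs(m - b) / b for b, m in zip(baseline, measured))

baseline_mag = [120.0, 95.0, 60.0, 42.0, 55.0, 80.0]   # |Z| at fixed test frequencies
measured_mag = [118.0, 96.0, 57.0, 36.0, 54.0, 79.0]   # later measurement on the same grid

dev = max_relative_deviation(baseline_mag, measured_mag)
print(f"max deviation = {dev:.1%} ->",
      "possible insulation deterioration" if dev > 0.10 else "within expected range")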
B. Sequence Components
Several methods based on the sequence components of the machine's impedances, currents, or voltages have been developed for the online detection of turn-to-turn faults in the stator insulation system [34]. One of the drawbacks of the methods utilizing sequence components is that only a fault, but not the change of the overall condition and thus the deterioration of the insulation system, is monitored.
1) Negative-Sequence Current: The monitoring of the negative-sequence current for fault detection is the subject of many papers [31], [34]. If there is an asymmetry introduced by a turn-to-turn fault, the negative-sequence current will change and can thus be used as an indicator for a fault.


TABLE II
DIFFERENT METHODS TO TEST THE STATOR INSULATION SYSTEM OF ELECTRICAL DRIVES
Columns: Method | Reference | Insulation tested and diagnostic value | Positive features | Negative features

Negative-sequence current | [31], [34] | Detects turn-to-turn faults | Non-invasive; methods available to take non-idealities into account | Non-idealities complicate fault detection
Sequence impedance matrix | [51]-[52] | Detects turn-to-turn faults | Non-invasive; methods available to take non-idealities into account | Non-idealities complicate fault detection
Zero-sequence voltage | [49] | Detects turn-to-turn faults | Non-invasive; methods available to take non-idealities into account | Non-idealities complicate fault detection; the neutral of the machine has to be accessible
Pendulous oscillation phenomenon | [50] | Detects turn-to-turn faults | Non-invasive, able to compensate for non-idealities | ----
Airgap flux signature / axial leakage flux | [54] | Detects turn-to-turn faults | ---- | Invasive; results strongly depend on the load
Current signature analysis | [55]-[56] | Detects turn-to-turn faults | Non-invasive; interpretation of results is subjective | Further research advised to generalize results
Vibration signature analysis | [57]-[58] | Detects turn-to-turn faults | ---- | Non-invasive; further research advised to generalize results
Online partial discharge | [7], [41]-[48] | Detects turn-to-turn faults | Good practical results | Difficulty in interpretation of the data; not applicable to low-voltage machines; additional equipment required
Ozone | [6] | Detects turn-to-turn faults | By-product of PD | Invasive (gas analysis tube or electronic instrument)
High-frequency impedance | [34]-[36] | Detects deterioration of the turn-to-turn insulation | Capable of monitoring the deterioration of the turn-to-turn insulation | Invasive (search coil); not tested widely yet
Temperature monitoring | [6], [24], [27]-[35] | Detects deterioration in the phase-to-ground and faults in the turn-to-turn insulation | Non-invasive, capable of determining the cause of deterioration | Invasive if sensors are required; a lot of data and additional information, such as ambient temperature, is required
Leakage currents | [32], [33] | Detects deterioration in the phase-to-ground and phase-to-phase insulation | Non-invasive, capable of determining the cause of deterioration | ----
Condition monitors and tagging compounds | [6], [24] | Detects faults and problems with the phase-to-ground and turn-to-turn insulation | ---- | Invasive (equipment for the detection of particles is required and chemicals have to be applied to the machine)

The major problem with this method is that not only a turn-to-turn fault contributes to the negative-sequence component of the current; supply voltage imbalances, motor and load inherent asymmetries, and measurement errors also have an effect on this quantity. The methods suggested in [35] account for those non-idealities by using the negative-sequence voltage and impedance and a database. Another way to consider the non-idealities is the use of artificial neural networks (ANNs). A method to determine the negative-sequence current due to a turn fault with the help of ANNs is proposed in [40]. The neural network is trained offline over the entire range of operating conditions. Thus, the ANN learns to estimate the negative-sequence current of the healthy machine considering all sources of asymmetry except for the asymmetry due to a turn fault. During the monitoring process, the ANN estimates the negative-sequence current based on the training under healthy conditions. This value is compared to the measured negative-sequence current. The deviation of the measured value from the estimated value is an indicator of a turn fault and even indicates the severity of the fault. Another approach using the negative-sequence current and an ANN to detect the fault, implemented in a LabVIEW environment, is proposed in [30]. The injection of an HF signal superposed on the fundamental excitation in inverter-fed machines has been suggested and examined in [31]. By using reference frame theory and digital filters, the authors show that the negative-sequence component does not depend on the frequency of the injected signal. Thus, it is possible to use a frequency that is substantially higher than that of the fundamental excitation. The application of an HF signal also minimizes the influence on the machine's operation. To compensate for non-idealities, a commissioning process during the first operation of the machine is suggested.
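The negative-sequence quantity discussed above comes from the standard symmetrical-component (Fortescue) transform of the three phase currents; the sketch below computes it from one set of phasors. The example phasor values and the detection threshold are hypothetical, and a practical monitor would additionally compensate for the supply unbalance and inherent asymmetries mentioned in the text.

# Symmetrical components of three phase-current phasors (Fortescue transform).
# Example phasors and the alarm threshold are made-up illustration values.
import cmath

A = cmath.exp(2j * cmath.pi / 3)                      # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    zero = (ia + ib + ic) / 3
    positive = (ia + A * ib + A**2 * ic) / 3
    negative = (ia + A**2 * ib + A * ic) / 3
    return zero, positive, negative

# Slightly unbalanced phase currents (amperes, as complex phasors).
ia = cmath.rect(10.0, 0.0)
ib = cmath.rect(9.2, -2 * cmath.pi / 3)
ic = cmath.rect(10.4, 2 * cmath.pi / 3)

i0, i1, i2 = sequence_components(ia, ib, ic)
ratio = abs(i2) / abs(i1)
print(f"|I2|/|I1| = {ratio:.3f} ->",
      "investigate possible turn fault" if ratio > 0.03 else "normal")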
2) Sequence Impedance Matrix: The calculation of the sequence impedance matrix under healthy conditions is the basis of an approach presented in [51], [52]. A library of the sequence impedance matrix as a function of the motor speed for a healthy machine is used during the monitoring process. The method is not sensitive to construction imperfections and supply unbalances, since these have been taken into account during the construction of the library. Another robust method with high sensitivity, using the sequence component impedance matrix, is introduced in [52]. It uses an off-diagonal

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT


121

www.iaetsd.in

INTERNATIONAL CONFERENCE ON DEVELOPMENTS IN ENGINEERING RESEARCH, ICDER - 2014

term of the sequence component impedance matrix


and is immune against supply voltage unbalance, the
slip-dependent influence of inherent motor
asymmetry, and measurement errors.
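The following sketch shows one way such a monitoring scheme could be organized: a 2 x 2 sequence impedance matrix is estimated by least squares from several voltage/current snapshots taken at the same speed, and its off-diagonal term is compared with the healthy-library value. The least-squares estimation and the simple deviation metric are assumptions for illustration, not the exact procedures of [51], [52].

    import numpy as np

    def estimate_sequence_impedance(Vp, Vn, Ip, In):
        # Vp, Vn, Ip, In: arrays of positive/negative-sequence voltage and
        # current phasors from several snapshots at (nominally) one speed.
        I = np.column_stack([Ip, In])   # n x 2, complex
        V = np.column_stack([Vp, Vn])   # n x 2, complex
        # Each snapshot satisfies v = Z i, i.e. V = I @ Z.T; solve for Z.T.
        Z_T, *_ = np.linalg.lstsq(I, V, rcond=None)
        return Z_T.T                    # 2 x 2 sequence impedance matrix

    def off_diagonal_deviation(Z, Z_healthy):
        # Deviation of the coupling (off-diagonal) term from the value held
        # in the healthy library for this speed; a large, persistent
        # deviation suggests a developing turn fault.
        return abs(Z[1, 0] - Z_healthy[1, 0])

In practice the healthy library would hold one such matrix per speed (or slip) bin, as described above.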
3) Zero-Sequence Voltage: A method utilizing the zero-sequence voltage is proposed in [53]. The algebraic sum of the line-to-neutral voltages is used as an indicator of a turn fault; ideally, this sum should be zero. The sensitivity is improved by filtering the voltage sum to remove higher-order harmonics. It is pointed out that the method is not sensitive to supply or load unbalances, and different procedures are suggested to take inherent machine imbalances into account. The main drawback of this method is that the neutral of the machine has to be accessible.
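As an illustration of the signal processing involved, the fragment below forms the voltage sum and reduces it to its fundamental-frequency component with a single-bin DFT, which is one possible realization of the filtering mentioned above; the sampling rate, fundamental frequency, and any compensation for inherent machine imbalance are assumptions.

    import numpy as np

    def zero_sequence_indicator(va, vb, vc, fs, f1):
        # va, vb, vc: sampled line-to-neutral voltages (equal-length arrays);
        # fs: sampling rate in Hz; f1: supply fundamental in Hz.
        # The sum is ideally zero for a healthy, balanced machine; higher-order
        # harmonics are rejected by evaluating the DFT only at f1.
        v_sum = np.asarray(va) + np.asarray(vb) + np.asarray(vc)
        n = np.arange(v_sum.size)
        phasor = 2.0 / v_sum.size * np.sum(v_sum * np.exp(-2j * np.pi * f1 * n / fs))
        return abs(phasor)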
C. Signature Analysis
1) Current Signature Analysis: Current signature analysis (CSA) has revolutionized the detection of broken rotor bars and cracked short-circuit rings in squirrel-cage induction motor rotors [4]. CSA can also find rotor-balance problems that lead to rotor eccentricity. In CSA, the current in one of the power cables feeding the motor is analyzed for its frequency content; specific frequencies in the current indicate the presence of defective rotor windings during normal operation of the motor. Broken rotor bars detectable by CSA can sometimes also be found through bearing vibration analysis. Motor current signature analysis is a popular method to detect broken rotor bars and air-gap eccentricity [55]. In [56], it has been shown that this technique can also be used to detect turn faults. The approach is based on the fact that the magnitudes of the stator current harmonics change after a turn fault has developed. The method for detecting a turn fault appears to be subjective, though, since the various approaches use different frequency harmonics to detect a fault. For example, in [55], it is suggested to observe the change in the third harmonic and some other frequency components. Unfortunately, the sensitivity of those components under loaded conditions is not very high, and they are also sensitive to inherent motor asymmetry and supply unbalance.
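As a sketch of the kind of processing CSA relies on, the fragment below estimates the magnitudes of selected stator-current harmonics from one sampled phase current; which harmonics to trend and what change is significant are method-specific choices (the third harmonic is the example cited for [55]), and the sampling rate and record length are assumptions.

    import numpy as np

    def current_harmonics(i_phase, fs, f1, orders=(1, 3, 5, 7)):
        # i_phase: one sampled phase current; fs: sampling rate in Hz;
        # f1: fundamental frequency in Hz. Returns {harmonic order: magnitude}.
        i_phase = np.asarray(i_phase, dtype=float)
        window = np.hanning(i_phase.size)
        spectrum = np.abs(np.fft.rfft(i_phase * window)) * 2.0 / window.sum()
        freqs = np.fft.rfftfreq(i_phase.size, d=1.0 / fs)
        return {k: spectrum[np.argmin(np.abs(freqs - k * f1))] for k in orders}

Trending, for example, the ratio of the third harmonic to the fundamental against a healthy baseline is then one way to flag a developing turn fault, subject to the load and supply-unbalance sensitivities noted above.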
2) Axial Leakage Flux: If an induction machine were perfectly balanced, there would be no axial leakage flux. Due to production imperfections, there is always a small asymmetry in the motor that causes an axial leakage flux. Since a turn fault also creates some asymmetry in the machine, and thus some axial leakage flux, monitoring this flux can be used to detect turn faults. This technique has been the topic of several publications [54]. The theoretical and practical analyses carried out show that certain frequency components of the axial leakage flux are sensitive to interturn short circuits. One of the main disadvantages of this method is its strong dependence on the load driven by the motor; the highest sensitivity is reached under full-load conditions. Another drawback is that a search coil has to be installed to detect the axial flux. A further publication [54] not only detects turn-to-turn faults but also uses the axial leakage flux to find broken rotor bars and end rings.
3) Vibration Signature Analysis: Another quantity whose signature analysis can be used to obtain information about the condition of the insulation system is the electrically excited vibration. This topic has been examined in [58]. The results show that deteriorated and faulted windings can be identified. It is indicated that the method is well suited to providing information supplementary to other monitoring techniques; further research is needed to exploit its full potential. An obvious disadvantage is the required installation of vibration sensors.
D. Temperature Monitoring
Constant monitoring of the temperature and trending it over time can be used by maintenance personnel to draw conclusions about the insulation condition [2]. In many motors, the temperature is monitored and the motor is switched off if a certain temperature is exceeded. Temperature sensors can be embedded within the stator windings, the stator core, or the frame, or can even be part of the cooling system. Different types of temperature sensors are employed, such as resistance temperature detectors or thermocouples. Recently, there has also been considerable work on temperature estimation techniques [28]-[31], which are nonintrusive and thus do not require the installation of temperature sensors. The ability to detect even small excursions in temperature enables possible insulation problems to be identified at an early stage and can thus be used to plan maintenance before a major breakdown occurs [24].
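As a simple illustration of such trending, the sketch below flags small sustained excursions of a winding-temperature reading above a slowly adapting baseline; the smoothing factor and excursion limit are arbitrary placeholders, and a practical scheme would also account for ambient temperature and load, as noted above.

    def temperature_excursions(samples, alpha=0.01, limit_deg_c=5.0):
        # samples: winding-temperature readings (deg C) taken at a fixed
        # interval. Yields (reading, excursion) whenever a reading exceeds
        # the slowly adapting baseline by more than limit_deg_c.
        baseline = None
        for t in samples:
            if baseline is None:
                baseline = t
            excursion = t - baseline
            if excursion > limit_deg_c:
                yield t, excursion
            baseline += alpha * (t - baseline)  # slow exponential update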
E. PD
A popular, reliable, and frequently used method for finding problems with the insulation system of medium- and high-voltage machines is the PD method [2], [3], [41]-[44], which can be applied online as well as offline. Unfortunately, it requires the installation of costly additional equipment. For various reasons, the method has not yet been widely applied to low-voltage machines. However, the occurrence of PD in low-voltage motors under voltage surges has been the subject of several investigations [45], [46], and the possible use of the PD method for low-voltage motors has recently been reported in [47] and [48]. Since the voltage level in low-voltage mains-fed machines is too low to induce PDs, the method is only applied to inverter-fed machines that are subject to repetitive voltage surges. The main problem in this application is that the PDs are overlapped by the voltage surges and are therefore difficult to detect. Different detection methods have been suggested, all of which entail considerable complexity and cost. For example, detection using optical sensors has been proposed; however, this does not seem very useful for motors, since the windings are at least partially hidden and some discharges will therefore not reach the optical sensor. The cost and complexity of this and other methods seem too high to justify their use on comparatively cheap low-voltage motors on a large scale. A by-product of PD that can also be used for monitoring the insulation condition is ozone [2].
V. CONCLUSION
The main objective of this paper is to evaluate existing offline and online monitoring methods for the stator winding insulation of low-voltage induction machines in order to give engineers a broad overview of recent developments in this area, to show the capabilities and limits of those methods, and to point out possible directions for future research. A comprehensive literature survey of the existing methods for low-voltage induction motor winding insulation condition monitoring and fault detection has been presented, and it has been identified that turn-to-turn faults account for the majority of induction motor winding insulation faults [4], [5]. Various online methods have been developed that are capable of identifying a turn fault even in the presence of nonidealities. The offline surge test is not only able to identify a fault but is also capable of revealing a weakness in the turn insulation prior to a fault. Despite all the progress made in the field of monitoring motor drive systems, there is still no online monitoring method that is widely applied in industry, accepted in the diagnosis community, and capable of monitoring the deterioration of the turn-to-turn insulation of low-voltage machines. Thus, based on the survey results, the authors suggest the development of an online monitoring method applicable to low-voltage machines that is capable of diagnosing the deterioration of the turn-to-turn insulation prior to a fault and is also reasonable from a cost standpoint.
Acknowledgement:
The author is thankful to the Coils & Insulation Division of BHEL Hyderabad and to HINTZ for the help extended in the preparation of this research paper.
References:
[1] H. W. Penrose, Test methods for determining the impact of motor condition on motor efficiency and reliability, Ph.D. dissertation, ALLTEST Pro, LLC, Old Saybrook, CT.
[2] G. C. Stone, E. A. Boulter, I. Culbert, and H. Dhirani, Electrical Insulation for Rotating Machines: Design, Evaluation, Aging, Testing, and Repair. Piscataway, NJ: IEEE Press, 2004.
[3] A. Siddique, G. S. Yadava, and B. Singh, A review of stator fault monitoring techniques of induction motors, IEEE Trans. Energy Convers., vol. 20, no. 1, pp. 106–114, Mar. 2005.
[4] D. E. Schump, Testing to assure reliable operation of electric motors, in Proc. IEEE Ind. Appl. Soc. 37th Annu. Petrol. Chem. Ind. Conf., Sep. 10–12, 1990, pp. 179–184.
[5] J. Geiman, DC step-voltage and surge testing of motors, Maint. Technol., vol. 20, no. 3, pp. 32–39, 2007.
[6] Improved motors for utility applications, General Elect. Co., Schenectady, NY, p. 1763-1, EPRI EL-4286, Oct. 1982.
[7] G. C. Stone, Advancements during the past quarter century in on-line monitoring of motor and generator winding insulation, IEEE Trans. Dielectr. Electr. Insul., vol. 9, no. 5, pp. 746–751, Oct. 2002.
[8] M. E. H. Benbouzid, Bibliography on induction motors faults detection and diagnosis, IEEE Trans. Energy Convers., vol. 14, no. 4, pp. 1065–1074, Dec. 1999.
[9] I. Albizu, I. Zamora, A. J. Mazon, J. R. Saenz, and A. Tapia, On-line stator fault diagnosis in low voltage induction motors, IEEE Electr. Insul. Mag., vol. 9, no. 4, pp. 7–15, Jul./Aug. 1993.
[10] A. H. Bonnett and G. C. Soukup, Cause and analysis of stator and rotor failures in three-phase squirrel-cage induction motors, IEEE Trans. Ind. Appl., vol. 28, no. 4, pp. 921–937, Jul./Aug. 1992.
[11] Users Manual: Digital Surge/DC HiPot/Resistance Tester, Models D3R/D6R/D12R, Baker Instrument Co., Fort Collins, CO, 2005.
[12] G. C. Stone, Recent important changes in IEEE motor and generator winding insulation diagnostic testing standards, IEEE Trans. Ind. Appl., vol. 41, no. 1, pp. 91–100, Jan./Feb. 2005.
[13] C. Lanham, Understanding the Tests That Are Recommended for Electric Motor Predictive Maintenance. Fort Collins, CO: Baker Instrument Co.
[14] J. Yang, S. B. Lee, J. Yoo, S. Lee, Y. Oh, and C. Choi, A stator winding insulation condition monitoring technique for inverter-fed machines, IEEE Trans. Power Electron., vol. 22, no. 5, pp. 2026–2033, Sep. 2007.
[15] H. D. Kim, J. Yang, J. Cho, S. B. Lee, and J.-Y. Yoo, An advanced stator winding insulation quality assessment technique for inverter-fed machines, IEEE Trans. Ind. Appl., vol. 44, no. 2, pp. 555–564, Mar./Apr. 2008.
[16] S. Nandi and H. A. Toliyat, Novel frequency-domain-based technique to detect stator interturn faults in induction machines using stator-induced voltages after switch-off, IEEE Trans. Ind. Appl., vol. 38, no. 1, pp. 101–109, Jan./Feb. 2002.
[17] J. A. Oliver, H. H. Woodson, and J. S. Johnson, A turn insulation test for stator coils, IEEE Trans. Power App. Syst., vol. PAS-87, no. 3, pp. 669–678, Mar. 1968.
[18] P. Chowdhuri, Fault detection in three-phase rotating machines, IEEE Trans. Power App. Syst., vol. PAS-91, no. 1, pp. 160–167, Jan. 1972.
[19] E. Wiedenbrug, G. Frey, and J. Wilson, Impulse testing and turn insulation deterioration in electric motors, in Conf. Rec. Annu. IEEE Pulp Paper Ind. Tech. Conf., Jun. 16–20, 2003, pp. 50–55.
[20] E. Wiedenbrug, G. Frey, and J. Wilson, Impulse testing as a predictive maintenance tool, in Proc. 4th IEEE Int. Symp. Diagn. Elect. Mach., Power Electron. Drives, Aug. 24–26, 2003, pp. 13–19.
[21] B. K. Gupta, B. A. Lloyd, and D. K. Sharma, Degradation of turn insulation in motor coils under repetitive surges, IEEE Trans. Energy Convers., vol. 5, no. 2, pp. 320–326, Jun. 1990.
[22] B. Gupta, Risk in surge testing of turn insulation in windings of rotating machines, in Proc. Elect. Insul. Conf. Elect. Manuf. Coil Winding Technol., Sep. 23–25, 2003, pp. 459–462.
[23] J. H. Dymond, M. K. W. Stranges, and N. Stranges, The effect of surge testing on the voltage endurance life of stator coils, IEEE Trans. Ind. Appl., vol. 41, no. 1, pp. 120–126, Jan./Feb. 2005.
[24] M. Melfi, A. M. J. Sung, S. Bell, and G. L. Skibinski, Effect of surge voltage risetime on the insulation of low-voltage machines fed by PWM converters, IEEE Trans. Ind. Appl., vol. 34, no. 4, pp. 766–775, Jul./Aug. 1998.
[25] H. D. Kim and Y. H. Ju, Comparison of off-line and on-line partial discharge for large motors, in Conf. Rec. IEEE Int. Symp. Elect. Insul., Apr. 7–10, 2002, pp. 27–30.
[26] X. Ma, X. Ma, B. Yue, W. Lu, and H. Xie, Study of aging characteristics of generator stator insulation based on temperature spectrum of dielectric dissipation factor, in Proc. 7th Int. Conf. Prop. Appl. Dielectr. Mater., Jun. 1–5, 2003, vol. 1, pp. 294–297.
[27] S.-B. Lee, T. G. Habetler, R. G. Harley, and D. J. Gritter, An evaluation of model-based stator resistance estimation for induction motor stator winding temperature monitoring, IEEE Trans. Ind. Appl., vol. 34, no. 4, pp. 766–775, Jul./Aug. 1998.
[28] S.-B. Lee and T. G. Habetler, An online stator winding resistance estimation technique for temperature monitoring of line-connected induction machines, IEEE Trans. Ind. Appl., vol. 39, no. 3, pp. 685–694, May/Jun. 2003.
[29] Z. Gao, T. G. Habetler, R. G. Harley, and R. S. Colby, A sensorless adaptive stator winding temperature estimator for mains-fed induction machines with continuous-operation periodic duty cycles, in Conf. Rec. 41st IEEE IAS Annu. Meeting, Oct. 2006, vol. 1, pp. 448–455.
[30] F. Briz, M. W. Degner, J. M. Guerrero, and A. B. Diez, Temperature estimation in inverter fed machines using high frequency carrier signal injection, in Conf. Rec. 42nd IEEE IAS Annu. Meeting, Sep. 23–27, 2007, pp. 2030–2037.
[31] R. Beguenane and M. E. H. Benbouzid, Induction motors thermal monitoring by means of rotor resistance identification, IEEE Trans. Energy Convers., vol. 14, no. 3, pp. 566–570, Sep. 1999.
[32] S.-B. Lee, K. Younsi, and G. B. Kliman, An online technique for monitoring the insulation condition of AC machine stator windings, IEEE Trans. Energy Convers., vol. 20, no. 4, pp. 737–745, Dec. 2005.
[33] S.-B. Lee, J. Yang, K. Younsi, and R. M. Bharadwaj, An online groundwall and phase-to-phase insulation quality assessment technique for AC-machine stator windings, IEEE Trans. Ind. Appl., vol. 42, no. 4, pp. 946–957, Jul./Aug. 2006.
[34] P. Werynski, D. Roger, R. Corton, and J. F. Brudny, Proposition of a new method for in-service monitoring of the aging of stator winding insulation in AC motors, IEEE Trans. Energy Convers., vol. 21, no. 3, pp. 673–681, Sep. 2006.
[35] M. W. Kending and D. N. Rogovin, Method of conducting broadband impedance response tests to predict stator winding failure, U.S. Patent 6 323 658, Nov. 27, 2001.
[36] S. Williamson and K. Mirzoian, Analysis of cage induction motors with stator winding faults, IEEE Trans. Power App. Syst., vol. PAS-104, no. 7, pp. 1838–1842, Jul. 1985.
[37] Y. Zhongming and W. Bin, Simulation of electrical faults of three phase induction motor drive system, in Proc. 32nd IEEE PESC, Jun. 17–21, 2001, vol. 1, pp. 75–80.
[38] O. A. Mohammed, N. Y. Abed, and S. Ganu, Modeling and characterization of induction motor internal faults using finite-element and discrete wavelet transforms, IEEE Trans. Magn., vol. 42, no. 10, pp. 3434–3436, Oct. 2006.
[39] R. M. Tallam, T. G. Habetler, and R. G. Harley, Transient model for induction machines with stator winding turn faults, IEEE Trans. Ind. Appl., vol. 38, no. 3, pp. 632–637, May/Jun. 2002.
[40] S. Bachir, S. Tnani, J.-C. Trigeassou, and G. Champenois, Diagnosis by parameter estimation of stator and rotor faults occurring in induction machines, IEEE Trans. Ind. Electron., vol. 53, no. 3, pp. 963–973, Jun. 2006.
[41] G. Stone and J. Kapler, Stator winding monitoring, IEEE Ind. Appl. Mag., vol. 4, no. 5, pp. 15–20, Sep./Oct. 1998.
[42] G. C. Stone, B. A. Lloyd, S. R. Campbell, and H. G. Sedding, Development of automatic, continuous partial discharge monitoring systems to detect motor and generator partial discharges, in Proc. IEEE Int. Elect. Mach. Drives, May 18–21, 1997, pp. MA2/3.1–MA2/3.3.
[43] G. C. Stone, S. R. Campbell, B. A. Lloyd, and S. Tetreault, Which inverter drives need upgraded motor stator windings, in Proc. Ind. Appl. Soc. 47th Annu. Petrol. Chem. Ind. Conf., Sep. 11–13, 2000, pp. 149–154.
[44] S. R. Campbell and G. C. Stone, Investigations into the use of temperature detectors as stator winding partial discharge detectors, in Conf. Rec. IEEE Int. Symp. Elect. Insul., Jun. 11–14, 2006, pp. 369–375.
[45] M. Kaufhold, G. Borner, M. Eberhardt, and J. Speck, Failure mechanism of the interturn insulation of low voltage electric machines fed by pulse-controlled inverters, IEEE Electr. Insul. Mag., vol. 12, no. 5, pp. 9–16, Sep./Oct. 1996.
[46] F. W. Fetherston, B. F. Finlay, and J. J. Russell, Observations of partial discharges during surge comparison testing of random wound electric motors, IEEE Trans. Energy Convers., vol. 14, no. 3, pp. 538–544, Sep. 1999.
[47] N. Hayakawa and H. Okubo, Partial discharge characteristics of inverter-fed motor coil samples under ac and surge voltage conditions, IEEE Electr. Insul. Mag., vol. 21, no. 1, pp. 5–10, Jan./Feb. 2005.
[48] H. Okubo, N. Hayakawa, and G. C. Montanari, Technical development on partial discharge measurement and electrical insulation techniques for low voltage motors driven by voltage inverters, IEEE Trans. Dielectr. Electr. Insul., vol. 14, no. 6, pp. 1516–1530, Dec. 2007.
[49] M. A. Cash, T. G. Habetler, and G. B. Kliman, Insulation failure prediction in AC machines using line-neutral voltages, IEEE Trans. Ind. Appl., vol. 34, no. 6, pp. 1234–1239, Nov./Dec. 1998.
[50] B. Mirafzal, R. J. Povinelli, and N. A. O. Demerdash, Interturn fault diagnosis in induction motors using the pendulous oscillation phenomenon, IEEE Trans. Energy Convers., vol. 21, no. 4, pp. 871–882, Dec. 2006.
[51] J. L. Kohler, J. Sottile, and F. C. Trutt, Condition monitoring of stator windings in induction motors. I. Experimental investigation of the effective negative-sequence impedance detector, IEEE Trans. Ind. Appl., vol. 38, no. 5, pp. 1447–1453, Sep./Oct. 2002.
[52] S.-B. Lee, R. M. Tallam, and T. G. Habetler, A robust, on-line turn-fault detection technique for induction machines based on monitoring the sequence component impedance matrix, IEEE Trans. Power Electron., vol. 18, no. 3, pp. 865–872, May 2003.
[53] M. A. Cash, T. G. Habetler, and G. B. Kliman, Insulation failure prediction in AC machines using line-neutral voltages, IEEE Trans. Ind. Appl., vol. 34, no. 6, pp. 1234–1239, Nov./Dec. 1998.
[54] B. Ayhan, M.-Y. Chow, and M.-H. Song, Multiple discriminant analysis and neural-network-based monolith and partition fault-detection schemes for broken rotor bar in induction motors, IEEE Trans. Ind. Electron., vol. 53, no. 4, pp. 1298–1308, Jun. 2006.
[55] G. Joksimovic and J. Penman, The detection of interturn short circuits in the stator windings of operating motors, in Proc. 24th Annu. Conf. IEEE Ind. Electron. Soc. (IECON), Aug. 31–Sep. 4, 1998, vol. 4, pp. 1974–1979.
[56] D. Kostic-Perovic, M. Arkan, and P. Unsworth, Induction motor fault detection by space vector angular fluctuation, in Conf. Rec. IEEE IAS Annu. Meeting, Oct. 8–12, 2000, vol. 1, pp. 388–394.
[57] N. Arthur and J. Penman, Induction machine condition monitoring with higher order spectra, IEEE Trans. Ind. Electron., vol. 47, no. 5, pp. 1031–1041, Oct. 2000.
[58] F. C. Trutt, J. Sottile, and J. L. Kohler, Condition monitoring of induction motor stator windings using electrically excited vibrations, in Conf. Rec. 37th IEEE IAS Annu. Meeting, Oct. 13–18, 2002, vol. 4, pp. 2301–2305.
DeetiSreeKanth is presently pursuing the third year of his B.Tech in Electrical and Electronics Engineering at JNTUH Karimnagar. He is looking for an opportunity in an organization where he can utilize his technical skills effectively, contributing to the growth of the organization and thereby to his own professional growth. His research interests include electric machine diagnostics, motor drives, power electronics, and the control of electrical machines.