International Journal of Engineering Research and General Science
Volume 2, Issue 5, August-September 2014
ISSN 2091-2730
www.ijergs.org


Table of Contents

Chief Editor Board
Message from Associate Editor
Research Papers Collection

CHIEF EDITOR BOARD
1. Dr Gokarna Shrestha, Professor, Tribhuwan University, Nepal
2. Dr Chandrasekhar Putcha, Outstanding Professor, University Of California, USA
3. Dr Shashi Kumar Gupta, Professor, IIT Roorkee, India
4. Dr K R K Prasad, K.L.University, Professor Dean, India
5. Dr Kenneth Derucher, Professor and Former Dean, California State University,Chico, USA
6. Dr Azim Houshyar, Professor, Western Michigan University, Kalamazoo, Michigan, USA
7. Dr Sunil Saigal, Distinguished Professor, New Jersey Institute of Technology, Newark, USA
8. Dr Hota GangaRao, Distinguished Professor and Director, Center for Integration of Composites into
Infrastructure, West Virginia University, Morgantown, WV, USA
9. Dr Bilal M. Ayyub, professor and Director, Center for Technology and Systems Management,
University of Maryland College Park, Maryland, USA
10. Dr Sarh BENZIANE, University Of Oran, Associate Professor, Algeria
11. Dr Mohamed Syed Fofanah, Head, Department of Industrial Technology & Director of Studies, Njala
University, Sierra Leone
12. Dr Radhakrishna Gopala Pillai, Honorary professor, Institute of Medical Sciences, Kirghistan
13. Dr P.V.Chalapati, Professor, K.L.University, India
14. Dr Ajaya Bhattarai, Tribhuwan University, Professor, Nepal
ASSOCIATE EDITOR IN CHIEF
1. Er. Pragyan Bhattarai , Research Engineer and program co-ordinator, Nepal
ADVISORY EDITORS
1. Mr Leela Mani Poudyal, Chief Secretary, Nepal government, Nepal
2. Mr Sukdev Bhattarai Khatry, Secretary, Central Government, Nepal
3. Mr Janak Shah, Secretary, Central Government, Nepal
4. Mr Mohodatta Timilsina, Executive Secretary, Central Government, Nepal
5. Dr. Manjusha Kulkarni, Asso. Professor, Pune University, India
6. Er. Ranipet Hafeez Basha (Phd Scholar), Vice President, Basha Research Corporation, Kumamoto, Japan
Technical Members
1. Miss Rekha Ghimire, Research Microbiologist, Nepal section representative, Nepal
2. Er. A.V. A Bharat Kumar, Research Engineer, India section representative and program co-ordinator, India
3. Er. Amir Juma, Research Engineer ,Uganda section representative, program co-ordinator, Uganda
4. Er. Maharshi Bhaswant, Research scholar( University of southern Queensland), Research Biologist, Australia

Message from the Associate Editor-in-Chief
Let me first of all take this opportunity to wish all our readers a very happy, peaceful and
prosperous year ahead.
This is the Fifth Issue of the Second Volume of International Journal of Engineering Research and
General Science. A total of 90 research articles are published and I sincerely hope that each one
of these provides some significant stimulation to a reasonable segment of our community of
readers.
In this issue, we have focused mainly on recent technology and its implementation in research. We also welcome more research-oriented ideas in our upcoming issues.
The authors' response to this issue was really inspiring for us. We received papers from more than 15 countries, but our technical team and editor members accepted only a small number of research papers for publication. We have provided editor feedback for every rejected as well as accepted paper, so that authors can work on the weaknesses and we may accept their papers in the near future. We apologize for the inconvenience caused to rejected authors, but I hope our editors' feedback helps you discover more horizons for your research work.
I would like to take this opportunity to thank each and every writer for their contribution, and to thank the entire International Journal of Engineering Research and General Science (IJERGS) technical team and editor members for their hard work for the development of research in the world through IJERGS.
Last, but not least, my special thanks and gratitude go to all our fellow friends and supporters. Your help is greatly appreciated. I hope our readers will find our papers educational and entertaining as well. Our team has done a good job; however, this issue may possibly have some drawbacks, and therefore constructive suggestions for further improvement will be warmly welcomed.



Er. Pragyan Bhattarai,
Assistant Editor-in-Chief, P&R,
International Journal of Engineering Research and General Science
E-mail: Pragyan@ijergs.org
Contact no.: +977 9841549341


The Influence of Length of the Stem of Klutuk Banana (Musa Balbisiana)
Toward Tensile Strength: A Review of the Mechanical Properties
Achmad Choerudin¹, Singgih Trijanto¹
¹Lecturer, Academy of Technology AUB Surakarta, Central Java, Indonesia
Abstract - This study tests the effect of fibre length on the tensile strength and strain of klutuk banana stem fibre specimens, with fibre lengths of 10 cm, 7 cm, 5 cm and 3 cm. The results showed tensile strengths of 25.38 N/mm² for the 10 cm specimen, 14.47 N/mm² for the 7 cm specimen, 17.27 N/mm² for the 3 cm specimen and 12.08 N/mm² for the 5 cm specimen. The strain results were 20.14 for the 10 cm specimen, 9.95 for 7 cm, 5.49 for 5 cm and 5.39 for 3 cm. The conclusions of this study are: (1) longer banana stem fibres in the composite further improve the tensile strength of the specimen, when not influenced by other factors; (2) longer banana stem fibres show smaller elongation than shorter fibres; and (3) longer banana stem fibres give higher values of strain.
Keywords: fibre length, banana stem, tensile strength

INTRODUCTION
Waste from the banana crop, cleared after harvest, is agricultural waste with a potential that has not been much used. A composite is a material formed from a combination of two or more materials, with mechanical properties different from those of its constituent materials. A composite consists of two parts: the matrix, as binder and protector of the composite, and the filler. Natural fibre as composite filler is a great alternative for a wide range of polymer composites because of its advantages over synthetic fibres: natural fibres are easily obtained at low prices, are easy to process, have low density, are environmentally friendly and are biodegradable (Kusumastuti, 2009).
Fibre obtained from the banana stem has good mechanical properties. Banana stem fibre has a density of 1.35 g/cm³, cellulose content of 63-64%, hemicellulose of 20% and lignin of 5%, with an average tensile strength of 600 MPa, an average tensile modulus of 17.85 GPa and elongation of 3.36% (Lokantara, 2007). The diameter of banana stem fibre is reported as 5.8%, whereas the length is around 30.92-40.92 cm.
Suwanto (2006) observed the influence of post-curing temperature on the tensile strength of epoxy resin composites reinforced with woven banana fibre. The maximum tensile strength, 42.82 MPa, occurred in composites post-cured at 100 °C, an increase in tensile strength of 40.26% compared with the composite without heating. The tensile strength of the composite is smaller than the tensile strength of either of its two constituent materials. This can be caused by a high degree of porosity in the fibre composite, non-uniform conditions, the onset of delamination between fibre and matrix, and low surface bonding between fibre and matrix.
Surani (2010) examined the utilization of banana stems as raw material for fibre board with thermo-mechanical treatment, carried out through mat formation by the wet process. The best fibre board quality was obtained with a boiling treatment of the flakes at 100 °C, without the use of synthetic adhesives. Syaiful Anwar (2010) examined banana stems to determine the influence of banana stem fibre lengths of 10 mm, 20 mm, 30 mm and 40 mm on the tensile strength of banana stem fibre composites with
a polyester matrix. The fibre studied was banana stem fibre at 50% volume fraction, with fibre lengths of 10 mm, 20 mm, 30 mm and 40 mm.
The standard reference for the manufacture and testing of the specimens was ASTM D 638-03 type I for the tensile test. The study concluded that specimens with longer fibres are more durable under tensile load, because long fibres have a more perfect structure, aligned along the fibre axis, with fewer internal defects than short fibres. Evi Indrawati (2010) states that the banana stem is the part of the banana plant consisting of a collection of leaf sheaths growing erect. Fibre obtained from the banana is a strong fibre and has cellular tissue with interconnected pores. Against this background, the problem addressed in this research is the influence of klutuk banana fibre length on tensile strength, which has not yet been explored. The expected result is a qualified and good natural material.

LITERATURE REVIEW
Banana Fibre
Banana stem fibre is of good quality and is a potential alternative material for use as a filler in composites of polyvinyl chloride (PVC). Banana stem waste can thus be used as a fibre source with economic value. Rahman (2006) states that the ratio between the fresh weights of the leaves, stems and fruit of the banana is 63%, 14% and 23% respectively. The banana stem has a specific weight of 0.29 g/cm³, a fibre length of 4.20-5.46 mm and a lignin content of 33.51% (Syafrudin, 2004).
Klutuk Banana
The klutuk banana is a distinctive kind, not because it tastes sweet, but because its flesh is filled with black seeds. The seeds have a rough texture and a hard shell. The klutuk banana tree grows up to 3 metres tall, with a trunk circumference ranging from 60 cm to 70 cm. The stem is green, with or without patches of spots. The klutuk banana leaf is usually about 2 metres long and 0.6 metres wide. Examined in detail, its leaves have a thin wax layer, are distinctive, and do not tear easily like other types of banana leaf.
Composite
A composite is a combination of two or more different materials, made to acquire properties that are better than those of the individual constituents (Fajriyanto and Firdaus, 2007). Composites consist of a matrix as the continuous phase and a filler as the second phase, separated by an interface. The properties of the resulting composite depend on the matrix and filler materials used: composites made with different materials will have different properties, depending on the type of filler and matrix (Hanafi, 2004).
Tensile Testing
Tensile testing is used to determine the mechanical properties of a material, such as the maximum tensile strength. Test objects are solid and come in cylindrical, sheet, plate and pipe shapes in a variety of sizes. A specimen is gripped between the two grips of the test machine, which is equipped with various controls so that specimens can be tested.

MATERIALS AND METHODS
The tools and materials used in this research were a Universal Testing Machine, with a maximum capacity of 500 kg and automated test control, and banana stem fibre for the composite. The research method uses hand lay-up, including the preparation of
moulds, coating, alignment and drying. The fibre was taken by selecting banana stems that were still in good condition, moist and starting to dry; the leaves were discarded and the stems that had dried were cut; the outer skin of the stem was released; and drying was carried out in a place not exposed to direct sunlight.
Samples were then created. Composite manufacture refers to the ASTM D-3039 standard. A homogeneous mixture was poured into the tensile-test mould and levelled with a brush.








Figure 2: Sample of the tensile strength test
For the tensile strength test samples, the banana stem fibres were arranged lengthwise in the mould, parallel to the mould sides. These steps were repeated for samples 1, 2, 3 and 4, with fibre length variations of 10 cm, 7 cm, 5 cm and 3 cm. After the samples were made, they were drained for 24 hours, and the mechanical properties of the composite were then characterized by tensile strength testing.

RESULTS AND DISCUSSION
The Results
This research used a Universal Testing Machine. The tensile test specimens were made as composite plates manufactured by the hand lay-up method. The geometry and dimensions of the tensile test specimens follow the ASTM D 3039 standard. The test set-up for the static tensile tests was adapted to the specimen holder of the tension testing machine. Tensile loading was applied parallel to the axial axis and is assumed to be uniform at every point of testing.
The tensile test specimen holders are designed in accordance with the test machine, as plate-shaped holders. The holder must grip the specimen firmly so that no slip occurs. The tensile measurement is based on Hooke's law, which states that a material behaves elastically, showing a linear relationship between stress and strain, up to the elastic limit. The variables observed in this study are the tensile strength, giving the maximum tensile stress, and the change in length, which indicates the strain that occurs.
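For reference, the quantities reported in the tables below follow from the standard definitions (a brief sketch of the relations the analysis relies on; Young's modulus $E$ is not reported in this study):

\[
\sigma = \frac{F}{A}, \qquad \varepsilon = \frac{\Delta L}{L_0}, \qquad \sigma = E\,\varepsilon \quad \text{(within the elastic region)}
\]

where $F$ is the maximum force, $A$ the cross-sectional area of the specimen, $L_0$ the initial gauge length and $\Delta L$ the extension.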

Table 1: The Results of Tensile Testing

| No. | Specimen | Max Area (mm²) | Max Force (N) | Tensile Strength (N/mm²) | Break Force (N) |
|-----|----------|----------------|---------------|--------------------------|-----------------|
| 1.  | 10 cm    | 25.000         | 1935.6        | 38.71                    | 689.90          |
|     |          | 124.400        | 1311.6        | 10.30                    | 585.97          |
|     |          | 124.400        | 1932.9        | 15.17                    | 910.62          |
|     |          | 124.400        | 1598.5        | 12.55                    | 759.11          |
|     |          | 124.400        | 1317.4        | 10.77                    | 600.88          |
| 2.  | 7 cm     | 25.000         | 849.9         | 17.00                    | 420.45          |
|     |          | 124.400        | 1075.7        | 8.44                     | 400.64          |
|     |          | 124.400        | 1570.4        | 12.33                    | 777.04          |
|     |          | 124.400        | 1165.3        | 9.15                     | 503.61          |
|     |          | 124.400        | 935.7         | 7.35                     | 432.40          |
| 3.  | 5 cm     | 25.000         | 937.6         | 18.75                    | 468.20          |
|     |          | 124.400        | 954.7         | 7.49                     | 439.86          |
|     |          | 124.400        | 547.4         | 4.30                     | 272.35          |
|     |          | 124.400        | 557.6         | 4.38                     | 261.76          |
|     |          | 124.400        | 793.9         | 6.23                     | 378.33          |
| 4.  | 3 cm     | 25.000         | 1280.4        | 25.61                    | 591.94          |
|     |          | 124.400        | 1175.8        | 9.23                     | 467.86          |
|     |          | 124.400        | 981.8         | 7.71                     | 489.15          |
|     |          | 124.400        | 1192.5        | 9.36                     | 514.65          |
|     |          | 124.400        | 1024.2        | 8.04                     | 445.49          |

(Source: primary data, 2014)











(Source: primary data, 2014)
Figure 1: Comparison of Tensile Strength for the 10 cm, 7 cm, 5 cm and 3 cm Specimens
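As a quick cross-check of the tables (a minimal sketch; the force and area values are copied from the 124.400 mm² rows above, and the two source tables are not fully consistent with each other), tensile strength is simply the maximum force divided by the cross-sectional area:

```python
# Tensile strength = max force / cross-sectional area, in N/mm^2 (= MPa).
def tensile_strength(max_force_n: float, area_mm2: float) -> float:
    return max_force_n / area_mm2

# (specimen, max force N, area mm^2) -- second row of each group in Table 1
samples = [("10 cm", 1311.6, 124.400), ("7 cm", 1075.7, 124.400),
           ("5 cm", 954.7, 124.400), ("3 cm", 1175.8, 124.400)]

for label, force, area in samples:
    print(f"{label}: {tensile_strength(force, area):.2f} N/mm^2")
# -> 10.54, 8.65, 7.67, 9.45: these reproduce Table 2's strength column,
#    while Table 1 lists slightly different values for the same rows.
```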

Discussion
The test results showed composite tensile strengths of 25.38 N/mm² for the 10 cm specimen, 14.47 N/mm² for the 7 cm specimen, 17.27 N/mm² for the 3 cm specimen and 12.08 N/mm² for the 5 cm specimen, the 10 cm specimen being the highest. The extension and strain results are, in order: 20.14 for the 10 cm specimen, 9.95 for 7 cm, 5.49 for 5 cm and 5.39 for 3 cm.

Table 2: Analysis of Tensile Strength Results

| No. | Specimen | Max Area (mm²) | Max Force (N) | Tensile Strength (N/mm²) | Break Force (N) | Elongation | Strain |
|-----|----------|----------------|---------------|--------------------------|-----------------|------------|--------|
| 1.  | 10 cm    | 25.000         | 1935.6        | 77                       | 689.90          | 1.26       | 20.14  |
|     |          | 124.400        | 1311.6        | 10.54                    | 585.97          | 1.27       |        |
|     |          | 124.400        | 1932.9        | 15.53                    | 910.62          | 1.26       |        |
|     |          | 124.400        | 1598.5        | 12.85                    | 759.11          | 1.25       |        |
|     |          | 124.400        | 1317.4        | 11.02                    | 600.88          | 1.27       |        |
| Mean (1) |      |                |               | 25.38                    |                 | 1.26       |        |
| 2.  | 7 cm     | 25.000         | 849.9         | 33.99                    | 420.45          | 1.42       | 9.95   |
|     |          | 124.400        | 1075.7        | 8.65                     | 400.64          | 1.43       |        |
|     |          | 124.400        | 1570.4        | 12.63                    | 777.04          | 1.47       |        |
|     |          | 124.400        | 1165.3        | 9.37                     | 503.61          | 1.46       |        |
|     |          | 124.400        | 935.7         | 7.52                     | 432.40          | 1.48       |        |
| Mean (2) |      |                |               | 14.43                    |                 | 1.45       |        |
| 3.  | 5 cm     | 25.000         | 937.6         | 37.5                     | 468.20          | 2.0        | 5.49   |
|     |          | 124.400        | 954.7         | 7.67                     | 439.86          | 2.1        |        |
|     |          | 124.400        | 547.4         | 4.40                     | 272.35          | 2.3        |        |
|     |          | 124.400        | 557.6         | 4.48                     | 261.76          | 2.4        |        |
|     |          | 124.400        | 793.9         | 6.38                     | 378.33          | 2.1        |        |
| Mean (3) |      |                |               | 12.08                    |                 | 2.2        |        |
| 4.  | 3 cm     | 25.000         | 1280.4        | 51.23                    | 591.94          | 3.2        | 5.39   |
|     |          | 124.400        | 1175.8        | 9.45                     | 467.86          | 3.1        |        |
|     |          | 124.400        | 981.8         | 7.89                     | 489.15          | 3.5        |        |
|     |          | 124.400        | 1192.5        | 9.58                     | 514.65          | 3.3        |        |
|     |          | 124.400        | 1024.2        | 8.23                     | 445.49          | 3.2        |        |
| Mean (4) |      |                |               | 17.27                    |                 | 3.2        |        |

(Source: primary data, 2014)








(Source: primary data, 2014)
Figure 2: Comparison of Strain, Elongation and Stress for the 10 cm, 7 cm, 5 cm and 3 cm Specimens

Based on these test results, longer banana stem fibres give higher tensile stresses, although there is still a discrepancy between the 5 cm and 3 cm specimens; this is due to factors outside the testing, such as imperfections in specimen manufacture and differences in specimen density fractions. From the strain that occurs, it can be seen that longer banana stem fibres further increase the strain of the specimens, so the fibre length factor affects both the stress and the strain values. In terms of elongation, fibre length is inversely proportional to the extension that takes place: longer fibres give smaller elongation compared with short fibres.

CONCLUSION
From the research that has been done, it can be concluded regarding tensile strength that:
1. Longer banana stem fibres in the composite further improve the tensile strength of the specimen, when not influenced by other factors.
2. Longer banana stem fibres show increasingly small elongation compared with shorter fibres.
3. Longer banana stem fibres give higher values of strain.

ACKNOWLEDGEMENT
1. Directorate General of Higher Education, Ministry of Education and Culture of the Republic of Indonesia, under the Penelitian Dosen Pemula (Beginner Lecturer Research) grant scheme, 2014.
2. Laboratory of Sciences and Laboratory of Materials Technology, Sebelas Maret University, Surakarta, Indonesia, 2014.
3. Academy of Technology AUB Surakarta, Central Java, Indonesia.

REFERENCES:
[1] ASTM D 3039, 2005, Standard Test Method for Tensile Properties of Plastics, American Society for Testing and Materials, Philadelphia, PA.
[2] Bramayanto, A., 2008, Pengaruh Konsentrasi terhadap Sifat Mekanik Material Komposit Poliester Serat Alam, Fakultas
Teknik, University of Indonesia, Indonesia.
[3] Fajriyanto dan Firdaus, F., 2007. Karakteristik Mekanik Panel Dinding dari Komposit Sabut Kelapa (Coco Fiber) - Sampah
Plastik (Thermoplastics), Logika, Vol. 4, No. 1, Januari 2007, Fakultas Teknik Sipil dan Perencanaan UII Yogyakarta
[4] Hanafi, I., 2004. Komposit Polimer Diperkuat Pengisi dan Gentian Pendek Semula Jadi, Universiti Sains, Malaysia.
[5] Hardoyo, K., 2008, Karakterisasi Sifat Mekanis Komposit Partikel SiO₂ dengan Matrik Resin Polyester, Tesis FMIPA, Program Studi Ilmu Material, University of Indonesia, Indonesia.
[6] Kusumastuti, A., 2009, Aplikasi Serat Sisal sebagai Komposit Polimer, Jurusan Teknologi Jasa dan Produksi, Universitas Negeri
Semarang, Jurnal Kompetensi Teknik Vol. 1, No. 1, 27 November 2009, Indonesia.
[7] Lokantara, P., 2012, Analisis Kekuatan Impact Komposit Polyester-Serat Tapis Kelapa dengan Variasi Panjang dan Fraksi
Volume Serat yang diberi Perlakuan NaOH, Fakultas Teknik, Universitas Udayana, Kampus Bukit Jimbaran, Bali, Indonesia.
[8] Rahman, H., 2006. Pembuatan Pulp dari Batang Pisang Uter (Musa paradisiaca Linn. var uter) Pascapanen dengan Proses Soda.
Fakultas Kehutanan. Yogyakarta : Universitas Gadjah Mada, Indonesia.
[9] Syafrudin, 2004. Pengaruh Konsentrasi Larutan dan Waktu Pemasakan Terhadap Rendemen dan Sifat Fisis Pulp Batang
Pisang Kepok (Musa spp) Pascapanen. Fakultas Kehutanan. Yogyakarta: Universitas Gadjah Mada, Indonesia.
[10] Schwartz, MM., 1984, Composite Materials Handbook, McGraw-Hill Book Co, New York.
[11] Surrani, L., 2010, Pemanfaatan Batang Pisang (Musa Sp.) sebagai Bahan Baku Papan Serat dengan Perlakuan Termo-
Mekanis, Balai Penelitian Kehutanan, Manado, Indonesia.
[12] Suwanto, B.,2006, Pengaruh Temperatur Post-Curing terhadap Kekuatan Tarik Komposit Epoksi Resin yang diperkuat Woven
Serat Pisang, Jurusan Teknik Sipil Politeknik Negeri Semarang, Semarang, Indonesia
Technical Competency of Engineer Expert in Brazil and the USA approach
Alexandre A. G. Silva¹, Pedro L. P. Sanchez²
¹Lawyer, Master's and Ph.D. student in Electrical Engineering at the Polytechnic School of the University of São Paulo. Auditor and responsible for legal affairs of the Innovation Agency of the Federal University of ABC.
alexandre.silva@ufabc.edu.br, prof.alealberto@gmail.com
²Lawyer, electrical engineer, Ph.D. and Associate Professor in Electrical Engineering at the Polytechnic School of the University of São Paulo.
pedro.sanchez@poli.usp.br
Abstract — This article discusses the system for choosing experts, especially engineering experts, in Brazil and the United States. Although the legal systems are different, there are common issues that have different solutions in the two countries. First, the Brazilian legal system is presented and the way experts are chosen in the judiciary is described. Next, the American legal system and how experts are chosen in that system are described. Possible solutions to the problems of the Brazilian system, based on the American system, are then pointed out.

Keywords — engineer expert; technical competency; forensic; Brazilian judiciary; legal system; forensic science
1. INTRODUCTION
The purpose of this paper is to compare the system adopted by Brazilian courts with the system adopted by the
United States for the use of legal experts.
Before discussing reforms based on American common law procedures, this article examines the role of the expert in the Brazilian civil law system, especially in the case of engineering.
American procedures differ in many respects from the Brazilian system for choosing experts and for how experts act in court proceedings. These differences become very important when judges in Brazil have to choose an expert who will do his job using scientific methods.
Although the legal systems differ, the issues related to the "quality" of the applied science, as well as professional qualification, are common concerns in both countries.
In this regard, the United States, in a recent pioneering report by the National Academy of Sciences (NAS), acknowledged that part of forensic science is not based on established science. The report notes that many disciplines, such as hair microscopy, bite mark comparison, fingerprint analysis, firearms testing and tool mark analysis, were developed just to solve criminal cases, being used in the context of individual cases with significant variations in research and expertise. These have not gone through rigorous experimental scrutiny, as there are no standards in the United States or anywhere else that can validate these methods consistently, with the sole exception of DNA testing. [1]
2. BRAZIL'S LEGAL SYSTEM
The forensic engineer is a professional engineer who deals with the engineering aspects of legal problems. Activities
associated with forensic engineering include determination of the physical or technical causes of accidents or failures, preparation of
reports, and presentation of testimony or advisory opinions that assist in resolution of related disputes. The forensic engineer may also
be asked to render an opinion regarding responsibility for the accident or failure.[2]
It is also the application of the art and science of engineering in the judiciary, including the investigation of the
physical causes of accidents and other types of claims and litigation, preparation of engineering reports, testimony at hearings and
trials in administrative or judicial proceedings, and interpretation of advisory opinions to assist the resolution of disputes affecting life
or property.
The first skill that an expert must have is competency in his specialized engineering discipline. This competency must be acquired through education and experience, so a professional with large professional experience will be better than an engineer without much experience, even with the same education.
Another very important skill is knowledge of legal procedures and of the vocabulary used in courts, so as not to cause trouble or misunderstanding during the process.
Brazil is a federal republic formed by the indissoluble union of the states, municipalities and the Federal District.
The government is composed of the legislative, executive and judiciary. The country adopts the system of Civil Law, which has its
origin in Roman law and was introduced by the Portuguese colonizers. The system is based on codes and laws enacted by the federal
legislature, as well as by state and local legislatures.[3] [4]
The federal legislative power is exercised by Congress, which is composed of the Chamber of Deputies and the Federal
Senate, through the legislative process. The President and the Ministers of State make up the Executive Branch, and the Supreme
Court, the National Council of Justice, the Superior Court of Justice, the Federal Court, the Labour Court, the Electoral Court, the
Military Court, and state courts make up the Judiciary.
The Federal Supreme Court is the highest court and is entrusted with the responsibility of safeguarding the
Constitution, as well as functioning as a court of review. The Federal Supreme Court also has original jurisdiction to try and decide
direct actions of unconstitutionality of a federal or state law or normative act, or declaratory actions of constitutionality of a federal
law or normative act, which somewhat resemble the issuance of advisory opinions. This situation is not allowed in the Supreme Court of
the United States, for example.
Brazil does not follow the doctrine of stare decisis; only after the amendment of the Federal Constitution in 2004 did the Supreme Court start to adopt, in special situations, binding decisions.
The common law admits precedents from previous cases as sources of law. The principle is known as stare decisis and recommends that, once a court has answered a question, the same question in other cases must elicit the same response from the same court or lower courts in that jurisdiction. The system of civil law, in turn, attaches great importance to codes, laws and the opinions of jurists.
The principle of the judge's free conviction is what guides all Brazilian judicial decisions, and this must be bounded by the law.
In Brazil there are two types of forensic experts: criminal and non-criminal. The first are public employees in most cases, while the others are hired by the parties involved for all other types of judicial cases. Obviously there are exceptions in both cases, but they are not relevant to these observations.
Criminal forensic experts exist in only two spheres of government, the federal and the state, with no such experts at the district or county level. In Brazil there is no specific jurisdiction for the city or district; municipal issues are resolved in the state courts.
Forensic experts hired by the government go through a public examination to evaluate their knowledge. Non-criminal experts, on the other hand, are hired for their expertise, but there is no effective way of measuring the level of such knowledge. Judges pick their experts in non-criminal cases from among professionals inscribed on a prior list in the state and federal courts.
Likewise, there is no specific government agency that regulates forensic science, either to bring regulations to both categories of experts or to control the quality of the science applied to cases by the experts.
Criminal expertise is regulated by the Code of Criminal Procedure [5], and non-criminal expertise is covered by the Code of Civil Procedure. [6]
The Code of Criminal Procedure provides in Article 158 that when the violation leaves any trace, examination of the corpus delicti, direct or indirect, is indispensable, and the confession of the accused cannot replace it. The examination of the corpus delicti and other forensic examinations are conducted by an official expert holding a college degree.
In civil investigations, the judge chooses the expert from among those previously enrolled in that jurisdiction. The parties have five days to submit their questions for the expert. Article 436 of the Code of Civil Procedure provides that the judge is not bound by the expert report and may form his conviction from other elements or facts proven in the case.
Specifically for engineering, professional regulation in Brazil is the responsibility of the professional supervisory board of engineering (CONFEA), which was created by federal law and is charged with caring for and regulating the engineering profession.
Considered a landmark in the history of professional and technical regulation in Brazil, the Brazilian Confederation of Engineering and Agronomy came into being under that name on December 11, 1933, by Federal Decree No. 23,569. In its current design, the Federal Council of Engineering and Agronomy (CONFEA) is governed by Law 5,194 of 1966 and also represents geographers, geologists, meteorologists, technologists in these fields, and industrial and agricultural technicians and their specializations, totaling hundreds of professional titles.
The CONFEA system has in its records about one million professionals, who account for about 70% of Brazil's GDP. It is a demanding job in terms of expertise and knowledge of technology, fueled by intensely technical and scientific findings. It looks after the social and human interests of the whole society and, on that basis, regulates and supervises professional practice in these areas. The Federal Council is the highest level to which a professional can appeal with regard to the regulation of professional practice [7]. The Council also grants permission for the expert activity of the engineering professional, but this is just an administrative issue.
The principles adopted for expert opinions are the state of the art and good practice in each specialty, as well as, where applicable, what is regulated by the Brazilian Association of Technical Standards (ABNT), a private institution that aims to promote the development of technical standards and their use in scientific, technical, industrial, commercial and agricultural fields, among others, keeping them updated, drawing on the best technical expertise and laboratory work, and to encourage and promote the
participation of communities in the technical research, development and dissemination of technical standardization in the country.
ABNT also collaborates with the state in the study and solution of problems related to technical standardization in general, and mediates with the public authorities the interests of civil society in matters of technical standardization. [8]
A recurring drawback of the lack of a mechanism or official body to control expert activity is that registered professionals sometimes do not have adequate technical knowledge to perform certain work, or do not use a methodology consistent with the needs of the case.
In cases where the expert lacks scientific or technical knowledge and, without reasonable cause, fails to comply with the charge given to him, he may be replaced in accordance with Article 424 of the Code of Civil Procedure. Normative Decision 69 of CONFEA also provides for this situation and treats it as an ethical infraction.
The Code of Criminal Procedure also provides, in Articles 343 and 344, punishments ranging from two to four years plus a fine for perjury or false expertise, but these crimes must be intentional. [9]
When talking about professional assignment, it is necessary to distinguish between academic ability, legal authorization and professional qualification, since there is a relationship of dependency between them: while distinct, each arises from the other.
On completing the degree course, one acquires academic ability, but it is not yet possible to practise the profession; that happens only with enrolment in the respective professional Council, which is the legal authorization. Professional qualification is acquired only through constant training and experience.
Professional Assignments and technical knowledge are not necessarily associated in the field of engineering.
Professional assignments are conferred by CONFEA resolutions, being differentiated by each type of professional.
Mere registration with the professional body is not sufficient for exercising the charge of expert, since that charge depends on the expert's technical and scientific knowledge.
This knowledge is built from the knowledge acquired during graduation and in specific courses. The classic example is the newly graduated engineer who receives his properly registered title from the professional body, which enables expert activity, but who lacks expertise and knowledge of the legal aspects inherent to it, because the technical knowledge applied in judicial examinations is not included in undergraduate courses and requires further depth of knowledge; this is a distortion of the Code of Civil Procedure as to the operationalization of expert activity.
The major problem is that unqualified practitioners are rarely punished for their actions: the parties rarely file accusations with the Council, and the judges simply stop requesting the services of experts who do not meet their expectations.
Another issue is that the judge is a layman; he has no knowledge with which to evaluate the scientific quality of the expertise, which also hampers the punishment of bad experts, and there is no legal instrument or procedure that can be used to make this review.
3. THE LEGAL SYSTEM IN THE U.S.
In the United States, the U.S. Constitution establishes a federal system of government and gives specific powers to the federal government; all powers not delegated to the federal government are left to the states. The fifty states have their own constitutions, government structures, legal codes and judicial systems.
The legal system adopted has Anglo-Saxon origins and is based on the study of judicial precedents (common law). The Judicial Branch of the federal government is also established by the Constitution, which specifies its authority. Federal courts have exclusive jurisdiction only in certain types of cases, such as cases involving federal laws, disputes between states and cases involving foreign governments. There are cases in which federal courts share jurisdiction with the states, for example where the parties reside in different states. The state courts have exclusive jurisdiction over the vast majority of cases.
The parties have the right to trial by jury in all criminal cases and in most civil cases. The jury usually consists of twelve citizens who hear the evidence and, applying the law determined by the judge, reach a decision based on the facts that the jury itself determines to be true from the evidence presented during the trial. [10]
As measures to ensure the reliability of the expert's opinion, before the presentation of expert evidence at trial in a U.S. federal court, the expert goes through some essential preliminary steps: the expert is selected and retained by the party, evaluates the starting materials, issues a report, and gives testimony. Admissibility may be evaluated under standards applied by the trial judge. [11]
To present an expert opinion, the proposed expert must initially be qualified and able to meet the admissibility requirements established by the Supreme Court in the 1990s in the Daubert, Joiner and Kumho cases. A more recent decision of the Court, the Kumho case, again confirmed and clarified that judges should act as "gatekeepers" in determining the admissibility of expert evidence, and must be sure that the testimony is relevant and reliable. [12]
The Frye test of "general acceptance" originated over 80 years ago, in a case about the admissibility of a lie detector test, Frye v. United States, which generated controversy over what standard a court should apply in evaluating expert evidence.
In Frye, the defendant had been subjected to a scientific test designed to determine innocence or guilt based on the variation of blood pressure when questioned about facts related to the crime of which he was accused. The methodology and results were challenged on the basis of the novelty of the technique. In stating the rule, the Court argued that it is difficult to know when a scientific principle or discovery crosses the line between the experimental and final stages. The probative force of the principle must be recognized at some point, but until this occurs, the deduction must be sufficiently established to have general acceptance in the particular field to which it belongs.
In light of the new rule, the Frye Court held that the blood pressure test at issue had not yet gained such standing and scientific recognition as to justify admitting the expert testimony at hand. The Frye general acceptance test was applied by federal courts for more than 50 years, exclusively to expert testimony based on new scientific techniques. The Frye test was also adopted and applied by many state courts, some of which still apply it today.
The Frye test was the rule until 1975, when Congress passed the Federal Rules of Evidence (FRE), which seemed to create a new standard for the courts to assess the admissibility of expert evidence. FRE 104 gave the district court the power to determine the qualifications of a witness: preliminary questions concerning the qualification of a person to be a witness, the existence of a privilege, or the admissibility of evidence shall be determined by the court, subject to the provisions on conditional admissions.
FRE 702 seemed to bring new parameters for the courts to assess the admissibility of expert testimony. The rule provides that if scientific, technical or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training or education may testify thereto in the form of an opinion or otherwise. The coexistence of the two rules created great confusion and divergent approaches among the courts in applying them.
The uncertainty was then clarified by the Supreme Court in Daubert v. Merrell Dow Pharmaceuticals. The plaintiffs were trying to introduce expert testimony supporting the claim that birth defects had occurred due to ingestion of the drug Bendectin by mothers. The Court held that FRE 702 superseded Frye and that the general acceptance test was not a precondition for the admissibility of scientific evidence under the Federal Rules of Evidence, treating the court of first instance as a "gatekeeper" that determines the admissibility of expert evidence, ensuring that it rests on a reliable foundation and is relevant to the issue under examination.
Under Daubert, the trial judge must consider two issues: relevance and scientific reliability. To determine relevance, the trial judge must ensure that the expert will help the trier of fact to understand or determine a fact in issue. As to confidence in the scientific basis, the Court listed several factors to consider, among them: whether the methodology can be and has been tested; whether it has been subject to peer review or publication; whether error rates are known; whether standards of control and operation exist; and whether the theory has obtained general acceptance in the relevant scientific community.
The Court noted that this list is not definitive. It acknowledged that peer review or publication is not always a determinative consideration, because it does not always correlate with reliability; some propositions are too particular, too recent or of too limited interest to be published.
The facts of each case must be considered in determining admissibility. Even if weak evidence is found admissible by the trial court, the court and the parties still have other means of reaching the truth.
In General Electric Company v. Joiner, the Supreme Court clarified some of these issues. The plaintiff alleged that workplace exposure to polychlorinated biphenyls (PCBs) and their derivatives had caused his small cell lung cancer; he admitted to being a smoker.
The district court granted summary judgment for the defendant, relying on the fact that there was no causal link between exposure to PCBs and the plaintiff's illness. The district court also found that the testimony of the plaintiff's expert rested on subjective arguments or unsupported speculation.
On appeal, the Eleventh Circuit Court of Appeals reversed the decision, stating that the Federal Rules of Evidence favour admissibility and that reviewing courts should therefore adopt a strict standard of review for the exclusion of expert testimony by the trial judge.
The Supreme Court reversed the decision of the Eleventh Circuit, holding that appellate courts should be more deferential in reviewing decisions to admit or exclude expert testimony. Analyzing the admissibility of the expert evidence in question, the Court found that the studies presented differed from the case presented by the plaintiff and thus did not provide an adequate basis for the allegations. The Joiner Court held that, although Daubert requires courts to focus only on principles and methodology, not on the conclusions generated, courts are not required to admit evidence where there is a major gap between the data presented and the opinion offered.
There remained the problem of non-scientific expert evidence, which was not resolved by Daubert, since it dealt only with scientific expert evidence. The question remained whether trial courts should also act as gatekeepers in these cases.
In Kumho Tire Company v. Carmichael, the Court held that Daubert applies to all types of expert evidence.
The case concerned a motor vehicle accident caused by a tire blowout that killed one person and injured others. The claim was that the tire had a manufacturing fault, based on studies done by failure analysis engineers.
The Court confirmed that, in evaluating engineering evidence, the trial judge may consider the Daubert factors to the extent they are relevant, and that the application of these factors should depend on the nature of the issue, the expert's area of specialization and the subject under discussion. The Daubert factors should be used where they are useful; they are not immutable, the gatekeeping responsibility is to evaluate each individual case, and the trial judge may go beyond the Daubert factors to assess relevance and reliability, ensuring that the techniques used were rigorous.
A national survey of U.S. judges on judging expert evidence in the post-Daubert era makes explicit the belief that guarding against junk science is the intent of the decision. [13]
It was also found that judges have difficulty operationalizing and applying the Daubert criteria, especially as regards falsifiability and error rate, and have some difficulty understanding the scientific meaning of these criteria.
Another point is that the validity and reliability of approaches and procedures for forensic analysis should be tested; in this sense, there is an effort in the practising community to achieve this goal. [14]
Certification programs for individuals and accreditation of educational programs and crime labs are voluntary and are not supervised by the American Academy of Forensic Sciences (AAFS), which has a council to examine existing certification bodies. [15]
Randall K. Noon defines forensic engineering as the application of engineering principles and methodologies to answer questions of fact, which are usually associated with accidents, crimes, catastrophic events, degradation of property and various types of failure. [16]
As in Brazil, forensic engineers in the U.S. are experts who use engineering disciplines to assist in legal matters, and they work in all areas of engineering. At least a bachelor's degree in engineering is necessary; most professionals are licensed as professional engineers, and this license may be required for some kinds of practice. Some forensic engineers also have master's or doctoral degrees. Most full-time experts are in private practice or small private companies; there are also academics who occasionally do consultancy. Many forensic engineers are engaged in the reconstruction of traffic accidents (car, train, airplane, etc.) and may be involved in cases of material, construction or other structural failures and collapses. [17]
The duty of the engineer appears in the following instruments: contracts for engineering services; laws governing engineering licensure; recommendations for good practice and codes of ethics promulgated by professional societies; and case law, which is law based on judicial decisions and precedents.
The case law establishes that engineers have a duty to provide their services in a manner consistent with the standard of care of their profession. The standard jury instruction dealing with the duty of a professional provides that, when performing professional services for a client, a professional has a duty to have the degree of learning and skill ordinarily possessed by reputable professionals practising in the same locality and under similar circumstances.
When performing professional services for a client, the professional must have the degree of learning and skill ordinarily possessed by reputable professionals. It is his duty to use the skill and care ordinarily used in similar cases by other reputable professionals in the same locality and under similar circumstances, to use reasonable diligence and his best judgment in the exercise of his professional skill and the application of his knowledge, and to strive to fulfil the purpose for which he was employed. Failure to comply with any of these duties is negligence.
In this way, there are four main obligations presented by the jury instruction: to have knowledge and skill; to use care and skill; to use reasonable diligence and one's best judgment; and to strive to achieve the purpose for which one was hired.
The levels of learning, skill and care that the professional engineer must possess and use are those possessed and used by respected engineers in similar situations and locations. The requirement of "reasonable diligence" means that the engineer must apply a balanced level of effort to complete his tasks: the effort must involve or result in a serious, careful examination, but without exceeding the bounds of reason. [18]
4. CONCLUSION
In the legal system of the United States, it can be said that the quality of engineering professionals is determined by the "gatekeepers" at the time of the admissibility of evidence, unlike the Brazilian system, where the expert is chosen by the judge.
Despite the failures the system may have, there are standards that, although not decisive, serve to guide the court as to the admissibility of particular evidence, and the system is always subject to improvement by a new decision, because the American system is based on the history of judicial decisions.
The focus in Brazil is on the expert and his technical expertise, in other words on the professional person, while in the U.S. the focus is on the result of the expert's work: whether it is usable, whether it has credibility, or whether "junk science" was used, which demonstrates that the professional is not a good expert.
The biggest problem of the Brazilian system of choosing experts is that the choice of the expert rests solely at the discretion of the judge. The judge does not have standards such as those adopted in the United States for the admissibility of expert evidence, and the technical assistants who could act as "supervisors" have little or no influence on the final results of forensic analysis.
It turns out that experts with little technical knowledge, or who use "junk science", influence the court decision, because the judge believes in his assistant and has no effective parameters with which to assess the quality of the professional's work.
A list of experts available to judges does not exist, the quality of professionals is not evaluated objectively, and the scientific method is not evaluated, which often leads the judge to take decisions based on unreliable information.
The work of technical assistants could be given more weight, so that they act as beacons of expert performance, functioning as critics of the expertise performed, almost like the American cross-examination.
Despite the different judicial systems, with control of the professionals who act as experts and review of the results of their investigations, in other words, checking whether expert reports are scientifically sound and not based on junk science, expert activity can be much improved and produce more useful results for society.
Given the Brazilian legal system, achieving this goal requires changes in the existing legislation, which will only happen with the mobilization of experts and judges.

REFERENCES:
[1] Peter Neufeld, Barry Scheck, "Making forensic science more scientific", Nature, vol. 464, p. 351, Mar 2010.
[2] Carper, Kenneth L., Forensic Engineering, 2nd ed., Boca Raton: CRC Press, 2000.
[3] In: http://www.planalto.gov.br/ccivil_03/Constituicao/Constituicao.htm
[4] In: http://www.loc.gov/law/help/legal-research-guide/brazil-legal.php?loclr=bloglaw#t9
[5] In: http://www.planalto.gov.br/ccivil_03/Decreto-Lei/Del3689.htm
[6] In: http://www.planalto.gov.br/ccivil_03/Leis/L5869.htm
[7] In: http://www.confea.org.br/cgi/cgilua.exe/sys/start.htm?sid=906
[8] In: http://www.abnt.org.br/IMAGENS/Estatuto.pdf
[9] In: http://www.planalto.gov.br/ccivil_03/Decreto-Lei/Del2848.htm
[10] In: http://www.fjc.gov/public/pdf.nsf/lookup/U.S._Legal_System_English07.pdf/$file/U.S._Legal_System_English07.pdf
[11] Jurs, Andrew W., "Balancing Legal Process with Scientific Expertise: Expert Witness Methodology in Five Nations and Suggestions for Reform of Post-Daubert U.S. Reliability Determinations", Marquette Law Review [0025-3987], vol. 95, issue 4, pp. 1329-1415, 2012.
[12] Patrick J. Sullivan, Franklin J. Agardy, Richard K. Traub, Practical Environmental Forensics: Process and Case Histories, John Wiley & Sons, Inc., NY, 2001.
[13] Sophia I. Gatowski, Shirley A. Dobbin, James T. Richardson, Gerald P. Ginsburg, Mara L. Merlino, and Veronica Dahir, "Asking the Gatekeepers: A National Survey of Judges on Judging Expert Evidence in a Post-Daubert World", Law and Human Behavior, vol. 25, no. 5, October 2001.
[14] Geoffrey Stewart Morrison, "Distinguishing between forensic science and forensic pseudoscience: Testing of validity and reliability, and approaches to forensic voice comparison", Science and Justice, vol. 54, issue 3, pp. 245-256, May 2014.
[15] Virginia Gewin, "Forensic Evidence", Nature, vol. 458, p. 663, Apr 2009.
[16] Randall K. Noon, Forensic Engineering Investigation, Boca Raton: CRC Press, 2001, ISBN 0-8493-0911-5.
[17] R. E. Gaensslen, "How do I become a forensic scientist? Educational pathways to forensic science careers", Anal Bioanal Chem (2003) 376: 1151-1155. DOI 10.1007/s00216-003-1834-0.
[18] Joshua B. Kardon, "The elements of care in engineering", in Forensic Engineering: Diagnosing Failures and Solving Problems: Proceedings of the 3rd International Conference on Forensic Engineering, edited by Brian S. Neale, Taylor & Francis Group, London, 2005, ISBN 9780415395236.




High Speed CPL Adder for Digital Biquad Filter Design
Neva Agarwala¹
¹Lecturer, Department of EEE, Southeast University, Dhaka, Bangladesh
E-mail: mnagarwala@seu.ac.bd

Abstract — This project presents a comprehensive explanation of how to minimize the overall delay of a digital biquad filter by comparing the time-delay performance of different adders. An 8-bit CPL adder is used in the design of the biquad filter because of its excellent timing performance. In the end, the design was found to be fully functional, and its delay was lower compared with the others.
Keywords — Biquad Filter, CPL Adder, CMOS adder, ROM, Register, D Flip-Flop, nmos, pmos, XOR.

1. INTRODUCTION
1.1 Review of full adder designs in two different CMOS logic styles

Several variants of static CMOS logic styles have been used to implement low-power 1-bit adder cells [1]. In general, they can be broadly divided into two major categories: complementary CMOS and pass-transistor logic circuits. The complementary CMOS full adder (C-CMOS) of Fig. 2 is based on the regular CMOS structure with pMOS pull-up and nMOS pull-down transistors. The series transistors in the output stage form a weak driver; therefore, additional buffers at the last stage are required to provide the necessary driving power to the cascaded cells. [2]

The complementary pass-transistor logic (CPL) full adder with swing restoration is shown in Fig. 3. The basic difference between the pass-transistor logic and complementary CMOS logic styles is that the source side of the pass-logic transistor network is connected to some input signals instead of to the power lines. The advantage is that one pass-transistor network (either pMOS or nMOS) is sufficient to implement the logic function, which results in a smaller number of transistors and a smaller input load. [3]

1.2 Aims and Objectives

The general objective of this work is to build a faster 8-bit adder and to investigate the area and power-delay performance of 1-bit and 8-bit full adder cells in two different CMOS logic styles. Here, we compare CMOS and CPL 1-bit and 8-bit full adders and use the CPL full adder in the biquad filter because its delay is low compared to the CMOS adder.

1.3 One bit full adder

The one-bit full adder used is a three-input, two-output block. The inputs are the two bits to be summed, A and B, and the carry bit Cin, which derives from the calculations of the previous digits. The outputs are the result of the sum operation, S, and the resulting value of the carry bit, Co. More specifically, the sum and carry output are given by

S = A xor B xor Cin ------------------------------ (1)

Co = AB + (A + B)Cin ------------------------------ (2)

From (2) it is evident that if A = B, the carry output Co is equal to their common value regardless of Cin. If A and B differ (the full adder is said to be in propagate mode), Co = Cin, and hence the computation of Co has to wait for the carry of the previous stage [4].
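To make equations (1) and (2) concrete, the short Python sketch below (ours, purely behavioral; the adders in this paper are transistor-level designs) models a 1-bit full adder and chains eight of them into a ripple-carry structure, which also illustrates why the carry propagation noted above dominates the delay:

def full_adder(a, b, cin):
    """1-bit full adder: S = A xor B xor Cin, Co = AB + (A+B)Cin."""
    s = a ^ b ^ cin
    co = (a & b) | ((a | b) & cin)
    return s, co

def ripple_carry_8bit(a_bits, b_bits, cin=0):
    """Chain eight 1-bit full adders; bit 0 is the LSB."""
    s_bits = []
    carry = cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        s_bits.append(s)
    return s_bits, carry  # sum bits S0..S7 and Cout

# Example: A = 0xFF, B = 0x00, Cin = 1 gives S = 0x00, Cout = 1.
s, cout = ripple_carry_8bit([1] * 8, [0] * 8, cin=1)
print(s, cout)  # [0, 0, 0, 0, 0, 0, 0, 0] 1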


International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August September 2014
ISSN 2091-2730


19
www.ijergs.org










Fig 1: A full adder [4]

2. DESIGN AND SPECIFICATION

The sizing used is based on the inverter size (nMOS = 3:2 and pMOS = 6:2). Below are the details of the 8-bit CPL adder modules:
- 8-bit CPL adder
- Reference sizing from an inverter with size 3:2 for the nMOS transistor and 6:2 for the pMOS transistor
- Built from 1-bit CPL adders
- 1-bit CPL adder sizing: 6:2 for all nMOS transistors and 6:2 for all pMOS transistors


3. RESULTS AND ANALYSIS

We simulated the 1-bit and 8-bit full adders using IRSIM and obtained different delays for the two adder styles. The delay can be calculated directly from the simulation waveforms.

For the 1-bit full adder:

CPL: 0.179 ns (schematic), 0.537 ns (layout)
CMOS: 0.536 ns (schematic), 0.893 ns (layout)

For the 8-bit adder:

CPL: 2.024 ns (schematic), 2.268 ns (layout)

From these results we see that the CPL 1-bit adder is faster than the CMOS adder. For this reason we use this adder in the biquad filter design, to obtain the minimum delay for the whole design and a better performance.
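As a quick arithmetic check on this comparison (a sketch of ours, using the layout delays reported above):

# Relative speed of CPL vs C-CMOS, from the 1-bit layout delays above.
cmos_delay_ns = 0.893  # 1-bit C-CMOS adder, layout
cpl_delay_ns = 0.537   # 1-bit CPL adder, layout

speedup = cmos_delay_ns / cpl_delay_ns
saving = (1 - cpl_delay_ns / cmos_delay_ns) * 100
print(f"CPL is {speedup:.2f}x faster ({saving:.0f}% lower delay)")
# -> CPL is 1.66x faster (40% lower delay)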



International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August September 2014
ISSN 2091-2730


20
www.ijergs.org

A. SCHEMATIC DIAGRAM
- CMOS 1-bit Adder






Fig 2: CMOS 1 bit Adder (Schematic)

- CPL 1-bit Adder






Fig 3: CPL 1 bit Adder (Schematic)
- CPL 8-bit Adder







Fig 4: CPL 8 bit adder (Schematic)



B. LAYOUT VIEW

- CMOS 1-bit Adder







Fig 5: CMOS 1 bit Adder (Layout)

- CPL 1-bit Adder






Fig 6: CPL 1 bit Adder (Layout)
- CPL 8-bit Adder







Fig 7: CPL 8 bit adder (Layout)



C. TIMING SIMULATION
1) CMOS 1-bit Adder
a. Schematic
d = 0.536ns





Fig 8: Simulation of CMOS 1 bit Adder (Schematic)
b. Layout
d = 0.893ns






Fig 9: Simulation of CMOS 1 bit Adder (Layout)
2) CPL 1-bit Adder
a. Schematic
d = 0.179ns






Fig 10: Simulation of CPL 1 bit Adder (Schematic)



b. Layout
d = 0.537ns







Fig 11: Simulation of CPL 1 bit Adder (Layout)
3) CPL 8-bit Adder
a. Schematic
d = 2.024ns






Fig 12: Simulation of CPL 8 bit adder (Schematic)
b. Layout
d = 2.268ns





Fig 13: Simulation of CPL 8 bit adder (Layout)



4. OUTPUT
TABLE 1: 8-BIT FULL ADDER (inputs A0-A7, B0-B7, Cin; outputs S0-S7, Cout; bit 0 listed first in each group)

A0-A7     B0-B7     Cin   S0-S7     Cout
00000000  00000000  0     00000000  0
00000000  11111111  0     11111111  0
00000000  11111111  1     00000000  1
11111111  00000000  0     11111111  0
11111111  00000000  1     00000000  1
01010101  01010101  0     00001010  1
01010101  01010101  1     10101010  1
10101010  10101010  0     01010101  0
10101010  10101010  1     11101010  0
11111111  11111111  1     11111111  1
5. CONCLUSION

For full adder cell design, pass-transistor logic is thought to dissipate minimal power and occupy a smaller area because it uses a smaller number of transistors; thus, the CPL adder is expected to perform better than the C-CMOS adder. The SPICE netlists generated for all modules were compared, and the schematic and layout versions were found to match; the same holds for the timing simulations run using the built-in IRSIM. The delays found for the layouts are greater than for the schematics, but still within the acceptable range. The observed delays are tabulated below:


TABLE 2: TIME DELAY

Module             Delay (schematic)   Delay (layout)
1-bit CMOS adder   0.536 ns            0.893 ns
1-bit CPL adder    0.179 ns            0.537 ns
8-bit CPL adder    2.024 ns            2.268 ns

ACKNOWLEDGEMENT

I would like to thank Dr. P.Y.K Cheung for his enormous support while doing this work.

REFERENCES
[1] S. Wairya, R. K. Nagaria, S. Tiwari, "New Design Methodologies for High-Speed Mixed-Mode CMOS Full Adder Circuits," International Journal of VLSI Design & Communication Systems, Vol. 2, No. 2, pp. 78-98, June 2011.
[2] S. Wairya, R. K. Nagaria, S. Tiwari, S. Pandey, "Ultra Low Voltage High Speed 1-Bit CMOS Adder," IEEE Conference on Power, Control and Embedded Systems (ICPCES), pp. 1-6, December 2010.
[3] R. Zimmermann, W. Fichtner, "Low-Power Logic Styles: CMOS Versus Pass-Transistor Logic," IEEE Journal of Solid-State Circuits, Vol. 32, No. 7, pp. 1-12, July 1997.
[4] Fordham University, "The Binary Adder," Fordham College Lincoln Center, Spring 2011.















WiTricity: A Wireless Energy Solution Available at Anytime and Anywhere
Shahbaz Ali Khidri, Aamir Ali Malik, Shakir Hussain Memon
Department of Electrical Engineering, Sukkur Institute of Business Administration, Sukkur, Sindh, Pakistan
shahbazkhidri@outlook.com, aamir.malik@iba-suk.edu.pk, shakir.hussain@iba-suk.edu.pk
Abstract: Electrical power is vital to everyone; it is a clean and efficient energy source that is easy to transmit over long distances and easy to control. Generally, electrical power is transmitted from one place to another with the help of wires, which introduce losses; a significant amount of power is wasted in this way, and the efficiency of the power system is significantly affected. In order to overcome these problems, a low-cost, reliable, efficient, secure, and environmentally friendly wireless energy solution is presented in this paper. The concept of transferring power wirelessly in 3D space was first realized by Nikola Tesla, who proposed transmitting power without wires over large distances using the earth's ionosphere. In this paper, a magnetic resonance method, which is non-radiative in nature, is introduced for wireless power transmission, and electrical power is transmitted wirelessly over a distance of 10 feet with an overall efficiency of 80%. The method introduced here is environmentally friendly and has negligible interaction with exterior forces/objects.
Keywords: Electrical power, energy source, long distance power transmission, wireless power transmission, magnetic resonance, non-radiative, power system efficiency.
I. INTRODUCTION
An interesting aspect of energy in electrical form is that it is neither available directly from nature nor required to be consumed in that form [1]. Still, it is a popular form of energy, since it can be used cleanly in any home, workplace, or factory [2]. Generally, electrical power is transmitted from one place to another with the help of conventional copper cables and current-carrying wires, which introduce significant losses; much power is wasted in this way. As a result, the efficiency of the power system is reduced and its overall performance is degraded. The efficiency of the conventional power transmission system can be improved by using better-quality materials, but this significantly increases cost. As the world has become a global village because of technological advancements, people do not like to interact all the time with the conventional wired power system to charge their electrical/electronic devices and for other purposes, because it is complicated, time consuming, and dangerous: there is always a chance of electric shock. The conventional wired power system is shown in figure 1.

Figure 1 Conventional Wired Power System
In order to get rid of all these problems and hurdles, an alternative solution must be created which is efficient, reliable, safe, cost-effective, and environmentally friendly. Nikola Tesla was the first person to propose transmitting electrical power over large distances using the earth's ionosphere, without the help of conventional copper cables and current-carrying wires [3].
Nikola Tesla designed a magnifying transmitter to implement wireless energy transmission by means of the disturbed charge of ground and air method [4]. The magnifying transmitter is shown in figure 2.

Figure 2 Magnifying Transmitter
In this paper, a low-cost, reliable, efficient, secure, and environmentally friendly wireless energy solution is presented, based on a magnetic resonance method which is non-radiative in nature. Electrical power is transmitted wirelessly over a distance of 10 feet, and an overall efficiency of 80% is achieved with this technique.
This paper is organized into five sections. Section II reviews the existing techniques and methods for wireless power transmission. Section III describes the methods and techniques we have used for wireless power transmission, and our contribution. Section IV presents the results obtained from the research work. Section V concludes the paper with important suggestions and factual findings.
II. LITERATURE REVIEW
Several techniques and methods are available for wireless power transmission. The common methods are as follows:
1. Wireless Power Transmission using Magnetic Induction
This method for wireless power transmission is non-radiative in nature and works on the principle of mutual induction, which states that when two coils are inductively coupled and electrically isolated, a uniformly changing current in one coil induces an electromotive force in the other coil [5]. In this way, energy can be transmitted from one place to another without using conventional wires. However, there are a few limitations, and this method is not a proper method for wireless power transmission because of several factors, including shorter range (a few mm if any), lower overall efficiency, and tight coupling [6]. Care must be taken in positioning the coils for proper operation. Many industries use this method in their products; magnetic induction is widely used in electric toothbrushes, wireless cell phone chargers, and pacemakers [7, 8]. The efficiency and the operating range of this method can be improved to a considerable level by enhancing the resonance.
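The mutual-induction principle above can be illustrated in a few lines of code. This is a minimal sketch of ours; the mutual inductance and current slope below are assumed illustrative values, not measurements from this work:

# A current changing at dI/dt in the primary coil induces
# emf = -M * dI/dt in the secondary coil.
M = 1.5e-6        # mutual inductance between the coils, henries (assumed)
dI_dt = 2.0e3     # rate of change of primary current, A/s (assumed)

emf = -M * dI_dt  # induced electromotive force, volts
print(f"Induced EMF: {emf * 1e3:.2f} mV")  # -> Induced EMF: -3.00 mV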
2. Wireless Power Transmission using Electromagnetic Radiations
This method for wireless power transmission is radiative in nature and is not widely used for power transfer, because the transmitted power is dissipated in all directions and an insufficient amount of power reaches the receiver. It is, however, widely used for transmitting information wirelessly over large distances.
3. Wireless Power Transmission using Optical Techniques
This method for wireless power transmission uses lasers to transmit energy from one place to another. The energy to be transferred is in the form of light, which is converted into electrical form at the receiver end. Because this method uses directional electromagnetic waves, the energy can be transmitted over large distances [9]. It is not suitable when the receiver is mobile, because a proper line of sight is needed: for proper operation, no object should be placed between transmitter and receiver. Complicated tracking techniques can be used when the receiver moves, but they increase the cost of the power system to a significant level.
4. Wireless Power Transmission using Microwaves
This method for wireless power transmission uses microwave frequencies to transmit energy from one place to another. The energy can be transmitted over large distances using radiative antennas [10]. The efficiency of this power system is higher at greater distances compared to other wireless power transmission systems, but the method is not environmentally friendly and is unsafe and complicated, because microwave frequencies at higher power levels can potentially harm people. Proper care must be taken when using this method at higher power levels. Energy in the tens of kilowatts has been transmitted wirelessly using this method [11]. In 1964, a model of a microwave-powered helicopter was presented by Brown [12]. In 1997, this method was utilized for wireless power transmission on Reunion Island [13].
5. Wireless Power Transmission using Electrodynamic Induction
This method for wireless power transmission is non-radiative in nature and is environmentally friendly. Two resonant objects can exchange energy when they possess the same frequency [14]. Higher efficiency can be achieved when the transmitting range is medium. This is a popular method for wireless power transmission because no alignment of transmitter and receiver is needed, so it offers high placement freedom. In 2007, researchers from the Massachusetts Institute of Technology (MIT) utilized this method and powered a 60 W light bulb wirelessly at a distance of 7 feet with an overall efficiency of 40% [15]. In 2008, Intel used the same method and powered a 60 W light bulb wirelessly at a shorter distance with an overall efficiency of 75% [16]. In 2008, the same method was used by Lucas Jorgensen and Adam Culberson of Cornell College, who performed a successful wireless power transmission experiment at a shorter distance [17]. In 2011, Mandip Jung Sibakoti and Joey Hambleton of Cornell College performed the same experiment and powered a 40 W light bulb wirelessly at a shorter distance [18].
III. IMPLEMENTATION OF WIRELESS POWER TRANSMISSION USING MAGNETIC RESONANCE
The block diagram of wireless power transmission using magnetic resonance is shown in figure 3.

Figure 3 Block Diagram of WPT using Magnetic Resonance
Magnetic resonance is a low-cost, reliable, efficient, secure, and environmentally friendly method for wireless power transmission. Energy in electrical form can be transmitted from one place to another over medium ranges with the help of a magnetic field when the frequencies of the source resonator and the device resonator are equal. This method is non-radiative in nature and has negligible interaction with exterior forces/objects. The different steps involved in magnetic resonance based wireless power transmission are shown in figure 3.
In the first step, an alternating current (AC), usually 240 V, is supplied to the power system. In the second step, the alternating current is converted into direct current (DC) using rectifiers; this step is skipped when a DC supply is provided to the power system. For high power applications, a power factor corrector may also be needed. In the third step, the direct current obtained from the rectifier is converted into a radio frequency (RF) voltage waveform, because the source resonator operates on an RF voltage waveform; this conversion is done using a high speed, highly efficient operational amplifier with a very high frequency response. In the fourth step, an impedance matching network (IMN) is used for efficient coupling of the amplifier output and the source resonator. In the fifth step, the magnetic field is generated by the source resonator. In the sixth step, the generated magnetic field excites the device resonator and an energy build-up process takes place; here, the energy is transferred without wires, by means of the magnetic field. In the seventh step, an impedance matching network is used again for efficient coupling of the device resonator and the load. In the eighth step, the RF voltage waveform is converted into direct current using rectifiers, because the load operates on a DC supply. In the ninth and final step, the load is powered with the DC supply. Thus, energy is efficiently transmitted wirelessly from the source to the load with the help of magnetic resonance.
In this research work, a successful experiment of wireless power transmission over a distance of 10 feet was performed with an overall efficiency of 80%. It is observed that a large amount of energy can be stored in resonators because of their oscillating nature, so even with a weak excitation force a useful amount of stored energy can be obtained. The efficiency of a resonator depends on its quality factor, often represented by Q, which in turn depends on the resonant frequency and the internal losses of the resonator: a resonator with lower losses has a higher quality factor and a higher efficiency. A simple electromagnetic resonator is shown in figure 4.
Figure 4 Electromagnetic Resonator
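The quality factor relationship above can be made concrete for a series RLC resonator of the kind shown in figure 4. The sketch below, with assumed component values (not the ones used in this work), computes the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) and the quality factor Q = (1/R)*sqrt(L/C), showing directly that lower loss resistance gives higher Q:

import math

# Assumed series-RLC resonator values, for illustration only.
L = 25e-6   # inductance, henries
C = 10e-12  # capacitance, farads
R = 0.5     # loss resistance, ohms

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency
Q = (1 / R) * math.sqrt(L / C)             # quality factor: lower R -> higher Q

print(f"f0 = {f0 / 1e6:.2f} MHz, Q = {Q:.0f}")  # -> f0 = 10.07 MHz, Q = 3162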
For this research work, resonators with high quality factors are used in order to obtain a better and desirable efficiency. It is possible for resonators to exchange energy with the help of a magnetic field when they are placed close to each other. Two coupled resonators exchanging energy are shown in figure 5.
Figure 5 Coupled Resonators
The coils used for wireless power transmission in this research work have a radius of approximately 74 cm and are designed to have a resonant frequency range of 8.5 MHz to 12.5 MHz. For frequency matching, a tunable high frequency oscillator with a tunable range of 5.5 MHz to 14.5 MHz is designed using operational amplifiers. Along with the tunable high frequency oscillator, a power amplifier is used to ensure that a reasonable amount of power is transferred to the load at the receiver side.
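As a sketch of the frequency-matching step, one can compute the tuning capacitance a coil of inductance L needs in order to resonate at a target frequency in the band quoted above, from f0 = 1/(2*pi*sqrt(L*C)). The coil inductance used here is an assumed value, not one reported in this work:

import math

def tuning_capacitance(L, f0):
    """Capacitance needed for an LC tank with inductance L to resonate at f0."""
    return 1 / ((2 * math.pi * f0) ** 2 * L)

L_coil = 20e-6  # assumed coil inductance, henries
for f0 in (8.5e6, 10.0e6, 12.5e6):  # target frequencies within the quoted band
    C = tuning_capacitance(L_coil, f0)
    print(f"f0 = {f0 / 1e6:.1f} MHz -> C = {C * 1e12:.1f} pF")
# e.g. at 10.0 MHz the assumed coil needs about 12.7 pF.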
The creative visualization of wireless power transmission using magnetic resonance is shown in figure 6, figure 7, and figure 8.

Figure 6 Powering the Source Resonator

Figure 7 Energy Build Up Process in the Device Resonator due to the Source Resonator

Figure 8 Powering the Load
The creative visualization shows that the energy is transmitted from the source to the load in three steps. In the first step, as shown in figure 6, the source resonator is powered from an alternating current (AC) supply. In the second step, as shown in figure 7, the energy build-up process takes place through the magnetic field when the source resonator and the device resonator, having the same frequency, are coupled. In the third step, as shown in figure 8, the load is powered with the direct current (DC) supply transmitted wirelessly from the source.
IV. RESULTS AND DISCUSSIONS
In this research work, we were able to power a 40 W light bulb wirelessly over a distance of 10 feet with an overall efficiency of 80%. A significant change in the efficiency of the wireless power transmission system was observed as the distance between the source resonator and the device resonator was varied: the intensity of the light decreased as the distance increased. However, the overall efficiency of the designed wireless power transmission system remained desirable. The results obtained from the designed system in the parallel and the perpendicular configurations are shown as charts in figure 9 and figure 10, respectively.

Figure 9 Power vs. Distance Chart for Parallel Configuration

Figure 10 Power vs. Distance Chart for Perpendicular Configuration
Different values of power in watts with respect to distance in feet are shown in figure 9 and figure 10 for the parallel and the perpendicular configurations, respectively. These power values show that the intensity of the light decreases as the distance increases; nevertheless, a sufficient amount of power is obtained wirelessly over a distance of 10 feet with an overall efficiency of 80%. A change in the resonant frequency was also observed as the distance was increased gradually, due to the imperfect match in the resonant frequencies of the coils, so the frequency was readjusted at each measurement interval to obtain maximum power and better efficiency. Overall, the results obtained from the research work were desirable.
The chart in figure 11 shows the relationship between the efficiency of the designed wireless power system and the distance for the parallel configuration, and the chart in figure 12 shows the same relationship for the perpendicular configuration.

Figure 11 Efficiency vs. Distance Chart for Parallel Configuration

Figure 12 Efficiency vs. Distance Chart for Perpendicular Configuration
The efficiency decays with increasing distance, as shown in the charts in figure 11 and figure 12. This shows that the performance of the designed system is best when the source resonator and the device resonator are close to each other, and degrades as the distance between them increases, for the parallel as well as the perpendicular configuration. This wireless power transmission system is therefore suited to medium transmitting ranges, and better efficiency is achieved at moderate distances.
V. CONCLUSIONS AND FUTURE RECOMMENDATIONS
A successful experiment of wireless power transmission over a distance of 10 feet with an overall efficiency of 80% was carried out in this research work. The designed wireless power transmission system is low-cost, reliable, efficient, secure, and environmentally friendly, and has negligible interaction with exterior forces/objects. It can be used in various areas of application. In consumer electronics, it can wirelessly power home and industrial appliances including televisions, cell phones, room lighting, laptops, propeller displays, clocks, etc. In medical sciences, it can power heart assist pumps, pacemakers, infusion pumps, etc. It can also be used to efficiently charge electric vehicles, and in military use it can power robots, vehicles, and other necessary equipment of a soldier.
In future, significant research can be carried out in the area of wireless power. Reduced-size wireless power transmission systems with better efficiency over large distances can be developed, and efficient systems can be designed to transmit thousands of kilowatts of power over hundreds of miles with maximum efficiency and performance.
ACKNOWLEDGMENT
We would like to express our appreciation to our beloved parents for their unconditional love and support, which saw us through the toughest days of our lives.
REFERENCES:
[1] Bakshi U. A. and Bakshi V. U., Electrical Machines I, Pune, Technical Publications Pune, v1, n1, ISBN: 978-8-18-
431535-6, 2009.
[2] Chapman Stephen J., Electric Machinery Fundamentals, New York, McGraw-Hill Companies, Inc., v1, n5, ISBN: 978-0-07-352954-7, 2012.
[3] Tesla Nikola, The transmission of electrical energy without wires, Electrical World and Engineer, March 1905.
[4] Corum K. L. and Corum J. F., Nikola Tesla and the Diameter of the Earth: A Discussion of One of the Many Modes of
Operation of the Wardenclyffe Tower, 1996.
[5] Syed A. Nasar, Schaum's Outline of Theory and Problems of Electric Machines and Electromechanics, New York, McGraw-Hill Companies, Inc., v1, n2, ISBN: 0-07-045994-0, 1998.
[6] Dave Baarman and Joshua Schwannecke, Understanding Wireless Power, December 2009, Web,
http://ecoupled.com/pdf/eCoupled_Understanding_Wireless_Power.pdf, last visited on July 25, 2014.
[7] The Economist, Wireless charging, Adapter dies, November 2008, Web,
http://www.economist.com/science/tq/displayStory.cfm?story_id=13174387, last visited on July 25, 2014.
[8] Fernandez J. M. and Borras J. A., Contactless battery charger with wireless control link, U. S. Patent: 6,184,651, February
2001.
[9] Sahai A. and Graham D., Optical wireless power transmission at long wavelengths, IEEE ICSOS 2011, IEEE International Conference on Space Optical Systems and Applications, ISBN: 978-1-4244-9686-0, June 02, 2011.
[10] Landis G. A., Applications for Space Power by Laser Transmission, SPIEOEOLC 1994, Conference on SPIE Optics,
Electro-optics, and Laser, v2121, p252-55, January 1994.
[11] Space Island Group, Space Solar Energy Initiative, Web, http://www.spaceislandgroup.com/solarspace.html, last visited on
July 27, 2014.
[12] Brown W. C., Mims J. R., and Heenan N. I., An Experimental Microwave-Powered Helicopter, 1965 IEEE International Convention Record, v13, n5, p225-35, 1965.
[13] Lan Sun Luk J. D., Celeste A., Romanacce, Chane Kuang Sang L., and Gatina J. C., Point-to-Point Wireless Power Transportation in Reunion Island, IAC 1997, 48th International Astronautical Congress, October 1997.
[14] Karalis A., Joannopoulos J. D., and Soljacic M., Efficient Wireless Non-Radiative Mid-range Energy Transfer, Annals of
Physics, 323, 2008, p34-48, April 27, 2007.
[15] EetIndia.co.in, MIT lights 60W light-bulb by wireless power transmission, Web,
http://www.eetindia.co.in/ART_8800467843_1800005_NT_4ba623b8.HTM, last visited on July 27, 2014.
[16] TG Daily, Intel imagines wireless power for your laptop, August 2008, http://www.tgdaily.com/content/view/39008/113/,
last visited on July 27, 2014.
[17] Lucas Jorgensen and Adam Culberson, Wireless Power Transmission Using Magnetic Resonance, 2008, Web,
http://www.cornellcollege.edu/physics/courses/phy312/Student-Projects/Magnetic-Resonance/Magnetic-Resonance.html, last
visited on July 27, 2014.
[18] Mandip Jung Sibakoti and Joey Hambleton, Wireless Power Transmission Using Magnetic Resonance, December 2011,
Web, www.cornellcollege.edu/physics/files/mandip-sibakoti.pdf, last visited on July 27, 2014

Removal of Phenol from Effluent in Fixed Bed: A Review
Sunil J. Kulkarni
Chemical Engineering Department, Datta Meghe College of Engineering, Airoli, Navi Mumbai, Maharashtra, India
E-mail: Suniljayantkulkarni@gmail.com

Abstract: Phenol removal from wastewater is a very widely studied area of research. The practical approach to phenol removal by adsorption involves the study of batch adsorption and, more importantly, of fixed bed operation. In the present study, various aspects of fixed bed adsorption of phenol are discussed and the research carried out on this topic is reviewed. Phenol removal in fixed beds has been carried out using adsorbents, biosorbents, and aerobic and anaerobic biological mechanisms. In most investigations, fixed bed adsorption was found to be satisfactory in terms of removal efficiency and time. The nature of the breakthrough curve was justified using various models, and the experimental data were in agreement with the model results. In most cases, the equilibrium capacity increased with increasing influent concentration and bed height, and decreased with increasing flow rate.
Keywords: Adsorption, saturation time, isotherms, kinetics, flow rate, concentration, removal efficiency.
I. INTRODUCTION
Industrial effluent is a major source of pollution discharged to rivers, land and other receiving bodies, and one of the major pollutants of great environmental concern is phenol. Wastewater from industries such as paper and pulp, resin manufacturing, tanning, textiles, plastics, rubber, pharmaceuticals and petroleum contains different types of phenols. Phenolic compounds are harmful to organisms even at low concentrations, and many have been classified as hazardous pollutants because of their potential harm to human health. The various methods used for phenol removal from wastewater include abiotic and non-biological processes such as adsorption, photodecomposition, volatilization, coupling to soil humus and thermal degradation. Removal of phenol by adsorption is a very effective treatment method, and the use of a fixed bed for phenol removal offers many advantages such as flexibility, adaptability and high removal efficiency. In the present study, the work done in this field is summarized. The studies reviewed include isotherm, kinetic and breakthrough curve studies; batch data were used for the isotherm and kinetic studies, and attempts have also been made to use batch data to predict fixed bed parameters. Various models were used by various researchers to justify the nature of the breakthrough curve.
II. PHENOL REMOVAL IN FIXED BED
Studies of the removal of aqueous phenol on activated carbon prepared from sugarcane bagasse in a fixed bed adsorption column were carried out by Karunarathne and Amarasinghe [1]. They prepared the adsorbent from fine bagasse pith collected from a leading local sugar manufacturing factory in Sri Lanka. The bagasse was washed and dried in an oven at 70 °C for 24 h. Activated carbon (AC) was prepared by heating a bagasse sample at 600 °C for 1 hour in a muffle furnace in the absence of air, and AC particles between 1-2 mm in size were used for all experiments. They conducted column experiments using a glass tube of 3 cm diameter and 55 cm height, varying the weight of activated carbon at an initial solution concentration of 20 mg/l. Many parameters are involved in evaluating the performance of a fixed bed column, such as initial solution concentration, flow rate, amount of adsorbent used, and particle size of the adsorbent. The results show that increasing the adsorbent dose in the column enhances the adsorption capacity of the bed; furthermore, the percentage of the length of the unused bed relative to the original bed height decreases as the amount of adsorbent increases.

Anisuzzaman et al. investigated phenol adsorption in an activated carbon packed bed column with emphasis on dynamic simulation [2]. Their main aim was the dynamic simulation of phenol adsorption in a packed bed column filled with activated carbon derived from date stones. Parameters such as column length, inlet liquid flow rate, initial phenol concentration of the feed liquid, and characteristics of the activated carbon were investigated through dynamic simulation using Aspen Adsorption V7.1. Based on the simulation, however, they concluded that the adsorption column is not feasible for a conventional water treatment plant.

A review of the removal of phenol from wastewater in packed bed and fluidized bed columns was done by Girish and Murty [3]. Their study provided a bird's-eye view of the packed and fluidized bed columns used for the treatment of wastewater containing phenol, as well as the different operational conditions and their performance. They concluded that to enhance the performance of reactors for phenol adsorption, novel efforts in reactor design are indispensable.

Gayatri and Ahmaruzzaman studied adsorption techniques for the removal of phenolic compounds from wastewater using low-cost natural adsorbents [4]. Though activated carbon is an effective adsorbent, its widespread use is restricted by its high cost and substantial loss during regeneration. Their study indicated that adsorption in a fixed bed is an efficient method for phenol removal, and the data obtained during the investigation were in agreement with the available models relating the breakthrough time and the breakthrough curve.

Ekpete et al. used fluted pumpkin and commercial activated carbon for fixed bed adsorption of chlorophenol [5]. They compared the chlorophenol removal efficiency of fluted pumpkin stem waste to that of a commercial activated carbon. The fixed bed experiments studied flow rate (2-4 ml/min), initial concentration (100-200 mg/l), and bed height (3-9 cm). Column bed capacity and exhaustion time increased with increasing bed height, while increasing the flow rate decreased the bed capacity. They observed that the column performed best at the lowest flow rate of 2 ml/min, and that increasing the flow rate decreased the breakthrough time, exhaustion time, and uptake capacity of chlorophenol because of insufficient residence time of the chlorophenol in the column.

Li et al. developed a mathematical model for a multicomponent competitive adsorption process to describe the mass transfer kinetics in a fixed-bed adsorber packed with activated carbon fibers (ACFs) [6]. They analyzed the effects of competitive adsorption equilibrium constants, axial dispersion, external mass transfer, and intraparticle diffusion resistances on the breakthrough curves for weakly-adsorbed and strongly-adsorbed components. The analysis showed that the effects of intrafiber and external mass transfer resistances on the breakthrough curves can be neglected for a fixed-bed adsorber packed with ACFs; axial dispersion was confirmed to be the main parameter controlling the adsorption kinetics.
El-Ashtoukhy et al. investigated the removal of phenolic compounds from petroleum wastewater by electrocoagulation using a fixed bed electrochemical reactor [7]. The removal of phenolic compounds was studied in batch mode in terms of various parameters, namely pH, operating time, current density, initial phenol concentration, addition of NaCl, temperature, and the effect of phenol structure (effect of functional groups). Their study revealed that the optimum conditions for the removal of phenolic compounds were a current density of 8.59 mA/cm2, pH = 7, an NaCl concentration of 1 g/L, and a temperature of 25 °C. Electrocoagulation of phenolic compounds using Al Raschig rings connected together in the form of a fixed bed sacrificial anode appears from this research to be a very efficient method: 100% removal of phenol after 2 hrs was achieved for a 3 mg/l phenol concentration in real refinery wastewater at the optimum conditions.

Sorour et al. studied the application of an adsorption packed-bed reactor model for phenol removal [8]. They conducted experiments to determine the Langmuir equilibrium coefficients (and Xm) and the bulk sorbate solution concentration versus different adsorption column depths and times. The model equations, which combine particle kinetics and transport kinetics, were used to predict the relations between sorbate concentration and flow rate as variables with column depth at any time. Granular activated carbon [AquaSorb 2000] and filtration anthracite [ANSI/AWWA B100-96] were used as sorbents and phenol as sorbate, testing over a range of phenol concentrations (100-300 mg/l). The results of the model were in good agreement with the experimental data.

An investigation of the removal of phenol and lead from synthetic wastewater by adsorption onto granular activated carbon in fixed bed adsorbers was carried out by Sulaymon et al. [9]. They used fixed bed adsorbers for the removal of phenol and lead (II) onto granular activated carbon (GAC) in single and binary systems. A general rate multi-component model, which considers both external and internal mass transfer resistances as well as axial dispersion with a non-linear multi-component isotherm, was used to predict the fixed bed breakthrough curves for the dual-component system. The results showed that the general rate model satisfactorily describes the dynamic behavior of the GAC adsorber column.

Research on fixed bed column studies for the sorption of para-nitrophenol from aqueous solutions using a cross-linked starch based polymer was conducted by Sangeeta et al. [10]. The column experiments on cross-linked starch showed that the adsorption efficiency increased with increasing influent concentration and bed height, and decreased with increasing flow rate. The experimental data were well fitted by the Yoon-Nelson model. It was concluded that the adsorbent prepared by cross-linking starch with HMDI was effective for the removal of para-nitrophenol (pNP) from wastewater. A maximum equilibrium capacity of 42.64 mg/g for pNP was observed at 100 mg/L influent concentration, 7.5 cm bed height, and 4 ml/min flow rate; the equilibrium capacity increased with increasing influent concentration and bed height, and decreased with increasing flow rate.

Bakhshi et al. used an upflow anaerobic packed bed reactor (UAPB) for phenol removal [11]. The operating conditions were a hydraulic retention time (HRT) of 24 h under mesophilic (30 ± 1 °C) conditions, with the operation split into four phases. The phenol concentrations in phases 1, 2, 3 and 4 were 100, 400, 700 and 1000 mg/l, respectively. The reactor reached steady state on the 8th day with a phenol removal efficiency of 96.8% and a biogas production rate of 1.42 l/d in phase 1. The increase of the initial phenol concentration in phase 2 resulted in a slight decrease in phenol removal efficiency, and phases 3 and 4 of startup followed the same trend: in phases 3 and 4, the steady state phenol removal efficiencies were 98.4 and 98%, respectively. A sudden decrease in biogas production was observed with each stepwise increase of the phenol concentration.

Dynamic studies of nitrophenol sorption on perfil in a fixed-bed column were carried out by Yaneva et al. [12]. They investigated the adsorption of two substituted nitrophenols, namely 4-nitrophenol (4-NP) and 2,4-dinitrophenol (2,4-DNP), from aqueous solutions onto perfil in a fixed bed. They applied the theoretical solid diffusion control (SDC) model, which describes single solute adsorption in a fixed bed based on the Linear Driving Force (LDF) kinetic model, to the investigated systems, using the Biot number as an indicator of intraparticle diffusion. The Biot number was found to decrease with increasing bed depth, indicating that the film resistance increased or the intraparticle diffusion resistance decreased.

Coated sand (CS) filter media was used by Al-Obaidy to remove phenol and 4-nitrophenol from aqueous solutions in batch experiments [13]. The influence of process variables, represented by solution pH, contact time, initial concentration, and adsorbent dosage, on the removal efficiency of phenol and 4-nitrophenol was studied.

The adsorption of phenol from aqueous solution onto natural zeolite was studied in a fixed bed column by Ebrahim [14]. Experiments were carried out to study the effect of influent concentration, flow rate, bed depth, and temperature on the performance of the fixed bed. The study indicated a good match between experimental and predicted data in the batch experiments using the surface diffusion method. It was also observed that the Homogeneous Surface Diffusion Model (HSDM), which includes film mass transfer and surface diffusion resistance, provides a good description of the adsorption process. With increasing concentration the breakthrough curve became steeper, because of the increased driving force.

An investigation of the adsorption of phenol, p-chlorophenol, and mercuric ions from aqueous solution onto activated carbon in fixed bed columns was done by McKay and Bino [15]. Parameters such as bed depth, solution flow rate, and pollutant concentration were observed to affect the breakthrough curve and breakthrough time. The bed depth service time approach was used to analyze the data, and the experimental data agreed with the model.

On the modeling side, the available models to describe and predict fixed-bed or column adsorption are still insufficient, although mathematical models proposed to describe batch adsorption in terms of isotherm and kinetic behavior can be used in the study of fixed beds. The review by Xu et al. indicates that the general rate models (and general rate type models) and the linear driving force (LDF) model generally fit the experimental data well in most cases, but they are relatively time-consuming [16]. The review also found that the Clark model is suitable to describe column adsorption obeying the Freundlich isotherm, though it does not show conspicuously better accuracy than the above models.

Research on the biological degradation of chlorophenols in packed-bed bioreactors using mixed bacterial consortia was carried out by Zilouei and Guieysse [17]. For the continuous treatment of a mixture of 2-chlorophenol (2CP), 4-chlorophenol (4CP), 2,4-dichlorophenol (DCP) and 2,4,6-trichlorophenol (TCP), two packed-bed bioreactors filled with foamed glass bead carriers were tested at 14 °C and 23 °C. The results presented in their study represented some of the highest chlorophenol volumetric removal rates reported, even in comparison to rates achieved in well homogenized systems such as fluidized bed and air-lift reactors. Maximum removal of up to 99 percent was achieved, with an outlet concentration of less than 0.1 mg/l.

Nakhli et al. investigated the biological removal of phenol from saline wastewater using a moving bed biofilm reactor containing acclimated mixed consortia [18]. The performance of the reactor was observed to depend on parameters such as inlet phenol concentration (200-1200 mg/L), hydraulic retention time (8-24 h), inlet salt content (10-70 g/L), phenol shock loading, hydraulic shock loading, and salt shock loading. The aerobic moving bed biofilm reactor (MBBR) was able to remove up to 99% of the phenol. They concluded that an MBBR system with a high concentration of active mixed biomass can play a prominent role in treating saline wastewaters containing phenol very efficiently in industrial applications.
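As a concrete illustration of the breakthrough-curve modeling mentioned in several of the studies above, the sketch below evaluates the Yoon-Nelson model, which expresses the effluent-to-influent concentration ratio as C/C0 = 1 / (1 + exp(kYN*(tau - t))), where kYN is a rate constant and tau is the time required for 50% breakthrough. The parameter values here are illustrative assumptions of ours, not fitted values from any cited study:

import math

def yoon_nelson(t, k_yn, tau):
    """Yoon-Nelson breakthrough curve: C/C0 = 1 / (1 + exp(k_yn * (tau - t)))."""
    return 1.0 / (1.0 + math.exp(k_yn * (tau - t)))

# Illustrative parameters (assumed):
k_yn = 0.05  # rate constant, 1/min
tau = 300.0  # time for 50% breakthrough, min

for t in (100, 200, 300, 400, 500):  # elapsed time, min
    print(f"t = {t:3d} min -> C/C0 = {yoon_nelson(t, k_yn, tau):.3f}")
# C/C0 rises from near 0 toward 1, passing 0.5 exactly at t = tau.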
III. CONCLUSION
Phenol removal by fixed bed operation is a very promising method of treatment; removal percentages of the order of 99 to 100 percent have been reported. The nature of the breakthrough curve is affected by parameters such as initial concentration, bed depth, and flow rate: the equilibrium adsorption capacity increases with initial concentration and bed depth, and decreases with flow rate. The nature of the breakthrough curve was justified in most cases using the available models. There is still scope for developing models for the fixed bed, as the models available in a few studies could not completely explain the fixed bed adsorption phenomenon in terms of breakthrough time, saturation time, and retention time.

REFERENCES:
1. H.D.S.S. Karunarathne, B.M.W.P.K. Amarasinghe, "Fixed Bed Adsorption Column Studies for the Removal of Aqueous Phenol from Activated Carbon Prepared from Sugarcane Bagasse," Energy Procedia, Vol. 34, pp. 83-90, 2013.
2. S.M. Anisuzzaman, Awang Bono, Duduku Krishnaiah, Yit Zen Tan, "A study on dynamic simulation of phenol adsorption in activated carbon packed bed column," Journal of King Saud University - Engineering Sciences, Vol. 30, 2014.
3. Girish C.R. and Ramachandra Murty V., "Removal of Phenol from Wastewater in Packed Bed and Fluidized Bed Columns: A Review," International Research Journal of Environment Sciences, Vol. 2, No. 10, pp. 96-100, 2013.
4. S. Laxmi Gayatri, Md. Ahmaruzzaman, "Adsorption technique for the removal of phenolic compounds from wastewater using low-cost natural adsorbents," Assam University Journal of Science & Technology: Physical Sciences and Technology, Vol. 5, No. 2, pp. 156-166, 2010.
5. Ekpete, O.A., M. Horsfall Jnr and A.I. Spiff, "Fixed Bed Adsorption of Chlorophenol onto Fluted Pumpkin and Commercial Activated Carbon," Australian Journal of Basic and Applied Sciences, Vol. 5, No. 11, pp. 1149-1155, 2011.
6. Ping Li, Guohua Xiu, Lei Jiang, "Competitive Adsorption of Phenolic Compounds onto Activated Carbon Fibers in Fixed Bed," Journal of Environmental Engineering, Vol. 127, No. 8, pp. 730-734, 2001.
7. E-S.Z. El-Ashtoukhy, Y.A. El-Taweel, O. Abdelwahab, E.M. Nassef, "Treatment of Petrochemical Wastewater Containing Phenolic Compounds by Electrocoagulation Using a Fixed Bed Electrochemical Reactor," Int. J. Electrochem. Sci., Vol. 8, pp. 1534-1550, 2013.
8. M. T. Sorour, F. Abdelrasoul and W.A. Ibrahim, "Application of Adsorption Packed-Bed Reactor Model for Phenol Removal," Tenth International Water Technology Conference, IWTC10, Alexandria, Egypt, pp. 131-144, 2006.
9. Sulaymon, Abbas Hamid; Abbood, Dheyaa Wajid; Ali, Ahmed Hassoon, "Removal of phenol and lead from synthetic wastewater by adsorption onto granular activated carbon in fixed bed adsorbers: prediction of breakthrough curves," Desalination & Water Treatment, Vol. 40, No. 1-3, p. 244, 2012.
10. Garg Sangeeta, Kohli Deepak and Jana A. K., "Fixed Bed Column Studies for the Sorption of Para-Nitrophenol from Aqueous Solutions Using Cross-Linked Starch Based Polymer," Journal of Environmental Research and Development, Vol. 7, No. 2A, pp. 843-850, 2012.
11. Zeinab Bakhshi, Ghasem Najafpour, Bahram Navayi Neya, Esmaeel Kariminezhad, Roya Pishgar, Nafise Moosavi, "Recovery of Upflow Anaerobic Packed Bed Reactor from High Organic Load During Startup for Phenolic Wastewater Treatment," Chemical Industry & Chemical Engineering Quarterly, Vol. 17, No. 4, pp. 517-524, 2011.
12. Zvezdelina Yaneva, Mirko Marinkovski, Liljana Markovska, Vera Meshko, Bogdana Koumanova, "Dynamic studies of nitrophenols sorption on perfil in a fixed-bed column," Vol. 27, No. 2, pp. 123-132, 2008.
13. Asrar Al-Obaidy, "Removal of Phenol Compounds from Aqueous Solution Using Coated Sand Filter Media," Iraqi Journal of Chemical and Petroleum Engineering, Vol. 14, No. 3, pp. 23-31, 2013.
14. Shahlaa E. Ebrahim, "Modeling the Removal of Phenol by Natural Zeolite in Batch and Continuous Adsorption Systems," Journal of Babylon University / Engineering Sciences, Vol. 21, No. 1, 2013.
15. McKay, Gordon and Bino, M.J., "Fixed Bed Adsorption for the Removal of Pollutants from Water," Environ. Pollut., Vol. 66, pp. 33-53, 1990.
16. Zhe Xu, Jian-guo Cai, Bing-cai Pan, "Mathematically modeling fixed-bed adsorption in aqueous systems," Journal of Zhejiang University - SCIENCE A (Applied Physics & Engineering), Vol. 14, No. 3, pp. 155-176, 2013.
17. Hamid Zilouei, Benoit Guieysse, Bo Mattiasson, "Biological degradation of chlorophenols in packed-bed bioreactors using mixed bacterial consortia," Process Biochemistry, Vol. 41, pp. 1083-1089, 2006.
18. Seyyed Ali Akbar Nakhli, Kimia Ahmadizadeh, Mahmood Fereshtehnejad, Mohammad Hossein Rostami, Mojtaba Safari and Seyyed Mehdi Borghei, "Biological Removal of Phenol from Saline Wastewater Using a Moving Bed Biofilm Reactor Containing Acclimated Mixed Consortia," Journal of Environmental Engineering, Vol. 127, No. 8, pp. 730-734, 2001.





Derating Analysis for Reliability of Components
K. Bindu Madhavi (1), BH. Sowmya (2), B. Sandeep (2), M. Lavanya (2)
(1) Associate Professor, Department of ECE, HITAM, Hyderabad
(2) Research Scholar, Department of ECE, HITAM, Hyderabad

ABSTRACT: Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present day electronic systems. The increased vulnerability of components to various electrical, thermal, mechanical, chemical and electromagnetic stresses poses a big threat to attaining the reliability required for various mission critical applications. Derating can be defined as the practice of limiting the electrical, thermal and mechanical stresses on devices to levels below their specified or proven capabilities in order to enhance reliability. If a system is expected to be reliable, one of the major contributing factors must be a conservative design approach incorporating part derating. Realizing the need for derating of electronic and electromechanical parts, many manufacturers have established internal guidelines for derating practices.
In this project, a notch filter circuit used in an aerospace application is selected. Circuit simulation is carried out using E-CAD tools, and a derating analysis is then performed following the methodology given in MIL-STD-975M, providing design margin against this standard as well.
The key to the success of any product lies in its producibility, quality and reliability. A lot of effort is needed to develop a new product, make a prototype and prove its performance. Still more effort is required if it is to be produced in large quantities with a minimum number of rejections. A minimum number of rejections, or an increase in first-time yield, saves production costs, testing time and resources, and hence helps to reduce the cost of the item. The product delivered to the customer is also required to perform satisfactorily, without failure, under its expected life cycle operational stresses, and to continue this performance over its expected operational lifetime, or whenever it is required to operate: a property called reliability. Reliable product performance increases customer satisfaction and builds the manufacturer's brand name.
The increased vulnerability of components to various electrical, thermal, mechanical, chemical and electromagnetic stresses poses a big threat to attaining the reliability required for various mission critical applications. Derating is the practice of operating at a lower stress condition than a part's rating.


INTRODUCTION:
Derating is the reduction of electrical, thermal, and mechanical stresses applied to a part in
order to decrease the degradation rate and prolong the expected life of the part. Derating increases the margin of safety between the
operating stress level and the actual failure level for the part, providing added protection from system anomalies unforeseen by the
designer.

DERATING CRITERIA
The derating criteria contained herein indicate the maximum recommended stress values and do not preclude further derating. When derating, the designer must first take into account the specified environmental and operating condition rating factors, consider the actual environmental and operating conditions of the application, and then apply the recommended derating criteria. Parts not appearing in these guidelines lack empirical data and failure history. The derating instructions are listed for each commodity in the following paragraphs.
To assure that these derating criteria are observed, an EEE parts list (item by item) shall be generated for each hardware assembly. This list shall, as a minimum, contain the maximum rated capability (such as voltage, current, power, temperature, etc.) of each part in comparison with the design requirements of the application, indicating conformance to the derating criteria specified herein.
In the following derating sections, the term ambient temperature, as applied to low pressure or space vacuum operation, is defined as follows: for operation under conditions of very low atmospheric pressure or space vacuum, heat loss by convection is essentially zero, so ambient temperature is the maximum temperature of the heat sink or other mounting surface in contact with the part, or the temperature of the surface of the part itself (case temperature).

DERATING LEVELS
The range of derating is generally defined as the region between the minimum derating point and the point of over-derating. The optimum derating, therefore, should occur at or below the point of stress (i.e., voltage, temperature) where a rapid increase in failure rate occurs for a small increase in stress.


PART QUALITY LEVELS
Derating cannot be used to compensate for using parts of a lower quality than necessary to meet usage reliability requirements. The quality level of a part has a direct effect on its predicted failure rate.
Derating criteria are given for hybrid devices such as integrated circuits, transistors, capacitors and resistors; these devices may use thick or thin films as interconnections and resistive elements. The primary failure modes are failures of active components (integrated circuits or transistor chips) and interconnection faults.
Derating criteria also exist for other complex integrated circuits (LSI, VHSIC, VLSI, microprocessors); for memory devices (bipolar and MOS, divided into RAM (random access memories) and ROM (read only memories)); and for microwave devices such as GaAs FETs, detectors and mixers, varactor diodes, step recovery diodes, PIN diodes, tunnel diodes, IMPATT diodes, Gunn diodes, and transistors. The derating procedure is even carried out for Surface Acoustic Wave (SAW) devices such as delay lines, oscillators, resonators, and filters.
In this project we derate the hybrid devices, namely resistors, capacitors and operational amplifiers, using the E-CAD tool MULTISIM, a SPICE-based circuit simulator, with the design carried out as per MIL-STD-975M.

RESISTOR DERATING CRITERIA
The derated power level of a resistor is obtained by multiplying the resistor's nominal power rating by the appropriate power ratio found on the (y) axis of the derating chart below. This ratio is a function of the resistor's maximum ambient temperature (x axis). The voltage applied to resistors must also be controlled: the maximum applied voltage should not exceed 80% of the specification maximum voltage rating, or sqrt(P*R), whichever is less, where:
P = derated power (watts)
R = resistance of that portion of the element actually active in the circuit.

This voltage derating applies to DC and regular-waveform AC applications.







Fig: Resistor derating chart

Power and ambient temperature are the principal stress parameters. By controlling these parameters as per
MIL-STD-975M and the datasheet rules, the resistors present in the design have been derated.

There are many military specifications that deal with different types of resistors; the applicable resistor MIL specifications are listed in the standard.

CAPACITOR DERATING CRITERIA

Voltage derating is accomplished by multiplying the maximum operating voltage by the appropriate derating factor. The
principal stress parameters for capacitors are temperature and DC and/or AC voltage.


OP - AMPS DERATING CRITERIA
The principal stress parameters for linear microcircuits are the supply voltage, input voltage, output current, total device
power, and junction temperature.
Even though a component is rated to a particular maximum temperature, derating will ensure a worst-case design, so that
some unpredictable event, operating condition, or design uncertainty does not cause a component to overheat. However, even without
derating, an integrated circuit is normally specified below its maximum temperature because of part-to-part variations.


So there is always some headroom, but the concern is reliability and good design practice. Derating is a sound design
practice because it lowers the junction temperature of the device, increasing component life and reliability.

The design phase is the first stage at which producibility and reliability factors can be addressed; things can be improved later, but only at
higher cost. One important step during the design stage is simulation with an E-CAD tool such as MULTISIM, a circuit
simulator powered by SPICE, the industry-standard circuit simulation engine originally developed at Berkeley.

Fig: Overview of the simulation tool

Many designers perform simulation and basic nominal functional performance analysis of an electronic circuit during the design stage.
This involves applying appropriate inputs to the circuit, simulating, and examining the outputs for the expected/designed behavior, with all
component parameters set to their nominal values. This approach proves circuit behavior at nominal component values. A NOTCH
FILTER circuit used in an aerospace application is taken up for analysis. The schematic simulated for the analysis
procedure is a notch filter: a band-stop filter with a narrow stop band, i.e., a band-rejection filter of high Q-factor.
The performance parameter for the schematic is the notch frequency, for which a tolerance is specified. For
this derating analysis we estimate the minimum and maximum currents, voltages, temperatures, and the other
parameters considered in the component specifications for resistors, capacitors, operational amplifiers, etc. The first
simulation is run at nominal values for all components in the schematic. Finally, optimum
component tolerances, which give a low rejection rate during production, are obtained.
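
As a minimal sketch of the nominal notch-frequency calculation (the twin-T topology and the component values below are our assumptions for illustration; the paper does not give the actual schematic values), a notch near 90 Hz could be checked as:

import math

# Assumed twin-T notch filter: f0 = 1 / (2*pi*R*C). R and C are hypothetical,
# chosen only to land near the 90 Hz notch discussed in the text.
R = 17.7e3   # ohms (assumed)
C = 100e-9   # farads (assumed)

f_notch = 1.0 / (2.0 * math.pi * R * C)
print(f"Nominal notch frequency: {f_notch:.1f} Hz")  # ~89.9 Hz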

DESIGN ANALYSIS OF THE NOTCH FILTER

Fig : Notch filter schematic design
In the above schematic, FD and FB are analog inputs with a dynamic range of 10 Volts, and OUT is an analog output with a dynamic
range of 10 Volts. V1 and V2 are voltage sources used to simulate the circuit. The circuit exhibits a notch frequency at 90 Hz with
respect to the input on FD, with FB grounded.

NOMINAL FUNCTIONAL ANALYSIS:
A nominal functional simulation is run using an EDA tool with all component values set to nominal. The V2 voltage source is set
to 0 V to ground the FB input, and the V1 voltage source is set to a 0.1 V rms sine wave to perform a frequency response analysis with respect to the FD
input. The expected nominal value of the notch frequency is approximately 90 Hz. The frequency response on OUT for the nominal simulation is shown below
in the figure; it gives a value of 91.20 Hz as the notch frequency, which is as per expectation.

Fig: Nominal functional analysis of notch frequency
The simulation process consists of AC analysis, transient analysis, and DC analysis. The simulation is
carried out with and without load, and at different temperatures, to check the design margin and longevity of the
components.

Conclusion:
The outputs that are observed are shown below:
AC ANALYSIS:
Frequency response is observed from the magnitude and phase plots.

Fig: Magnitude and phase plots of the frequency response


TRANSIENT RESPONSE:




Many more steps are required to make a reliable product; the product should have a reliability program as per the US military standards.

REFERENCES:
[1] Dillard, R.B., Reliability for the Engineer, Book Two, Martin Marietta Corporation, 1973.
[2] "Electronic Reliability Design Handbook," MIL-HDBK-338-1A, October 1988.
[3] Klion, Jerome, A Redundancy Notebook, Rome Air Development Center, RADC-TR-77-287, December 1987.
[4] Lalli, Vincent R. and Speck, Carlton E., "Traveling-Wave Tube Reliability Estimates, Life Tests, and Space Flight Experience," NASA TM X-73541, January 1977.
[5] "Reliability Modeling and Prediction," MIL-STD-756B, November 1981.
[6] Reliability of components, MIL-STD-975M.
[7] National Instruments tutorials and general information: http://search.ni.com/nisearch/app/main/p/bot/no/ap/global/lang/en/pg/1/ps/10/q/multisim %20tutorial/
[8] Transitioning from PSPICE to NI Multisim: A Tutorial. http://zone.ni.com/devzone/cda/tut/p/id/5964
[9] Adding components to your database: http://zone.ni.com/devzone/cda/tut/p/id/5607#toc1
[10] Ultiboard PCB layout system: http://digital.ni.com/manuals.nsf/websearch/D97873AF18C4EA84862571F5006D0EF3
[11] Reliability design resources: http://www.opsalacarte.com/Pages/reliability/reliability_des_comp.html















A Survey on Feature Selection Techniques
Jesna Jose¹

¹P.G. Scholar, Department of Computer Science and Engg, Sree Buddha College of Engg, Alappuzha
E-mail: jesnaakshaya@gmail.com
Abstract: Feature selection is a term commonly used in data mining to describe the tools and techniques available for reducing
inputs to a manageable size for processing and analysis. Feature selection implies not only cardinality reduction, which means
imposing an arbitrary or predefined cutoff on the number of attributes that can be considered when building a model, but also the
choice of attributes, meaning that either the analyst or the modeling tool actively selects or discards attributes based on their
usefulness for analysis. Feature selection is an effective technique for dimension reduction and an essential step in successful data
mining applications. It is a research area of great practical significance and has been developed and evolved to answer the challenges
due to data of increasingly high dimensionality. The objective of feature selection is threefold: improving the prediction performance
of the predictors, providing faster and more cost-effective prediction, and providing a better understanding of the underlying process
that generates the data. This paper surveys various feature selection techniques and their advantages and disadvantages.
Keywords: Feature selection, Graph based clustering, Redundancy, Relevance, Minimum spanning tree, Symmetric uncertainty, Correlation
INTRODUCTION
Data mining is a form of knowledge discovery essential for solving problems in a specific domain. As the world grows in
complexity, overwhelming us with the data it generates, data mining becomes the only hope for elucidating the patterns that underlie it
[1]. The manual process of data analysis becomes tedious as the size of data grows and the number of dimensions increases, so the process
of data analysis needs to be computerized. Feature selection plays an important role in the data mining process. It is essential for
dealing with an excessive number of features, which can become a computational burden on the learning algorithms as well as on
feature extraction techniques. It is also necessary even when computational resources are not scarce, since it improves the accuracy
of machine learning tasks. This paper presents a survey of various existing feature selection techniques.
SURVEY
1. Efficient Feature Selection via Analysis of Relevance and Redundancy
This paper [4] proposes a new framework of feature selection that avoids implicitly handling feature redundancy and turns to efficient
elimination of redundant features via explicitly handling feature redundancy. Relevance definitions divide features into strongly
relevant, weakly relevant, and irrelevant ones; the redundancy definition further divides weakly relevant features into redundant and
non-redundant ones. The goal of this paper is to efficiently find the optimal subset. This goal is achieved through a new framework of
feature selection (Figure 1) composed of two steps: first, relevance analysis determines the subset of relevant features by removing
irrelevant ones; second, redundancy analysis determines and eliminates redundant features from the relevant ones and thus produces
the final subset. Its advantage over the traditional framework of subset evaluation is that, by decoupling relevance and redundancy
analysis, it circumvents subset search and allows an efficient and effective way of finding a subset that approximates an optimal
subset. The disadvantage of this technique is that it does not process image data.

Figure 1: A new framework of feature selection


2. Graph based clustering

The general methodology of graph-based clustering includes the following five parts [2]:
(1) Hypothesis. The hypothesis is that a graph can be partitioned into densely connected subgraphs that are
sparsely connected to each other.
(2) Modeling. This deals with the problem of transforming data into a graph, or modeling the real application as a graph, by
specially designating the meaning of each vertex, edge, and edge weight.
(3) Measure. A quality measure is an objective function that rates the quality of a clustering. The quality measure
identifies clusters that satisfy the desirable properties.
(4) Algorithm. An algorithm exactly or approximately optimizes the quality measure; it can be either top-down
or bottom-up.
(5) Evaluation. Various metrics can be used to evaluate the performance of clustering by comparing with a ground-truth
clustering.

Graph-based Clustering Methodology
We start with the basic clustering problem. Let X = {x_1, …, x_n} be a set of data points and S = (s_ij), i, j = 1, …, n, be the
similarity matrix in which each element indicates the similarity s_ij ≥ 0 between two data points x_i and x_j. A nice way to
represent the data is to construct a graph on which each vertex represents a data point and the edge weight carries the
similarity of two vertices. The clustering problem in the graph perspective is then formulated as partitioning the graph into
subgraphs such that the edges in the same subgraph have high weights and the edges between different subgraphs have low
weights.
A graph is a triple G = (V, E, W), where V = {v_1, …, v_n} is a set of vertices, E ⊆ V × V is a set of edges, and W = (w_ij), i, j = 1, …, n, is the adjacency matrix in which each element indicates a
non-negative weight (w_ij ≥ 0) between two vertices v_i and v_j. The hypothesis behind graph-based clustering can be
stated as follows [2]. First, the graph consists of dense subgraphs, such that a dense subgraph contains more well-connected
internal edges connecting the vertices in the subgraph than cutting edges connecting vertices across subgraphs.
Second, a random walk that visits a subgraph will likely stay in the subgraph until many of its vertices have been visited
(Dongen, 2000). Third, among all shortest paths between all pairs of vertices, links between different dense
subgraphs are likely to lie on many shortest paths (Dongen, 2000).
Considering the modeling step, Luxburg (2006) stated the three most common methods to construct a graph: the
ε-neighborhood graph, the k-nearest neighbor graph, and the fully connected graph. Regarding measuring the quality of a cluster, it is worth
noting that the quality measure should not be confused with the vertex similarity measure, which is used to compute edge weights. The
main difference is that a cluster quality measure directly identifies a clustering that fulfills a desirable property, while an evaluation
measure rates the quality of a clustering by comparing it with a ground-truth clustering.
Graph based clustering algorithms can be divided into two major classes: divisive and agglomerative. In the divisive
clustering class, we categorize algorithms into several subclasses like cut-based, spectral clustering, multilevel, random walks,
shortest path. Divisive clustering follows top-down style and recursively splits a graph into various subgraphs. The agglomerative
clustering works bottom-up and iteratively merges singleton sets of vertices into subgraphs. The divisive and agglomerative
algorithms are also called hierarchical since they produce multi-level clusterings, i.e., one clustering follows the other by refining
(divisive) or coarsening (agglomerative). Most graph clustering algorithms ever proposed are divisive.
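
As a minimal sketch of the modeling step described above (the function name, parameters, and the heat-kernel weighting are our own choices for illustration, not from the surveyed papers), a k-nearest-neighbor similarity graph can be built as:

import numpy as np

def knn_graph(X, k=5, t=1.0):
    """Build a symmetric k-nearest-neighbor similarity graph.

    X : (n, d) array of data points. Edge weights use the heat kernel
    w_ij = exp(-||x_i - x_j||^2 / t); non-neighbors get weight 0.
    """
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]     # skip self at index 0
        W[i, nbrs] = np.exp(-sq[i, nbrs] / t)
    return np.maximum(W, W.T)                 # symmetrize (edge if i~j or j~i)

# Example: adjacency matrix for 10 random 2-D points.
W = knn_graph(np.random.rand(10, 2), k=3)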

3. Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution

Symmetric uncertainty (SU) measures how much one feature is related to another. The correlation-based filter
approach makes use of this symmetric uncertainty measure. It involves two aspects: (1) how to decide whether a feature is
relevant to the class or not; and (2) how to decide whether such a relevant feature is redundant or not when considering it with
other relevant features. The solution to the first question can be a user-defined threshold on the SU value, as used by
many other feature weighting algorithms (e.g., Relief). The answer to the second question is more complicated because it may
involve analysis of pairwise correlations between all features (named F-correlation), which results in a time complexity of O(N²)
in the number of features N for most existing algorithms. To solve this problem, the FCBF (Fast Correlation-Based Filter) algorithm
is proposed [3]. This algorithm involves two steps: first, select the relevant features and
arrange them in descending order according to their correlation value; second, remove redundant features and keep only the
predominant ones.
For predominant feature selection, the following procedure is used (a sketch follows below):
a) Take the first element Fp as the predominant feature.
b) Then take the next element Fq; if Fp happens to be a redundant peer of Fq, remove Fq.
c) After one round of filtering based on Fp, take the remaining feature next to Fp as the new reference and repeat.
d) The algorithm stops when there are no more features to be removed.
The disadvantage of this algorithm is that it does not work well with high-dimensional data.
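
A minimal sketch of the SU-based relevance screening described above (the threshold and toy data are our own; a full FCBF implementation would add the redundancy-removal pass of steps a-d):

import numpy as np

def entropy(x):
    # Shannon entropy (bits) of a discrete variable.
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetric_uncertainty(x, y):
    # SU(X, Y) = 2 * IG(X|Y) / (H(X) + H(Y)), bounded in [0, 1].
    hx, hy = entropy(x), entropy(y)
    xy = np.stack([x, y], axis=1)
    _, counts = np.unique(xy, axis=0, return_counts=True)
    p = counts / counts.sum()
    hxy = float(-(p * np.log2(p)).sum())
    ig = hx + hy - hxy                        # information gain IG(X|Y)
    return 2.0 * ig / (hx + hy) if (hx + hy) > 0 else 0.0

# FCBF step 1: keep features whose SU with the class exceeds a threshold,
# sorted in descending SU order (threshold and data are illustrative only).
X = np.random.randint(0, 3, size=(100, 5))
y = (X[:, 0] + np.random.randint(0, 2, size=100)) % 3
relevant = sorted((j for j in range(X.shape[1])
                   if symmetric_uncertainty(X[:, j], y) > 0.1),
                  key=lambda j: -symmetric_uncertainty(X[:, j], y))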


CONCLUSION
Feature selection is a term commonly used in data mining to describe the tools and techniques available for reducing inputs to a
manageable size for processing and analysis. Feature selection implies not only cardinality reduction, which means imposing an
arbitrary or predefined cutoff on the number of attributes that can be considered when building a model, but also the choice of
attributes, meaning that either the analyst or the modeling tool actively selects or discards attributes based on their usefulness for
analysis. Feature selection techniques have a wide variety of applications in data mining, digital image processing, etc. Various feature
selection techniques, with their advantages and disadvantages, are described in this paper.

REFERENCES:
[1] I.H. Witten, E. Frank and M.A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, Burlington, 2011.

[2] Zheng Chen, Heng Ji, "Graph-based Clustering for Computational Linguistics: A Survey," Proceedings of the 2010 Workshop on Graph-based Methods for Natural Language Processing, ACL 2010, pages 1-9, Uppsala, Sweden, 16 July 2010.

[3] Lei Yu, Huan Liu, "Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution," Department of Computer Science & Engineering, Arizona State University, Tempe, AZ 85287-5406, USA.

[4] Lei Yu, Huan Liu, "Efficient Feature Selection via Analysis of Relevance and Redundancy," Journal of Machine Learning Research 5 (2004) 1205-1224.


Face Recognition using Laplace Beltrami Operator by Optimal Linear
Approximations
Tapasya Sinsinwar¹, P.K. Dwivedi²

¹Research Scholar (M.Tech, IT), Institute of Engineering and Technology
²Professor and Director Academics, Institute of Engineering and Technology, Alwar, Rajasthan Technical University, Kota (Raj.)

Abstract: We propose an appearance-based face recognition technique called the Laplacian face method. With Locality Preserving
Projections (LPP), the face images are mapped into a face subspace for analysis. Unlike Principal Component Analysis (PCA)
and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP discovers a
subspace that preserves local information and best detects the essential face manifold structure. The
Laplacian faces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In
this way, the undesirable variations resulting from changes in lighting, facial expression, and pose may be removed or reduced.
Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacian
face approach with the Eigen face and Fisher face methods on three different face data sets. Experimental results suggest that the
proposed Laplacian face method provides a better representation and attains lower error rates in face recognition.

Keywords: Face recognition, principal component analysis, linear discriminant analysis, locality preserving projections, face
manifold, subspace learning.


1 Introduction
Many face recognition methods have been developed over the past few years. One of the most popular and well-studied approaches
to face recognition is the appearance-based method [28], [16]. In appearance-based methods, an
image of size n × m pixels is usually represented by a vector in an (n·m)-dimensional space. In practice, these (n·m)-dimensional spaces are too large to permit
robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction methods
[7], [9], [6], [10], [16], [15], [21], [28], [29], [37]. Two of the most popular methods for this purpose are Principal Component
Analysis (PCA) [26] and Linear Discriminant Analysis (LDA) [3]. PCA is an eigenvector technique intended to model linear
variation in high-dimensional data. PCA achieves dimensionality reduction by projecting the original n-dimensional data onto the k
(k << n)-dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix. Its aim is to discover a set
of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients
are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to find the dimensionality of the manifold and produces a
compact representation. Turk and Pentland [29] use Principal Component Analysis to describe face images in terms of a set of basis
functions, or Eigen faces.
LDA is a supervised learning algorithm. LDA searches for the projection axes on which the data points of different classes are far
from each other while data points of the same class are close to each other. Unlike PCA, which encodes
data in an orthogonal linear space, LDA encodes discriminating information in a linearly separable space using bases that are not
necessarily orthogonal. It is generally believed that algorithms based on LDA are superior to those based on PCA. However, some
recent work [14] demonstrates that, when the training data set is small, PCA can outperform LDA, and also that PCA is less sensitive
to different training data sets. Recently, many research efforts have shown that face images possibly reside on a nonlinear
submanifold [7], [10], [18], [19], [21], [23], [27]. However, both PCA and LDA effectively see only the Euclidean structure; they
fail to discover the underlying structure if the face images lie on a nonlinear submanifold hidden in the image space.
In this paper, we propose a new method for face analysis (representation and recognition) which explicitly considers the manifold
structure. To be specific, the manifold structure is modelled by a nearest-neighbour graph which preserves the local structure of the
image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped
to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacian faces. The face subspace
preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes. We also offer
theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph structure
that depends on the data points. LPP discovers a projection that respects this graph structure. In our theoretical analysis, we
show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.
It is worthwhile to highlight some aspects of the proposed approach here:


1. While the Eigen faces technique aims to preserve the global structure of the image space, and the Fisher faces technique aims
to preserve the discriminating information, our Laplacian faces method aims to preserve the local structure of the image space. In
various real-world classification problems, the local manifold structure is more important than the global Euclidean structure,
especially when nearest-neighbour-like classifiers are used for classification. LPP seems to have discriminating power even though it
is unsupervised.

2. An effective subspace learning algorithm for face recognition should be able to find the nonlinear manifold structure of the face
space. Our proposed Laplacian faces technique explicitly reflects the manifold structure, which is modelled by an adjacency graph.
Furthermore, the Laplacian faces are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace
Beltrami operator on the face manifold. They reflect the intrinsic face manifold structure.

3. LPP shares several properties with LLE [18], such as a locality preserving character, though their objective functions are
completely different. LPP is achieved by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami
operator on the manifold. LPP is linear, whereas LLE is nonlinear. Furthermore, LPP is defined everywhere, while LLE is defined
only on the training data points and it is unclear how to evaluate the maps for new test points. In contrast, LPP may be simply applied
to any new data point to locate it in the reduced representation space.

2 PCA and LDA
One approach to deal with the difficulty of the extreme dimensionality of the image space is to reduce the dimensionality by combining
features. Linear combinations are particularly attractive because they are simple to compute and analytically tractable. In effect, linear
methods project the high-dimensional data onto a lower dimensional subspace.
Consider the problem of representing all of the vectors in a set of n d-dimensional samples x_1, x_2, …, x_n, with zero mean, by a
single vector y = {y_1, y_2, …, y_n} such that y_i represents x_i. Precisely, we find a linear mapping from the
d-dimensional space to a line. Without loss of generality, we denote the transformation vector by w; that is, w^T x_i = y_i.
In reality, the magnitude of w is of no real significance because it just scales y_i. In face recognition, each vector x_i denotes a face image.

Different objective functions will yield different algorithms with different properties. PCA aims to extract a subspace in which the
variance is maximized. Its objective function is as follows:

max_w Σ_{i=1}^{n} (y_i − ȳ)²,  (1)

with

ȳ = (1/n) Σ_{i=1}^{n} y_i.  (2)

The output set of principal vectors w_1, w_2, …, w_k is an orthonormal set of vectors representing the eigenvectors of the sample
covariance matrix associated with the k < d largest eigenvalues.
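
A minimal sketch of this eigenvector computation (all names, and the choice of rows of X as samples, are our own for illustration):

import numpy as np

def pca(X, k):
    """Project the zero-meaned rows of X onto the top-k eigenvectors of the
    sample covariance matrix, cf. Eq. (1)."""
    Xc = X - X.mean(axis=0)           # zero mean, as assumed in the text
    C = np.cov(Xc, rowvar=False)      # d x d sample covariance matrix
    vals, vecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    W = vecs[:, -k:][:, ::-1]         # top-k principal directions, descending
    return Xc @ W, W                  # projections y_i = W^T x_i, and the basis

Y, W = pca(np.random.rand(100, 20), k=5)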


3 Learning a locality preserving subspace
PCA and LDA aim to preserve the global structure. However, in many real-world applications, the local structure is more important. In
this section, we describe Locality Preserving Projection (LPP) [9], a new algorithm for learning a locality preserving subspace. The
comprehensive derivation and theoretical explanations of LPP can be traced in [9]. LPP seeks to preserve the intrinsic geometry of the
data and the local structure. The objective function of LPP is as follows:

min Σ_{ij} (y_i − y_j)² w_ij,

where y_i is the one-dimensional representation of x_i and the matrix S = (w_ij) is a similarity matrix. A possible way of defining S is as follows:

w_ij = exp(−‖x_i − x_j‖² / t) if x_i and x_j are close,
w_ij = 0 otherwise.


The objective function with our choice of symmetric weights w_ij (w_ij = w_ji) incurs a heavy penalty if neighbouring points x_i and x_j are
mapped far apart, i.e., if (y_i − y_j)² is large. Therefore, minimizing it is an attempt to ensure that, if x_i and x_j are close, then y_i and y_j are
close as well. Following some simple algebraic steps, we see that:

Σ_{ij} (y_i − y_j)² w_ij
= Σ_{ij} (w^T x_i − w^T x_j)² w_ij
= w^T X D X^T w − w^T X W X^T w
= w^T X (D − W) X^T w
= w^T X L X^T w,

where X = [x_1, x_2, …, x_n] and D is a diagonal matrix whose
entries are the column (or row, since S is symmetric) sums of W, D_ii = Σ_j W_ji. L = D − W is the Laplacian matrix [6].


4 Locality Preserving Projections
4.1. The linear dimensionality reduction problem
The basic problem of linear dimensionality reduction is the following: given a set x_1, x_2, …, x_m in R^n, find a transformation matrix A that
maps these m points to a set of points y_1, y_2, …, y_m in R^l (l << n), such that y_i represents x_i, where y_i = A^T x_i.
Our method is of particular applicability in the special case where x_1, x_2, …, x_m ∈ M and M is a nonlinear manifold embedded in R^n.


4.2. The algorithm
Locality Preserving Projection (LPP) is a linear approximation of the nonlinear Laplacian Eigenmap [2]. The algorithmic procedure is
formally stated below:

1. Constructing the adjacency graph: Let G denote a graph with m nodes. We put an edge between nodes i and j if x_i and x_j
are close. There are two variations:
(a) ε-neighbourhoods [parameter ε ∈ R]: nodes i and j are connected by an edge if ‖x_i − x_j‖² < ε, where the norm is the usual
Euclidean norm in R^n.
(b) k nearest neighbours [parameter k ∈ N]: nodes i and j are connected by an edge if i is among the k nearest neighbours of j or j is
among the k nearest neighbours of i.
Note: The process of constructing an adjacency graph outlined above is correct if the data actually lie on a low-dimensional manifold.
In general, though, one might take a more practical viewpoint and construct an adjacency graph based on any principle (for example,
perceptual similarity for natural signals, hyperlink structures for web documents, etc.). Once such an adjacency graph is obtained, LPP
will try to optimally preserve it in choosing projections.

2. Choosing the weights: Here, as well, we have two variations for weighting the edges. W is a sparse symmetric m × m matrix with
W_ij holding the weight of the edge joining vertices i and j, and 0 if there is no such edge.
(a) Heat kernel [parameter t ∈ R]: if nodes i and j are connected, put
W_ij = exp(−‖x_i − x_j‖² / t).
The justification for this choice of weights can be traced back to [2].
(b) Simple-minded [no parameter]: W_ij = 1 if and only if vertices i and j are connected by an edge.

3. Eigenmaps: Compute the eigenvectors and eigenvalues for the generalized eigenvector problem:
X L X^T a = λ X D X^T a,
where D is a diagonal matrix whose entries are column (or row, since W is symmetric) sums of W, D_ii = Σ_j W_ji, and L = D − W is the
Laplacian matrix. The i-th column of the matrix X is x_i.
Let the column vectors a_0, …, a_{l−1} be the solutions of the above equation, ordered according to their eigenvalues λ_0 < … < λ_{l−1}. The
embedding is then as follows:
x_i → y_i = A^T x_i,  A = (a_0, a_1, …, a_{l−1}),
where y_i is an l-dimensional vector and A is an n × l matrix.
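
A minimal sketch of steps 1-3 above (the k-NN heat-kernel graph, the small regularizer on the right-hand matrix, and all names are our own choices; SciPy's generalized symmetric eigensolver is used for X L X^T a = λ X D X^T a):

import numpy as np
from scipy.linalg import eigh

def lpp(X, k=5, t=1.0, l=2):
    # Step 1: k-nearest-neighbour adjacency graph on the rows of X.
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]          # skip self at index 0
        W[i, nbrs] = np.exp(-sq[i, nbrs] / t)      # Step 2: heat-kernel weights
    W = np.maximum(W, W.T)                         # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian
    # Step 3: generalized eigenproblem X L X^T a = lambda X D X^T a.
    Xt = X.T                                       # columns are the points x_i
    M1, M2 = Xt @ L @ Xt.T, Xt @ D @ Xt.T
    vals, vecs = eigh(M1, M2 + 1e-9 * np.eye(M2.shape[0]))  # ascending order
    A = vecs[:, :l]                                # smallest-eigenvalue vectors
    return X @ A, A                                # embeddings y_i = A^T x_i

Y, A = lpp(np.random.rand(50, 10), k=4, l=2)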

5 Geometrical Justification
The Laplacian matrix L = D − W for a finite graph [4] is analogous to the Laplace Beltrami operator ℒ on compact Riemannian
manifolds. While the Laplace Beltrami operator for a manifold is generated by the Riemannian metric, for a graph it comes from the
adjacency relation.
Let M be a smooth, compact, d-dimensional Riemannian manifold. If the manifold is embedded in R^n, the Riemannian structure on the
manifold is induced by the standard Riemannian structure on R^n. We are looking here for a map from the manifold to the real line such
that points close together on the manifold get mapped close together on the line. Let f be such a map, and assume that f : M → R is twice
differentiable.
Belkin and Niyogi [2] showed that the optimal map preserving locality can be found by solving the following optimization problem on
the manifold:

arg min_{‖f‖_{L²(M)} = 1} ∫_M ‖∇f‖²,

which is equivalent to

arg min_{‖f‖_{L²(M)} = 1} ∫_M ℒ(f) f,

where the integral is taken with respect to the standard measure on a Riemannian manifold and ℒ is the Laplace Beltrami operator on the
manifold, i.e., ℒf = −div(∇f). Thus, the optimal f has to be an eigenfunction of ℒ. The integral ∫_M ℒ(f) f can be discretely
approximated by ⟨f(X), Lf(X)⟩ = f^T(X) L f(X) on a graph, where

f(X) = [f(x_1), f(x_2), …, f(x_m)]^T.

If we restrict the map to be linear, i.e., f(x) = a^T x, then we have

f(X) = X^T a,  ⟨f(X), Lf(X)⟩ = f^T(X) L f(X) = a^T X L X^T a.

The constraint can be computed as follows:

‖f‖²_{L²(M)} = ∫_M (a^T x)² dx = ∫_M (a^T x x^T a) dx = a^T (∫_M x x^T dx) a,

where dx is the standard measure on a Riemannian manifold. By spectral graph theory [4], the measure dx directly corresponds to the
measure for the graph, which is the degree of the vertex, i.e., D_ii. Thus, ‖f‖²_{L²(M)} can be discretely approximated as follows:

‖f‖²_{L²(M)} = a^T (∫_M x x^T dx) a ≈ a^T (Σ_i x_i x_i^T D_ii) a = a^T X D X^T a.

Finally, we conclude that the optimal linear projective map, i.e., f(x) = a^T x, can be obtained by solving the following objective
function:

arg min_{a^T X D X^T a = 1} a^T X L X^T a.


These projective maps are the optimal linear approximations to the Eigen functions of the Laplace Beltrami operator on the manifold.
Therefore, they are capable of discovering the nonlinear manifold structure.

6 Experimental Results
Some simple synthetic examples given in [9] show that LPP can have more discriminating power than PCA and be less sensitive to
outliers. In this section, several experiments are carried out to demonstrate the effectiveness of our proposed Laplacian faces
technique for face representation and recognition.

6.1 Face Representation Using Laplacian faces
As defined earlier, a face image can be represented as a point in image space. A typical image of size m × n describes a point in
(m·n)-dimensional image space. However, due to the undesirable variations resulting from changes in lighting, facial
expression, and pose, the image space might not be an ideal space for visual representation. The images of faces in the training set are
used to learn such a locality preserving subspace. The subspace is spanned by a set of eigenvectors of the generalized eigenvector
problem of Section 4, i.e., w_0, w_1, …, w_{k−1}. We can display the eigenvectors as images; these images may be called Laplacian faces.
Using the face database as the training set, we present the first 10 Laplacian faces in the figure, in conjunction with Eigen faces and
Fisher faces. A face image can be mapped into the locality preserving subspace by using the Laplacian faces. It is interesting to note
that the Laplacian faces are in some ways similar to Fisher faces.


Figure 1: Distribution of the 10 testing samples in the reduced representation subspace. As can be seen, these testing
samples optimally find their coordinates which reflect their intrinsic properties, i.e., pose and expression.

When the Laplacian faces are created, face recognition [2], [14], [28], [29] becomes a pattern classification task. In this section, we
examine the performance of our proposed Laplacian faces method for face recognition. The system performance is compared with the
Eigen faces method [28] and the Fisher faces method [2], two of the most popular linear methods in face recognition. In this study,
the PIE (pose, illumination, and expression) face database was tested. In all the experiments, pre-processing to
locate the faces was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the
same position. Then the facial areas were cropped into the final images for matching. Figure 2 shows the original image and the
cropped image. The size of each cropped image in all the experiments is 32 × 32 pixels, with 256 grey levels per pixel. Thus, each
image is represented by a 1,024-dimensional vector in image space.



Figure 2: The original face image and the cropped image

The details of our methods for face detection and alignment can be found in [30], [32]. No further pre-processing is done. Different
pattern classifiers have been applied for face recognition, including nearest-neighbour [2], Bayesian [15], Support Vector Machine [17],
etc. In this paper, we apply the nearest-neighbour classifier for its simplicity, with the Euclidean metric as our distance measure.
In short, the recognition process has three steps. First, we calculate the Laplacian faces from the training set of face images;
then the new face image to be recognized is projected into the face subspace spanned by the Laplacian faces; finally, the new face
image is identified by a nearest-neighbour classifier.
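
A minimal sketch of these three steps (all names are our own; A stands for a projection matrix such as the one produced by the hypothetical lpp() sketch in Section 4):

import numpy as np

def recognize(train_X, train_labels, test_x, A):
    """Nearest-neighbour recognition in the Laplacian face subspace.

    train_X: gallery images as rows; A: learned projection matrix."""
    train_Y = train_X @ A                     # step 2: project the gallery
    y = test_x @ A                            # project the probe image
    d = np.linalg.norm(train_Y - y, axis=1)   # Euclidean distances
    return train_labels[int(np.argmin(d))]    # step 3: label of the nearest neighbour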


Figure 3: The Eigenvalues of LPP and Laplacian Eigenmap.


6.1.1 PIE Database
The PIE face database contains 68 subjects with 41,368 face images in total. The face images were captured by 13 synchronized
cameras and 21 flashes, under varying pose, illumination, and expression. We used 170 face images for each individual in our
experiment: 85 for training and the other 85 for testing. Figure 4 shows some of the faces with pose, illumination, and expression
variations in the PIE database.


Figure 4: The sample cropped face images of one individual from PIE database. The original face images are taken under
varying pose, illumination, and expression

Table 1 shows the recognition results. As can be seen, Fisher faces perform comparably to our algorithm on this database, while Eigen
faces perform poorly. The error rates for Laplacian faces, Fisher faces, and Eigen faces are 4.6 percent, 5.7 percent, and 20.6 percent,
respectively. Figure 5 shows a plot of error rate versus dimensionality reduction. As can be seen, the error rate of our Laplacian faces
method decreases quickly as the dimensionality of the face subspace increases, and achieves its best result at 110 dimensions. There is
no significant improvement if more dimensions are used. Eigen faces achieve their best result at 150 dimensions. For Fisher faces, the
dimension of the face subspace is bounded by c - 1, and it achieves the best result with c -1 dimensions. The dashed horizontal line in
the figure shows the best result obtained by Fisher faces.



TABLE 1: Performance Comparison on the PIE Database



Figure 5: Recognition accuracy versus dimensionality reduction on PIE database


6.2 Discussion
These experiments on the database have been systematically performed, and they reveal a number of remarkable points:

1. All three approaches performed better in the optimal face subspace than in the original image space.

2. In all the experiments, Laplacian faces consistently perform better than Eigen faces and Fisher faces.
These experiments also demonstrate that our algorithm is especially suitable for frontal face images. Likewise, our algorithm takes
advantage of more training samples, which is important for real-world face recognition systems.

3. Compared to the Eigen faces method, the Laplacian faces method encodes more discriminating information in the low-dimensional
face subspace by preserving local structure, which is more important than the global structure for classification, especially when
nearest-neighbour-like classifiers are used. In effect, if there is reason to believe that Euclidean distances ‖x_i − x_j‖ are significant
only if they are small (local), then our algorithm finds a projection that respects such a belief.

7 Conclusion and future work
The manifold ways of face analysis (representation and recognition) are introduced in this paper in order to identify the underlying
nonlinear manifold structure by way of linear subspace learning. To the best of our knowledge, this is the first devoted work on face
representation and recognition which explicitly considers the manifold structure. The manifold structure is approximated by the
adjacency graph computed from the data points. Using the notion of the graph Laplacian, we then compute a transformation matrix
which maps the face images into a face subspace. We call this the Laplacian faces approach. The Laplacian faces are obtained by
finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. This linear
transformation optimally preserves local manifold structure.
One of the vital problems in face manifold learning is to estimate the intrinsic dimensionality of the nonlinear face manifold,
or its degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space.
Some previous works [35], [36] show that the local tangent space can be estimated using points in a neighbour set. Hence, one
possibility is to estimate the dimensionality of the tangent space.
An additional possible extension of our work is to study the use of unlabelled samples. It is important to note that the work
presented here is a general method for face analysis (face representation and recognition) by discovering the underlying face manifold
structure. Learning the face manifold (or learning Laplacian faces) is essentially an unsupervised learning process. Since the face
images are supposed to reside on a submanifold embedded in a high-dimensional ambient space, we believe that the
unlabelled samples are of great value.

REFERENCES:
[1] A. Levin and Shashua , Principal Component Analysis over Continuous Subspaces and Intersection of Half-Spaces, Proc.
European Conf. Computer Vision, May 2002.
[2] A. Levin, Shashua and S. Avidan, Manifold Pursuit: A New Approach to Appearance Based Recognition, Proc. Intl Conf.
Pattern Recognition, Aug. 2002.
[3] A.M. Martinez and A.C. Kak, PCA versus LDA, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp.
228-233, Feb. 2001.
[4] A.U. Batur and M.H. Hayes, Linear Subspace for Illumination Robust Face Recognition, Proc. IEEE Intl Conf. Computer
Vision and Pattern Recognition, Dec. 2001.
[5] A. Pentland and Moghaddam, Probabilistic Visual Learning for Object Representation, IEEE Trans. Pattern Analysis and
Machine Intelligence, vol. 19, pp. 696-710, 1997.
[6] F.R.K. Chung, Spectral Graph Theory, Proc. Regional Conf. Series in Math., no. 92, 1997.
[7] H. Murase and S.K. Nayar, Visual Learning and Recognition of 3-D Objects from Appearance, Intl J. Computer Vision,
vol. 14, pp. 5-24, 1995.
[8] H. Zha and Z. Zhang, Isometric Embedding and Continuum ISOMAP, Proc. 20th Intl Conf.Machine Learning, pp. 864-
871, 2003.
[9] H.S. Seung and D.D. Lee, The Manifold Ways of Perception, Science, vol. 290, Dec. 2000.
[10] J. Shi and J. Malik, Normalized Cuts and Image Segmentation, IEEE Trans. Pattern Analysis and Machine Intelligence, vol.
22, pp. 888-905, 2000.
[11] J. Yang, Y. Yu, and W. Kunz, An Efficient LDA Algorithm for Face Recognition, Proc. Sixth Intl Conf. Control,
Automation, Robotics and Vision, 2000.
[12] J.B. Tenenbaum, V. de Silva, and J.C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction,
Science, vol. 290, Dec. 2000.
[13] K.-C. Lee, J. Ho, M.-H. Yang, and D. Kriegman, "Video-Based Face Recognition Using Probabilistic Appearance Manifolds,"
Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 313-320, 2003.
[14] L. Sirovich and M. Kirby, Low-Dimensional Procedure for the Characterization of Human Faces, J. Optical Soc. Am. A,
vol. 4, pp. 519-524, 1987.
[15] L. Wiskott, J.M. Fellous, N. Kruger, and C.v.d. Malsburg, Face Recognition by Elastic Bunch Graph Matching, IEEE
Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 775-779, 1997.
[16] L.K. Saul and S.T. Roweis, Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds, J.
Machine Learning Research, vol. 4, pp. 119-155, 2003.
[17] M. Belkin and P. Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, Proc. Conf.
Advances in Neural Information Processing System 15, 2001.
[18] M. Belkin and P. Niyogi, Using Manifold Structure for Partially Labeled Classification, Proc. Conf. Advances in Neural
Information Processing System 15, 2002.
[19] M. Brand, Charting a Manifold, Proc. Conf. Advances in Neural Information Processing Systems, 2002.
[20] M. Turk and A.P. Pentland, Face Recognition Using Eigen faces, IEEE Conf. Computer Vision and Pattern Recognition,
1991.
[21] M.-H. Yang, Kernel Eigen faces vs. Kernel Fisher faces: Face Recognition Using Kernel Methods, Proc. Fifth Intl Conf.
Automatic Face and Gesture Recognition, May 2002.
[22] P.J. Phillips, Support Vector Machines Applied to Face Recognition, Proc. Conf. Advances in Neural Information
Processing Systems 11, pp. 803-809, 1998.

[23] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, Eigen faces vs. Fisher faces: Recognition Using Class Specific Linear
Projection, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19,no. 7, pp. 711-720, July 1997.
[24] Q. Liu, R. Huang, H. Lu, and S. Ma, Face Recognition Using Kernel Based Fisher Discriminant Analysis, Proc. Fifth Intl
Conf. Automatic Face and Gesture Recognition, May 2002.
[25] R. Gross, J. Shi, and J. Cohn, Where to Go with Face Recognition, Proc. Third Workshop Empirical Evaluation Methods in
Computer Vision, Dec. 2001.
[26] R. Xiao, L. Zhu, and H.-J. Zhang, Boosting Chain Learning for Object Detection, Proc. IEEE Intl Conf. Computer Vision,
2003.
[27] S. Roweis, L. Saul, and G. Hinton, Global Coordination of Local Linear Models, Proc. Conf. Advances in Neural
Information Processing System 14, 2001.
[28] S. Yan, M. Li, H.-J. Zhang, and Q. Cheng, Ranking Prior Likelihood Distributions for Bayesian Shape Localization
Framework, Proc. IEEE Intl Conf. Computer Vision, 2003.
[29] S.T. Roweis and L.K. Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science, vol. 290, Dec.
2000.
[30] S.Z. Li, X.W. Hou, H.J. Zhang, and Q.S. Cheng, Learning Spatially Localized, Parts-Based Representation, Proc. IEEE
Intl Conf. Computer Vision and Pattern Recognition, Dec. 2001.
[31] T. Shakunaga and K. Shigenari, Decomposed Eigenface for Face Recognition under Various Lighting Conditions, IEEE
Intl Conf. Computer Vision and Pattern Recognition, Dec. 2001.
[32] T. Sim, S. Baker, and M. Bsat, The CMU Pose, Illumination, and Expression (PIE) Database, Proc. IEEE Intl Conf.
Automatic Face and Gesture Recognition, May 2002.
[33] W. Zhao, R. Chellappa, and P.J. Phillips, "Subspace Linear Discriminant Analysis for Face Recognition," Technical Report
CAR-TR-914, Center for Automation Research, Univ. of Maryland, 1999.
[34] X. He and P. Niyogi, Locality Preserving Projections, Proc. Conf. Advances in Neural Information Processing Systems,
2003.
[35] Y. Chang, C. Hu, and M. Turk, Manifold of Facial Expression, Proc. IEEE Intl Workshop Analysis and Modeling of
Faces and Gestures, Oct. 2003.
[36] Z. Zhang and H. Zha, Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment,
Technical Report CSE-02-019, CSE, Penn State Univ., 2002














Operation and Control Techniques of SMES Unit for Fault Ride through
Improvement of a DFIG Based WECS
Sneha Patil¹

¹Research Scholar (M.Tech), Bharti Vidyapeeth University College of Engineering, Pune
Abstract: An SMES stores energy in the form of a magnetic field within a superconducting coil. The magnetic field is created by a DC
current flowing through the coil. To ensure proper operation, the temperature of the SMES must be maintained below the critical
temperature; at this temperature the resistance of the coil is zero and hence there is no loss of stored energy. The ability of the SMES
to store energy is influenced by the current density. The energy is fed back to the grid by conversion of the magnetic field into
electrical energy. An SMES system comprises a superconductor, a refrigerant, a power conditioning unit, and a control unit. Energy
storage is achieved by continuous circulation of current inside the coil. Since the energy is never converted into any form other than
electrical, an SMES has lower losses than any other storage mechanism, so its efficiency is very high. It exhibits a very short cycling
time, and the number of charge-discharge cycles is very high. The major drawbacks of this technology are its very high initial
cost and the losses associated with auxiliaries. This paper covers various aspects of the SMES configuration and its connection in the
power system.
Keywords: Energy Storage, Superconducting Magnetic Energy Storage (SMES), Voltage Source Converter (VSC), Current Source
Converter (CSC), Wind Energy Conversion System (WECS), Doubly Fed Induction Generators (DFIG), Voltage Sag, Voltage Swell
INTRODUCTION



Fig. 1. Block diagram of an SMES unit

I. CONTROL METHODS FOR SMES

Various controlling methods for an SMES unit are discussed below:


THYRISTOR BASED SMES
A thyristor based SMES technology uses a Star- Delta transformer along with a thyristorised AC to DC bridge converter and an SMES
coil. A layout of a thyristorized SMES controller is shown in Fig. 2. Converter assigns polarity to the superconductor. Charging and
discharging operation is performed by varying the sequence of firing thyristors by modifying the delay angle. The converter performs
rectification operation for a delay angle is set lesser than 90. This enables charging of the SMES coil. For a converter angle set more
than 90 the converter allows discharging of SMES by operating as an inverter. Thus energy transfer can be achieved as desired. When
the power system is operating in steady state the SMES coil should not supply or absorb any active or reactive power.



Fig. 2. SMES unit controlled by a 6-pulse thyristorized AC-DC bridge converter

If V_sm0 is the no-load maximum DC voltage of the bridge, the voltage across the DC terminals of the converter is

V_sm = V_sm0 cos α.  (1)

If I_sm0 is the initial coil current and P_sm is the active power transferred between the SMES and the grid, then the relation between the current
and the voltage of the SMES coil is given as

I_sm = I_sm0 + (1/L) ∫ V_sm dτ,  (2)

P_sm = V_sm I_sm.  (3)

The polarity of the bridge current I_sm cannot be changed; therefore the sign of the active power P_sm is a function of the delay angle α and follows the polarity of
V_sm. If V_sm is positive, the SMES unit is charged by absorbing power from the grid, whereas if V_sm is negative, the SMES coil is
discharged by feeding power from the SMES to the grid. The amount of energy stored within the SMES coil is given by

W_sm = W_sm0 + ∫ P_sm dτ,  (4)

where W_sm0 defines the initial energy in the SMES.
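
As a rough illustration of Eqs. (1)-(4) (a minimal sketch only; the numerical values, time step, and firing-angle schedule below are our own assumptions, not taken from the paper), the charge/discharge behaviour can be simulated as:

import numpy as np

V_sm0 = 1000.0                   # no-load max DC voltage of the bridge (V), assumed
L_coil = 2.0                     # coil inductance (H), assumed
I_sm = 500.0                     # initial coil current I_sm0 (A), assumed
W_sm = 0.5 * L_coil * I_sm**2    # initial stored energy W_sm0 (J)
dt = 1e-3                        # integration time step (s)

# alpha < 90 deg charges the coil; alpha > 90 deg discharges it.
for alpha_deg in [30.0] * 1000 + [150.0] * 1000:
    V_sm = V_sm0 * np.cos(np.radians(alpha_deg))   # Eq. (1)
    I_sm += (V_sm / L_coil) * dt                   # Eq. (2)
    P_sm = V_sm * I_sm                             # Eq. (3)
    W_sm += P_sm * dt                              # Eq. (4)

print(f"Final current {I_sm:.1f} A, stored energy {W_sm/1e3:.1f} kJ")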



VOLTAGE SOURCE CONVERTER BASED SMES
The components of a voltage source converter based SMES are a star-delta transformer, an IGBT-based six-pulse
PWM converter, an IGBT-based two-quadrant chopper, and an SMES coil. The two converters are connected by a
DC-link capacitor. A schematic diagram of this arrangement is shown in Fig. 3, and the control technique of the voltage source converter is
depicted in Fig. 4. The voltage source converter serves as the interfacing device linking the SMES coil to the grid. Proportional-integral
controllers generate the direct- and quadrature-axis current references by comparing the actual DC-link voltage and
terminal voltage with their reference values; these quantities are used as input signals to the voltage source converter. The PWM converter
maintains a constant voltage across the DC-link capacitor.


Fig. 3. Schematic of the voltage source converter based SMES



Fig. 4. Control technique of a voltage source converter

The chopper controls the energy transfer through the SMES coil. The chopper switches the appropriate IGBTs to control the polarity of
V_sm. This voltage can be adjusted by varying the duty cycle of the chopper: if the duty cycle is greater
than 0.5, energy is stored into the SMES coil, whereas if the duty cycle is less than 0.5, the SMES coil is discharged. The
gate signals for the chopper circuit are generated by comparing the PWM signals with a triangular carrier signal.
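
A minimal sketch of this duty-cycle rule (the average-voltage relation V_sm = (2D − 1)·V_dc is a common textbook expression for a two-quadrant chopper and is our assumption; the paper itself states only the D = 0.5 threshold):

# Two-quadrant chopper: average coil voltage as a function of duty cycle D.
# Assumed relation V_sm = (2*D - 1) * V_dc; D > 0.5 charges, D < 0.5 discharges.
def avg_coil_voltage(duty_cycle: float, v_dc: float) -> float:
    return (2.0 * duty_cycle - 1.0) * v_dc

for D in (0.3, 0.5, 0.7):
    v = avg_coil_voltage(D, 600.0)   # 600 V DC link is an illustrative value
    mode = "charging" if v > 0 else ("standby" if v == 0 else "discharging")
    print(f"D = {D:.1f}: V_sm = {v:+.0f} V ({mode})")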


CURRENT SOURCE CONVERTER BASED SMES
The block diagram of a current source converter controlled SMES is shown in Fig. 5.


Fig. 5. Controlling technique for a current source converter

The SMES coil is directly linked to the DC terminals of the current source converter, whereas the AC terminals of the converter are
connected to the grid. The shunt-connected capacitor bank absorbs the energy stored in the line inductance during
commutation of the AC current and also filters out the higher-order harmonics. The input signal to the IGBTs is regulated to control the current
flowing through the SMES. The SMES stores energy in the form of current; therefore real as well as reactive power is transferred at
very high speed. A pulse width modulation technique is implemented to ensure that the higher-order harmonics of a 12-pulse current
source converter are minimized: if the modulation index is maintained in the range 0.2 to 1, the higher-order
harmonics are essentially eliminated. The ripple content on the DC side is higher when a 6-pulse current source converter is employed,
and reduced in the case of a 12-pulse converter; this reduces the losses on the AC side of the system. As depicted in Fig. 5, the
proportional-integral controller compares the actual and reference values of I_d. L stands for the inductance of the superconducting coil,
whereas R_d and V_d are the resistance and voltage of the DC circuit, respectively. The rate of charging of the superconducting coil is
influenced by the value of V_d, which is a function of the modulation index.


II. COMPARISON OF VARIOUS CONTROL TECHNIQUES
A comparison of the control techniques for the SMES coil is presented in Table 1. The topologies are compared on the basis of their ability
to control active and reactive power, the layout and operational features of the control unit, the total harmonic distortion
generated by the control technique, the installation and operational costs, and their self-commutation capabilities.

Ability to control active and reactive power:
- Thyristorized control: Effective control over real power, but inefficient in controlling reactive power, since the controller presents a lagging power factor to the network. Significant lower-order harmonics are generated by the firing of the thyristors. Real and reactive power cannot be controlled independently.
- Voltage source converter control: Independent real and reactive power control is possible. Continuous reactive power support at rated capacity, even with negligible current in the superconductor.
- Current source converter control: Independent control of real as well as reactive power exchange through the SMES. Reactive power support depends on the coil current.

Operation of the control unit:
- Thyristorized control: Highly controllable due to the presence of a single AC-DC converter unit.
- Voltage source converter control: The control technique is convoluted compared to the other two techniques, due to the presence of an AC-DC converter and a DC-DC chopper unit.
- Current source converter control: Has a single AC-DC unit and hence can be controlled easily. For applications of higher rated power, units can be operated in parallel.

Total harmonic distortion (THD):
- Thyristorized control: Generates more total harmonic distortion than the other two techniques.
- Voltage source converter control: The total harmonic distortion is reduced with this control technique.
- Current source converter control: The total harmonic distortion is reduced with this control technique.

Cost of installation and operation:
- Thyristorized control: Very economical installation and operational costs.
- Voltage source converter control: Lower than a CSC of equivalent rating.
- Current source converter control: The total cost of switching devices is over 170 percent of the cost of the switching devices and diodes used in a voltage source converter of equivalent rating.

Self commutation:
- Thyristorized control: Poorer self-commutating capability than the VSC.
- Voltage source converter control: Better than the CSC.
- Current source converter control: Poorer self-commutating capability than the VSC.

Table 1. Comparison of various SMES control techniques

V. APPLICATION OF SMES
The capability of SMES to respond instantaneously proves beneficial for several applications in the power system.

STORAGE DEVICE:
SMES has the ability to store as much as 5000 MWh of energy at an efficiency as high as 95 percent; the efficiency is
higher for larger units. It can respond within a few milliseconds, which makes it suitable during dynamic changes in the power system. It
can serve as a spinning reserve or as a supplementary reserve, and hence provide supply during outages.

IMPROVEMENT OF PERFORMANCE OF FACTS DEVICES
An SMES unit is capable of storing energy for operation with FACTS devices. The inverter used for FACTS applications and the
power conditioning system of an SMES unit have similar configurations; the only dissimilarity is that FACTS devices operate using
energy provided by the power system and utilize a capacitor unit on the DC side of the converters. The SMES provides real power
along with reactive power through the DC bus, and hence improves the operation of FACTS devices.



Fig. 6. SMES unit applied to FACTS devices


LOAD FOLLOWING:
SMES can support the generators in maintaining a constant output by following the variations in the load pattern.

STABILITY ENHANCEMENT:
An SMES unit can effectively damp low-frequency oscillations and maintain system stability after the occurrence of any transient. It absorbs excess energy from the power system and releases energy in case of any deficiency; thus it increases the stability of the system through energy transfer.

AUTOMATIC GENERATION CONTROL:
SMES can be implemented to minimize the area control error in automatic generation control [4].

SPINNING RESERVES:
When there is an outage of major generation units due to faults or maintenance, the unallocated spinning reserves are used to feed the load. When the superconducting coil is completely charged, the SMES can serve as a large share of the spinning reserve. This is a more economical alternative than other spinning reserves [4,5].

REACTIVE POWER COMPENSATION AND IMPROVEMENT OF POWER FACTOR:
SMES has the ability to control active and reactive power independently, and therefore it can provide reactive power support and enhance the power factor of the system [4].

SYSTEM BLACK START:
SMES units can make provision for starting a generating unit by drawing power from the SMES unit instead of from the power system. This can help the system recover from fault conditions on the grid side [4].

ECONOMIC ENERGY TRANSFER:
By storing energy when it is available in excess and discharging it during deficiency or congestion, SMES can reduce the price of electrical energy and hence serve as an economic alternative for supplying energy.

SAG RIDE THROUGH IMPROVEMENT:
A voltage sag can be defined as a drop in the rms voltage to between 0.1 and 0.9 per unit at the power frequency, for a duration ranging from 0.5 cycle to 1 minute. Causes of voltage sag include the starting of large motors and the switching of large loads. An SMES unit can efficiently support the voltage during such conditions [6].
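The definition above translates directly into a simple event classifier; the sketch below encodes the 0.1-0.9 pu magnitude window and the 0.5 cycle to 1 minute duration window (the 50 Hz base frequency is an illustrative assumption):

```python
# Minimal sag classifier following the definition above (values in per unit).
F_NOM = 50.0                      # nominal power frequency (Hz), assumed
HALF_CYCLE = 0.5 / F_NOM          # minimum sag duration: 0.5 cycle

def is_voltage_sag(v_rms_pu, duration_s):
    """True if the event qualifies as a sag: 0.1-0.9 pu for 0.5 cycle to 1 min."""
    return 0.1 <= v_rms_pu <= 0.9 and HALF_CYCLE <= duration_s <= 60.0

print(is_voltage_sag(0.7, 0.2))   # True: 70% retained voltage for 200 ms
print(is_voltage_sag(0.95, 0.2))  # False: above the 0.9 pu threshold
```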

Fig. 7. Active and reactive power supplied by SMES connected to PCC




DYNAMIC STABILITY:
During sudden addition of a large load, or when a large generating unit is lost, the power system can become dynamically unstable, and the reactive power available within the system may not be sufficient to maintain stability. An SMES unit can be used to provide the requisite active as well as reactive power support to the grid [6,7].

REGULATION OF TIE LINE POWER:
While transferring electricity from one control area to another, the amount of power transferred must match its predefined value. If the generating units are ramped up to send power from one control area while the loading of that system changes, the variation may cause errors in the amount of power delivered and consequently inefficient utilization of the generating units. An SMES unit can be used to eliminate such errors and to ensure efficient utilization of the generators [6].

LOAD SHEDDING DURING LOW FREQUENCY:
When a large generating unit or a transmission line is lost, the system frequency drops and keeps falling until generation and load are balanced again. Due to its ability to supply active power quickly, the SMES unit serves as an effective means to bring the system frequency back to its rated value by eliminating the imbalance between generation and load [6].

RECLOSING OF CIRCUIT BREAKERS:
In order to clear a system fault and bring the line back into operation, the circuit breakers are reclosed. Reclosing is performed when the power angle difference across the circuit breaker lies within limits; when the power angle difference is very high, the protective equipment prohibits the reclosing operation. The SMES unit can feed part of the load and hence decrease the power angle difference so as to permit reclosing. Thus power flow can be restored to normal conditions after the outage of transmission lines [6].

ENHANCEMENT OF POWER QUALITY:
The SMES unit can improve power quality by increasing the LVRT and HVRT capabilities of the power system. It eliminates the variations in power which interrupt the supply to critical consumers. In case of momentary disturbances such as a flashover or a lightning strike, the transmission system trips the power supply, which leads to a voltage sag. By providing a quick response, the SMES unit can avoid disconnection of critical loads [6].

BACKUP SOURCE:
The energy storage capability of SMES can serve as a backup source for sensitive loads, and it can supply heavy industries if there is an outage of generating units. The SMES unit size can be chosen to provide adequate storage while remaining economical [6,7].

DAMPING SSR:
Sub-synchronous resonances are observed in generating units connected to transmission lines containing large series capacitive compensation. This can be damaging for generators. Such sub-synchronous resonance can be avoided by using SMES.





ELECTRO-MAGNETIC LAUNCHERS:
Electro-magnetic launchers require a large pulsed power source. They are utilized as rail guns in defense applications, capable of releasing a projectile at a velocity of more than 2000 meters per second. Since the SMES configuration has a very large energy density, it proves an attractive alternative for this application.

STABILITY OF WIND TURBINES:
The wind turbine generator has issues related to power system stability during transients. A voltage source converter based SMES unit controls real as well as reactive power independently. This characteristic makes the SMES configuration an efficient device for stabilizing the wind energy conversion system [8, 9].

STABILIZATION OF VOLTAGE AND POWER FLUCTUATION IN WECS:
Because of the variation of wind velocity, the voltage and power generated by wind turbine generators are always varying. Such variations give rise to flickering of incandescent bulbs and inaccurate operation of timing devices. As the SMES device can control real as well as reactive power independently, it serves as an attractive means for reducing the fluctuations in voltage and power.


VI. CURRENT STATUS AND FUTURE SCOPE
In 1982-83, an SMES system rated 30 MJ was installed at the Bonneville Power Administration, Tacoma. The installed configuration operated for about 1,200 hours, and the results obtained showed that the SMES configuration successfully met the design requirements [11]. A 20 MWh SMES unit was proposed by the University of Wisconsin in 1988-89. An array of D-SMES units was developed for stabilization of a transmission system that was subjected to huge, suddenly changing loads from the operation of paper mills, which gave rise to uncontrollable load fluctuations and voltage collapse. The SMES units were effective in stabilizing the grid and improving power quality [12]. The largest installation comprises six or seven units installed in upper Wisconsin by American Superconductor in 2000. These 3 MW/0.83 kWh units are currently operated by the American Transmission Company for power quality applications and reactive power support, where each can provide 8 MVA [4]. In the USA, Superconductivity Inc. supplies SMES devices rated 1 MJ and 3 MJ.

Currently, an SMES with an energy rating of 100 MJ/50 MW is being designed; it is said to be the largest SMES configuration to date. The purpose of this SMES unit is to damp the low-frequency oscillations generated within the transmission network. The superconducting magnet to be used for this configuration was realized in 2003, and tests on the magnet were carried out at the Center for Advanced Power Systems [13]. In Japan, an institute named 'The Superconductive Energy Storage Research Association' was set up in 1986 to promote practical applications of the SMES configuration. The Kyushu Electric corporation manufactured a 30 kJ SMES device in 1991 for the stabilization of a 60 kW hydro-electric generation plant, and several tests were performed to prove the suitability of the SMES unit to yield the desired performance [14]. To simplify the choice of the capacity of an SMES unit with the most suitable cost and quality, a 1 kWh/11 MW and a 100 kWh/120 MW SMES configuration were manufactured. The 1 kWh/11 MW unit is being validated by connecting it to 6 kV and 66 kV grids, and these units were tested for compensation of load variations present in the network [15]. In Japan, a 100 MW wind farm was connected with a 15 MWh SMES unit in 2004 for stabilization of the output generated from the wind farm [16]. In 1988, the T-15 superconducting magnet facility in Russia produced an SMES unit with a capacity as high as 370 to 760 MJ [17]. Since 1990, Russian scientists have been designing a 100 MJ/120 MW SMES unit [18]. Korea has developed a 1 MJ/300 kVA SMES unit for UPS applications; this unit can compensate for a 3-second interruption of power and is 96 percent efficient [19]. The Korea Electrotechnology Research Institute fabricated a 3 MJ/750 kVA superconducting magnetic energy storage unit with an operating current of 1000 A for enhancement of power quality in 2005 [20]. The Délégation Générale pour l'Armement (DGA) supports research on applied superconductivity in France. DGA has built a 100 kJ SMES made from Bi-2212 tapes with liquid helium as the coolant. Later it was decided to realize an SMES unit that could work at higher temperatures, around 20 K. DGA targeted the manufacture of an 800 kJ SMES unit based on the high-temperature storage principle; the proposed unit was expected to operate at temperatures as high as 20 K with a current density of more than 300 MA/m^2 [21]. Some organizations in Germany are working together to design an SMES unit rated 150 kJ and 20 kVA, intended for operation as an uninterruptible power supply [22].

The foremost high temperature superconductor based SMES unit was fabricated by American Superconductor in 1997. This unit was applied to a power system located in Germany. Several tests were conducted, which revealed that high temperature
superconductor based SMES units are a viable and attractive alternative for commercial production [23]. Distributed SMES units of small size, called micro-SMES, rated between 1 and 10 MW, are available in the commercial market.

Currently, the United States Department of Energy's Advanced Research Projects Agency-Energy has sponsored projects to validate the application of SMES units in power systems. The project is undertaken by the Swiss company ABB and has received a grant of 4.2 million US dollars. According to the outline laid down for the plan, a 3.3 kWh SMES configuration is proposed. The project will be done in collaboration with the superconducting wire manufacturer SuperPower, Brookhaven National Laboratory, and the University of Houston. The unit is to be scaled toward 1 to 2 MWh and must be economical compared to lead-acid batteries [25]. In Japan, the High Energy Accelerator Research Organization has promoted research on SMES. Scientists there are working on combining a liquid-hydrogen-cooled SMES unit with a hydrogen fuel cell; the concept behind this combination is that when there is an interruption of power, the SMES unit can supply energy instantaneously and later the fuel cell can feed the loads. The device has not been realized yet, though simulations and designs are under study [10].
VII. RATING OF THE SMES CONFIGURATION
The capacity of the SMES unit depends upon the application and the available cycling times. An SMES unit with a very high capacity can damp oscillations quickly, but such a unit is not very economical since it must carry very large currents in the coil. Conversely, a very small SMES capacity is ineffective for damping system oscillations immediately, because the output power of the SMES unit is limited.


VIII. SYSTEM UNDER STUDY

Fig. 8. Block diagram of the system under consideration

The proposed system has doubly fed induction generators with a rating of 9 MW. The SMES unit chosen has an energy rating of 1 MJ and an inductance of 0.5 H; the rated current through the superconductor is calculated to be 2 kA. Operation of the SMES unit during swell conditions is feasible only if the rated inductor current is chosen greater than the nominal current in the coil. The system under consideration has a nominal current of 2 kA flowing through the coil; therefore the maximum amount of energy that can be stored within the SMES coil during a voltage swell is about 1 MJ.
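The rated-current figure follows directly from the coil's stored-energy relation E = (1/2) L I^2; a quick Python check using the paper's values confirms it:

```python
from math import sqrt

# Stored-energy relation for a superconducting coil: E = 0.5 * L * I^2
E = 1e6   # rated energy of the SMES unit (J): 1 MJ, from the paper
L = 0.5   # coil inductance (H), from the paper

I_rated = sqrt(2 * E / L)
print(f"rated coil current = {I_rated:.0f} A")  # -> 2000 A, i.e. 2 kA
```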

RESPONSE OF SMES UNIT IN THE EVENT OF VOLTAGE SAG AND SWELL
The current flowing through the SMES coil is unidirectional, but the duty cycle of the chopper circuit, obtained from a fuzzy logic controller, yields both positive and negative values of SMES voltage. This provides a reversible and continuous flow of power for all operating conditions. The proposed SMES unit works in three different operating modes:
(1) stand-by mode
(2) discharging mode
(3) charging mode

(1) Stand-by mode:
Standby mode occurs when the wind energy conversion system is working under healthy operating conditions. This mode is selected when the duty cycle is 0.5, and the SMES coil current is maintained at its rated value, in this case 2 kA. There is no transfer of energy to or from the SMES coil, and the coil remains charged at its maximum energy, i.e., 1 MJ in this case. The DC-link capacitor holds a constant voltage of 10 kV across its terminals.


Fig. 9: SMES transient responses during voltage sag and swell: (a) current, (b) voltage, (c) duty cycle, (d) energy stored in SMES, and (e) DC voltage in SMES

(2) Discharging mode
During a voltage sag on the grid, the SMES unit enters discharging mode, in which d has a value less than 0.5. In this mode the energy stored inside the SMES unit is supplied to the power system. At time t = 2 seconds a voltage sag is simulated, and the current flowing through the SMES coil decreases with a negative slope. The rate of discharge of the SMES coil is predetermined and is a function of d. The voltage across the SMES depends upon the value of d and the voltage across the DC-link capacitor. When the fault is cleared, the coil is recharged. The discharging mode of operation is compared with the charging mode in Fig. 9.

(3) Charging mode:
During a voltage swell event the SMES unit undergoes charging operation, with d above 0.5. At time t = 2 seconds a voltage swell is simulated; the current flowing through the SMES coil rises and the charge stored inside the SMES unit increases. Energy is transferred from the power system to the SMES unit until it reaches a maximum capacity determined by the value of the duty cycle. In the system under consideration the maximum capacity of the unit is 1.03 MJ. Power modulation is permissible up to this capacity; beyond it, V_SMES drops and becomes zero when the maximum SMES current is reached. Fig. 9 represents the charging mode of the SMES coil.
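The mode behaviour described above can be captured by a standard average model of a two-quadrant chopper. The sketch below assumes the relation V_SMES = (2d - 1) * V_dc, which is consistent with the sign conventions used here (d = 0.5 gives stand-by, d > 0.5 charging, d < 0.5 discharging); the relation itself is an assumed textbook model, not stated explicitly in the paper.

```python
# Assumed average chopper model relating duty cycle d to the coil voltage.
V_DC = 10e3  # DC-link voltage (V); 10 kV in the system under study

def smes_voltage(d: float) -> float:
    return (2 * d - 1) * V_DC

def mode(d: float) -> str:
    if d > 0.5:
        return "charging"      # positive coil voltage, current rises
    if d < 0.5:
        return "discharging"   # negative coil voltage, energy released
    return "stand-by"          # zero average voltage, current held at rating

for d in (0.5, 0.3, 0.7):
    print(f"d = {d}: {mode(d)}, V_SMES = {smes_voltage(d):+.0f} V")
```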

The following observations are drawn:
(i) The current flowing through the SMES unit during dip and swell events mirrors the energy stored inside the coil. The energy level at any instant is calculated as E = (1/2) L I^2.
(ii) During both sag and swell events, the voltage across the SMES unit is kept at zero after the maximum current starts flowing through the SMES. In order to reduce SMES operating expenses, it is advisable to bypass the SMES unit once the power system becomes stable; this can be done with a bypass switch in parallel with the SMES unit.
(iii) During both voltage dip and swell, the voltage across the DC-link capacitor of the SMES unit oscillates in the opposite manner to the voltage across the SMES coil. The level of this voltage at any instant depends upon the SMES voltage and the duty cycle d.
(iv) The maximum overshoot of the DC-link voltage lies inside the safety limit of 1.25 per unit of the system voltage.
CONCLUSION
This paper gives a brief account of the various control techniques used for SMES, which include thyristorized control, control using a voltage source converter, and control using a current source converter, together with a comparison of these methods. A brief summary of the various applications of SMES and of the installations of SMES technology throughout the world so far is also given, along with a note on the selection of the rating of an SMES unit for a given application. The behavior of SMES during charging and discharging events, on occurrence of a sag and a swell at the distribution end of the system, is also analysed.

REFERENCES:
[1] Mahmoud Y. Khamaira, A. M. Shiddiq Yunus, A. Abu-Siada, "Improvement of DFIG-based WECS Performance Using SMES Unit", Australasian Universities Power Engineering Conference, 2013.
[2] R. H. Lasseter, S. G. Jalali, "Dynamic Response of Power Conditioning Systems for Superconductive Magnetic Energy Storage", IEEE Transactions on Energy Conversion, vol. 6.
[3] Knut Erik Nielsen, "Superconducting magnetic energy storage in power systems with renewable energy sources", M.Sc. thesis, Norwegian University of Science and Technology.
[4] P. D. Baumann, "Energy conservation and environmental benefits realized from SMES", IEEE Transactions on Energy Conversion, vol. 7.
[5] C.-H. Hsu, W.-J. Lee, "SMES storage for power system application", IEEE Transactions on Industry Applications, vol. 29.
[6] W. V. Torre, S. Eckroad, "Improving power delivery through application of SMES", IEEE Power Engineering Society Winter Meeting, 2001.
[7] X. D. Xue, K. W. E. Cheng, D. Sutanto, "Power system applications of SMES", IEEE Industry Applications Conference, 2005, vol. 2.
[8] O. Wasynczuk, "Damping SSR using energy storage", IEEE Transactions on Power Apparatus and Systems, vol. PAS-101.
[9] C.-J. Wu, C.-F. Lu, "Damping torsional oscillations by SMES unit", Electric Machines and Power Systems, vol. 22.
[10] Y. Makida, H. Hirabayashi, T. Shintomi, S. Nomura, "Design of SMES with liquid hydrogen for emergency purpose", IEEE Transactions on Applied Superconductivity, vol. 17.
[11] D. Rogers, H. J. Boenig, "Operation of 30 MJ SMES in BPA Electrical Grid", IEEE Transactions on Magnetics, vol. 21.
[12] R. W. Boom, "SMES for electric utilities - A review of the 20-year Wisconsin program", Proceedings of the International Power Sources Symposium, vol. 2.
[13] Michael Steurer, Wolfgang Hribernik, "Frequency Response Characteristics of a 100 MJ SMES: Measurements and Model Refinement", IEEE Transactions on Applied Superconductivity, vol. 15.
[14] F. Irie, M. Takeo, "A Field Experiment on Power Line Stabilization by an SMES System", IEEE Transactions on Magnetics, vol. 15.
[15] Tsuneo Sannomiya, Hidemi Hayashi, "Test Results of Compensation for Load Fluctuation under Fuzzy Control by a 1 kWh/1 MW SMES", IEEE Transactions on Applied Superconductivity, vol. 11.
[16] S. Nomura, Y. Ohata, "Wind Farms Linked by SMES Systems", IEEE Transactions on Applied Superconductivity, vol. 15.
[17] N. A. Chernoplekov, N. A. Monoszon, "T-15 Facility and Test", IEEE Transactions on Magnetics, vol. 23.
[18] V. V. Andrianov, V. M. Batenin, "Conceptual Design of a 100 MJ SMES", IEEE Transactions on Magnetics, vol. 27.
[19] K. C. Seong, H. J. Kim, "Design and Testing of 1 MJ SMES", IEEE Transactions on Applied Superconductivity, vol. 2.
[20] H. J. Kim, K. C. Seong, "3 MJ/750 kVA SMES System for Improving Power Quality", IEEE Transactions on Applied Superconductivity.
[21] P. Tixador, B. Bellin, "Design of an 800 kJ HTS SMES", IEEE Transactions on Applied Superconductivity, vol. 15.
[22] M. Ono, S. Hanai, "Development of a 1 MJ Cryocooler-Cooled Split Magnet with Silver-Sheathed Bi-2223 Tapes for Silicon Single-Crystal Growth Applications", IEEE Transactions on Applied Superconductivity, vol. 10.
[23] Weijia Yuan, "Second-Generation HTS and Their Applications for Energy Storage", Springer Theses, doctoral thesis accepted by the University of Cambridge, Cambridge.
[24] Phil McKenna, "Superconducting Magnets for Grid-Scale Storage", Technology Review, Energy, March 2011.
[25] H. Chen et al., "Progress in electrical energy storage system: A critical review", Progress in Natural Science, vol. 19.
Controlling Packet Loss at the Network Edges by Using Tokens
B. Suryanarayana¹, K. Bhargav Kiran²

¹ Research Scholar (PG), Dept. of Computer Science and Engineering, Vishnu Institute of Engineering, Bhimavaram, India
² Assistant Professor, Dept. of Computer Science and Engineering, Vishnu Institute of Engineering, Bhimavaram, India
E-mail: Surya0530@gmail.com
Abstract - The Internet carries simultaneous audio, video and data traffic. This requires the Internet to keep packet loss under control, which in turn depends very much on congestion control. A series of protocols have been introduced to supplement the insufficient TCP mechanism for controlling network congestion. CSFQ was designed as an open-loop controller to provide fair best-effort service by supervising per-flow bandwidth consumption, but it became helpless when P2P flows started to dominate Internet traffic. Token-Based Congestion Control (TBCC) is based on closed-loop congestion control principles; it restricts the token resources consumed by an end-user and provides fair best-effort service with O(1) complexity. Like Self-Verifying CSFQ and Re-feedback, it experiences a heavy load when policing inter-domain traffic for lack of trust. In this paper, Stable Token-Limited Congestion Control (STLCC) is introduced as a new protocol which adds inter-domain congestion control to TBCC and makes the congestion control system stable. STLCC is able to shape input and output traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes packet loss to the network edge and improves network performance. Finally, a simple version of STLCC is introduced. This version is deployable in the Internet without any modification of the IP protocol and also preserves the packet datagram.

Keywords - TCP, Tokens, Network, Congestion Control Algorithm, Addressing, Formatting, Buffering, Sequencing, Flow Control, Error Control, QoS, Random Early Detection (RED).
INTRODUCTION
Modern IP network services provide for the simultaneous digital transmission of video, voice and data. These services require congestion control protocols and algorithms that keep the packet loss parameter under control. Congestion control is one of the cornerstones of packet switching networks. It should prevent congestion collapse, provide fairness to competing flows and optimize transport performance indexes such as throughput, loss and delay. The literature abounds with papers on this subject; there are papers on high-level models of the flow of packets through the network, and on specific network architectures.
Despite this vast literature, congestion control in telecommunication networks struggles with two major problems that are not completely solved. The first is the time-varying delay between the control point and the traffic sources. The second is related to the possibility that the traffic sources do not follow the feedback signal; this may happen because some sources are silent as they have nothing to transmit. The best-effort Internet was originally designed for a cooperative environment; it is still mainly dependent on the TCP congestion control algorithm at terminals, supplemented with load shedding [1] at congested links. This model is called the Terminal Dependent Congestion Control case.
Core-Stateless Fair Queuing (CSFQ) [3] sets up an open-loop control system at the network layer: it inserts a label with the flow arrival rate into the packet header at edge routers, and core routers drop packets based on this rate label if congestion happens. CSFQ was the first to achieve approximately fair bandwidth allocation among flows with O(1) complexity at core routers.
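The core-router side of this idea can be sketched as follows. The probabilistic drop against the edge-assigned rate label is the standard CSFQ rule; the fair-rate estimation itself, which is the involved part of CSFQ, is omitted here and treated as a given input.

```python
import random

# CSFQ-style drop decision at a core router: a packet labelled with arrival
# rate `label_rate` is dropped with probability max(0, 1 - fair_rate/label_rate),
# so flows sending above the fair share are policed down to it on average.
def csfq_drop(label_rate: float, fair_rate: float) -> bool:
    if label_rate <= fair_rate:
        return False                        # within fair share: always forward
    p_drop = 1.0 - fair_rate / label_rate   # drop exactly the excess share
    return random.random() < p_drop

print(csfq_drop(label_rate=2.0e6, fair_rate=1.0e6))  # ~50% drop probability
```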
According to a Cache Logic report, P2P traffic was 60% of all Internet traffic in 2004, of which BitTorrent [4] was responsible for about 30%, although the report generated quite a lot of discussion around the real numbers. In networks with P2P traffic, CSFQ can provide fairness to competing flows, but unfortunately this is not what end-users and operators really want. Token-Based Congestion Control (TBCC) [5] restricts the total token resource consumed by an end-user, so no matter how many connections the end-user has set up, it cannot obtain extra bandwidth resources when TBCC is used.

In this paper a new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed. In this method the edge and core routers write a measure of the quality of service guaranteed by the router as a digital number in the Option Field of the packet datagram; this is called a token. The token is read by the routers along the path and interpreted, as its value gives a measure of the congestion [2], especially at the edge router. Based on the token number, the edge router at the source reduces the congestion on the path. In Token-Limited Congestion Control (TLCC) [9], the inter-domain router restricts the total output token rate to the peer domain. When the output token rate exceeds the threshold, TLCC decreases the Token-Level of output packets, and the output token rate then decreases.
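A hedged sketch of this TLCC behaviour at an inter-domain router is given below. The class name, the 8-bit level range and the linear step adjustment are illustrative assumptions; the paper only states that the Token-Level is lowered when the output token rate exceeds the threshold.

```python
# Sketch of TLCC rate limiting: when the measured output token rate exceeds
# a configured threshold, lower the Token-Level stamped on outgoing packets
# so that the aggregate token rate to the peer domain falls back.
class TokenLimiter:
    def __init__(self, rate_threshold: float, max_level: int = 255):
        self.rate_threshold = rate_threshold
        self.max_level = max_level
        self.token_level = max_level        # Token-Level written into packets

    def update(self, measured_token_rate: float) -> int:
        if measured_token_rate > self.rate_threshold and self.token_level > 0:
            self.token_level -= 1           # throttle: cheaper tokens leave
        elif measured_token_rate < self.rate_threshold:
            self.token_level = min(self.token_level + 1, self.max_level)
        return self.token_level

limiter = TokenLimiter(rate_threshold=1.0e6)
print(limiter.update(measured_token_rate=1.2e6))  # level drops toward 254
```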

Fig 1. Architecture


2. RELATED WORK

The basic idea of a peer-to-peer network is to have peers participate in an application-level overlay network and operate as both clients and servers. A number of approaches for queue management at Internet gateways have been studied previously. Droptail gateways are used almost universally in the current Internet because of their simplicity. A droptail gateway drops an incoming packet only when the buffer becomes full, thus providing congestion notification to protocols like TCP. While simple to implement, it distributes losses among the flows arbitrarily [5], often resulting in bursts of losses from a single TCP connection that reduce its window sharply; thus the flow rate, and consequently the throughput of that flow, drops. Tail dropping also results in multiple connections simultaneously suffering losses, leading to global synchronization [6]. Random Early Detection (RED) addresses some of the drawbacks of droptail gateways [11][12]. The RED gateway drops incoming packets with a dynamically computed probability when the exponentially weighted moving average queue size avg_q exceeds a threshold. In [6], the author does per-flow accounting while maintaining only a single queue, and suggests changes to the RED algorithm to ensure fairness and to penalize misbehaving flows. A maximum limit is placed on the number of packets a flow can have in the queue.

Besides this, it also maintains the per-flow queue use. The drop-or-accept decision for an incoming packet is then based on the average queue length and the state of that flow. It also keeps track of flows which consistently violate the limit by maintaining a per-flow variable called strike, and penalizes flows which have a high value of strike. The intention is that this variable becomes high for non-adaptive flows, so that they are penalized aggressively. It has been shown through simulations [7] that FRED fails to ensure fairness in many cases. CHOKe [8] is an extension of the RED protocol. It does not maintain any per-flow state and works on the good heuristic that a flow sending at a high rate is likely to have more packets in the queue during a time of congestion. It decides to drop a packet during congestion if, in a random toss, it finds another packet of the same flow. In [9], the authors establish how rate guarantees can be provided simply by using buffer management, and show that the buffer management approach is indeed capable of providing reasonably accurate rate guarantees and fair distribution of excess resources.
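The RED drop decision described above can be sketched compactly. The threshold values and the EWMA weight below are illustrative, and the count-based spacing of drops used in full RED is omitted for brevity:

```python
import random

# Simplified RED gateway: drop probability grows linearly between the
# thresholds as the EWMA of the queue size rises.
W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.1
avg_q = 0.0

def red_arrival(queue_len: int) -> bool:
    """Return True if the arriving packet should be dropped."""
    global avg_q
    avg_q = (1 - W_Q) * avg_q + W_Q * queue_len   # EWMA of the queue size
    if avg_q < MIN_TH:
        return False                              # below min threshold: no drops
    if avg_q >= MAX_TH:
        return True                               # forced drop region
    p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p                    # probabilistic early drop

print(red_arrival(queue_len=12))
```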

3. Core Stateless Fair Queuing

In the proposed work, the model called the Terminal Dependent Congestion Control case is considered: best-effort service in the Internet was originally designed for a cooperative environment, and congestion control is still mainly dependent on the TCP congestion control algorithm at terminals, supplemented with load shedding [13][14] at congested links, as shown in Figure 2.
In high-speed networks, Core-Stateless Fair Queuing (CSFQ) enhances fairness through an open-loop control system at the network layer, which inserts the label of the flow arrival rate into the packet header at edge routers and drops packets at core routers based on the rate label if congestion happens. At the core routers, CSFQ is the first to achieve approximately fair bandwidth allocation among flows with O(1) complexity.

CSFQ can provide fairness to competing flows in networks with P2P traffic, but unfortunately this is not what end-users really want. Token-Based Congestion Control (TBCC) restricts the total token resource consumed by an end-user, so no matter how many connections the end-user has set up, it cannot obtain extra bandwidth resources. Self-Verifying CSFQ tries to extend CSFQ across the domain border: it randomly selects a flow [15], re-estimates the flow's rate, and then checks whether the re-estimated rate is consistent with the label on the flow's packets. Consequently, Self-Verifying CSFQ puts a heavy load on the border router and renders weighted CSFQ null and void.

The congestion control architecture Re-feedback aims to provide a fixed cost to end-users and bulk inter-domain congestion charging to network operators. Re-feedback not only demands very high complexity to identify malignant end-users; it is also difficult for it to provide fixed congestion charging at inter-domain interconnections with low complexity. There are three types of inter-domain interconnection policies: Internet Exchange Points [16], private peering and transit. In private peering policies, Sender Keeps All (SKA) peering arrangements are those in which traffic is exchanged between two domains without mutual charges. As Re-feedback is based on congestion charges to the peer domain, it is difficult for Re-feedback to support the requirements of SKA.
The modules of the proposed work are:

- NETWORK CONGESTION
- STABLE TOKEN-LIMITED CONGESTION CONTROL (STLCC)
- TOKEN
- CORE ROUTER
- EDGE ROUTER

Network Congestion: Congestion occurs when the number of packets being transmitted through the network exceeds the packet handling capacity of the network. Congestion control aims to keep the number of packets below the level at which performance falls off dramatically.

Stable Token-Limited Congestion Control (STLCC): STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes packet loss to the network edge and improves overall network performance. To solve the oscillation problem, STLCC integrates the algorithms of TLCC and XCP [10]. In STLCC, the output rate of the sender is controlled using the XCP algorithm, so there is almost no packet loss at the congested link. At the same time, the edge router allocates all the access token resources equally to the incoming flows. When congestion happens, the incoming token rate increases at the core router, and the congestion level of the congested link increases as well. Thus STLCC can measure the congestion level analytically and then allocate network resources accordingly.

Token: A new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed. In this method the edge and core routers write a measure of the quality of service guaranteed by the router as a digital number in the Option Field of the packet datagram; this is called a token. The token is read by the routers along the path and interpreted, as its value gives a measure of the congestion, especially at the edge routers. Based on the token numbers, the edge router at the source reduces the congestion on the path.

Core Router: A core router is a router designed to operate in the Internet backbone (or core). To fulfill this role, a router must be able to support multiple telecommunication interfaces of the highest speeds in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols used in the backbone. A core router is distinct from an edge router.

Edge Router: Edge routers sit at the edge of a backbone network and connect to the core routers. The token is read by the path routers and interpreted, as its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge router at the source reduces the congestion on the path.

4. RESULTS
Packets of Edge Router: (simulation output figure)

Edge Router 3: (simulation output figure)


CONCLUSION:
The architecture of Token-Based Congestion Control (TBCC), which provides fair bandwidth allocation to end-users in the same domain, was introduced, and two congestion control algorithms, CSFQ and TBCC, were evaluated. STLCC was presented, and a simulation was designed to demonstrate its validity. The Unified Congestion Control Model, which is the abstract model of STLCC, CSFQ and Re-feedback, was also presented. To inter-connect two TBCC domains, an inter-domain router is added to the TBCC system. To support SKA arrangements, the inter-domain router should limit its output token rate to the rate of the other domain and police the incoming token rate from peer domains.

REFERENCES:
[1] Andrew S. Tanenbaum, Computer Networks, Prentice-Hall International, Inc.
[2] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, August 1993.
[3] Ion Stoica, Scott Shenker, Hui Zhang, "Core-Stateless Fair Queueing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High Speed Networks", in Proc. of SIGCOMM, 1998.
[4] D. Qiu and R. Srikant, "Modeling and performance analysis of BitTorrent-like peer-to-peer networks", in Proc. of SIGCOMM, 2004.
[5] Zhiqiang Shi, "Token-based congestion control: Achieving fair resource allocations in P2P networks", Innovations in NGN: Future Network and Services, First ITU-T Kaleidoscope Academic Conference (K-INGN), 2008.
[6] I. Stoica, H. Zhang, S. Shenker, "Self-Verifying CSFQ", in Proceedings of INFOCOM, 2002.
[7] Bob Briscoe, "Policing Congestion Response in an Internetwork using Re-feedback", in Proc. ACM SIGCOMM '05, 2005.
[8] Bob Briscoe, "Re-feedback: Freedom with Accountability for Causing Congestion in a Connectionless Internetwork", http://www.cs.ucl.ac.uk/staff/B.Briscoe/projects/e2ephd/e2ephd_y9_cutdown_appxs.pdf
[9] Zhiqiang Shi, Yuansong Qiao, Zhimei Wu, "Congestion Control with the Fixed Cost at the Domain Border", Future Computer and Communication (ICFCC), 2010.
[10] Dina Katabi, Mark Handley, and Charles Rohrs, "Congestion Control for High Bandwidth-Delay Product Networks", ACM SIGCOMM 2002, August 2002.
[11] Abhay K. Parekh, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case", IEEE/ACM Transactions on Networking, vol. 1, no. 3, June 1993.
[12] Sally Floyd, Van Jacobson, "Link-sharing and Resource Management Models for Packet Networks", IEEE/ACM Transactions on Networking, vol. 3, no. 4, 1995.
[13] John Nagle, "Congestion Control in IP/TCP Internetworks", RFC 896, January 1984.
[14] Sally Floyd and Kevin Fall, "Promoting the Use of End-to-End Congestion Control in the Internet", IEEE/ACM Transactions on Networking, August 1999.
[15] V. Jacobson, "Congestion Avoidance and Control", SIGCOMM Symposium on Communications Architectures and Protocols, pages 314-329, 1988.
[16] http://www.isi.edu/nsnam/ns/

A Design for Secure Data Sharing in Cloud
Devi D¹, Arun P S²

¹ Research Scholar (M.Tech), Dept. of Computer Science and Engineering, Sree Buddha College of Engg, Alappuzha, Kerala, India
² Assistant Professor, Dept. of Computer Science and Engineering, Sree Buddha College of Engg, Alappuzha, Kerala, India
E-mail: devidharman@gmail.com

Abstract - Cloud computing, which enables on-demand network access to a shared pool of resources, is the latest trend in today's IT industry. Among the different services provided by the cloud, the cloud storage service allows data owners to store and share their data through the cloud and thus frees them from the burden of storage management. But since the owners lose physical control over their outsourced data, many privacy and security concerns arise. A number of attribute-based encryption schemes have been proposed to provide confidentiality and access control for cloud data storage where standard encryption schemes face difficulties. Among them, Hierarchical Attribute Set Based Encryption (HASBE) provides scalable, flexible and fine-grained access control as well as easy user revocation. It is an extended form of Attribute Set Based Encryption (ASBE) with a hierarchical structure of users. Regarding integrity and availability, HASBE does not provide the data owner with the ability to check for missing or corrupted outsourced data. So, this paper extends HASBE with a privacy-preserving public auditing concept, which additionally allows owners to securely ensure the integrity of their data in the cloud. We use the homomorphic linear authenticator technique for this purpose.

Keywords - Cloud Computing, Access Control, Personal Health Record, HASBE, Integrity, TPA, Homomorphic Linear Authenticator.
INTRODUCTION
Cloud computing is a general term for anything that involves delivering hosted services over the Internet. Three distinct characteristics differentiate a cloud service from traditional hosting. It is sold on demand, giving the cloud consumer the freedom to self-provision IT resources; it is elastic, which means that at any given time a user can have as much or as little of a service as they want; and the service is fully managed by the provider, so the consumer needs nothing but a personal computer and Internet access. Other important characteristics of the cloud are measured usage and resilient computing. With measured usage, the cloud keeps track of the usage of its IT resources, and the consumer pays only for what they actually use. For resilient computing, the cloud distributes redundant implementations of IT resources across physical locations. IT resources can be pre-configured so that if one becomes imperfect, processing is automatically handed over to another redundant implementation.
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the major service-oriented cloud computing models. Cloud storage is an important service of cloud computing which allows data owners to move data from their local computing systems to the cloud. The physical storage spans multiple servers and locations. People and organizations buy or lease storage capacity from the providers to store end-user, organization, or application data. Cloud storage has several advantages over traditional data storage: relief from the burden of storage management, universal data access with location independence, and avoidance of capital expenditure on hardware, software and personnel maintenance. It also allows sharing of data with others in a flexible manner. Moving the data to an off-site storage system maintained by a third party (the cloud service provider), over which the data owner does not have any control, poses many data security challenges: privacy - the risk of unauthorized disclosure of the user's sensitive data by the service providers - and data integrity - the validity of outsourced data given its Internet-based storage and management -
etc. In a cloud environment, data confidentiality is not the only data security requirement. Since the cloud allows data sharing, great attention must be given to fine-grained access control over the stored data.
The traditional method to provide confidentiality to such sensitive data is to encrypt it before uploading to the cloud. In a traditional public key infrastructure, each user encrypts his file and stores it on the server, and the decryption key is disclosed only to the particular authorized user. Regarding confidentiality, this scheme is secure, but it requires efficient key management and distribution, which has proven to be difficult. Also, as the number of users in the system becomes large, this method is not efficient. These limitations, and the need for fine-grained access control for data sharing, led to the introduction of new access control schemes based on attribute-based encryption (ABE) [3]. Unlike traditional cryptography, where the intended recipient's identity is clearly known, in an attribute-based system one only needs to specify the attributes or credentials of the recipient(s). Here ciphertexts are not encrypted to one particular user as in traditional public key cryptography, which also enables handling unknown users. Different types of ABE schemes have been proposed to provide fine-grained access control to data stored in the cloud. But they could not satisfy requirements such as scalability (the ability to handle an increasing number of system users without degrading efficiency), flexibility (support for complex access control policies with great ease) and easy user revocation (avoiding re-encryption of data and re-distribution of new access keys upon the revocation of each user). These limitations of ABE schemes are addressed by Hierarchical Attribute Set Based Encryption (HASBE) [1], an extension of Attribute Set Based Encryption (ASBE). HASBE achieves scalability due to its hierarchical structure and also inherits fine-grained access control and flexibility in supporting compound attributes from ASBE [7]. Another highlighting feature of HASBE is its easy user revocation method. In addition to these access control needs, the data owners want to know the integrity of the data which they uploaded to the cloud. HASBE does not include an integrity checking facility, and this is the major drawback of the scheme. This paper integrates an integrity checking module based on privacy-preserving public auditing with the HASBE scheme and thus provides more security to the system.
RELATED WORKS

This section reviews the concept of attribute-based encryption and provides a brief overview of Attribute Set Based Encryption (ASBE) and Hierarchical Attribute Set Based Encryption (HASBE). All these schemes were proposed as access control mechanisms for cloud storage.
Sahai and Waters proposed attribute-based encryption to provide a better solution for access control. It used user identities as attributes, and these attributes play an important role in encryption and decryption. The primary ABE scheme used a threshold policy for access control, but it lacked expressibility. ABE schemes are further classified into key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE), in which the concept of access policies is introduced. In KP-ABE [4] access policies are associated with the user's private key, while in CP-ABE [5] they are in the ciphertext. In ABE schemes, ciphertexts are not encrypted to one particular user as in traditional public key cryptography. Rather, both ciphertexts and users' decryption keys are associated with a set of attributes or a policy over attributes. A user is able to decrypt a ciphertext only if there is a match between the attributes in the decryption key and the ciphertext.
In KP-ABE, since the access policy is built into the user's private key, the data owner who encrypts the data cannot choose who can decrypt it; he has to trust the key issuer. In CP-ABE, since users' decryption keys are associated with a set of attributes, it is more natural to apply. These schemes provided fine-grained access control to sensitive data in the cloud but failed in handling complex access control policies. They lack scalability, and in case a previously legitimate user needs to be revoked, related data has to be re-encrypted. Data owners need to be online all the time so as to encrypt or re-encrypt data.
In the CP-ABE scheme, decryption keys only support user attributes that are organized logically as a single set, so users can only use all possible combinations of attributes in a single set issued in their key to satisfy a policy. To solve this problem, Bobba [7]
introduced ciphertext-policy attribute-set-based encryption (CP-ASBE, or ASBE for short). ASBE is an extended form of CP-ABE which organizes user attributes into a recursive set structure and allows users to impose dynamic constraints on how those attributes may be combined to satisfy a policy. It groups user attributes into sets such that those belonging to a single set have no restrictions on how they can be combined. Similarly, multiple numerical assignments for a given attribute can be supported by placing each assignment in a separate set.
To achieve scalability, flexibility, fine-grained access control and efficient user revocation, Hierarchical Attribute Set Based Encryption (HASBE), which extends the ciphertext-policy attribute set based encryption (CP-ASBE or ASBE) scheme, was proposed [1]. HASBE extends the ASBE algorithm with a hierarchical structure to improve scalability and flexibility while at the same time inheriting the fine-grained access control of ASBE. HASBE supports compound attributes due to flexible attribute set combinations, and it achieves efficient user revocation without requiring re-encryption because attributes are assigned multiple values.
The HASBE system consists of five types of parties: a cloud service provider, data owners, data consumers, a number of domain authorities, and a trusted authority. The trusted authority is the root authority and is responsible for managing top-level domain authorities. Each data owner/consumer is administrated by a domain authority. A domain authority is managed by its parent domain authority or by the trusted authority. Data owners encrypt their data files and store them in the cloud for sharing with data consumers. Data consumers download and decrypt the files stored in the cloud. Data owners, data consumers, domain authorities, and the trusted authority are organized in a hierarchical manner, and keys are delegated through this hierarchy.
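To make the delegation structure concrete, here is a minimal Python sketch of the party hierarchy. The key-derivation string is a stand-in for the pairing-based key material real HASBE uses, and all names are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Sketch of the HASBE party hierarchy: the trusted authority manages
# top-level domain authorities, which in turn administer owners/consumers.
@dataclass
class Authority:
    name: str
    master_key: str
    children: list = field(default_factory=list)

    def delegate(self, name: str, attributes: list) -> "Authority":
        # Keys are delegated down the hierarchy from the parent master key.
        child = Authority(name, master_key=f"key({self.master_key},{attributes})")
        self.children.append(child)
        return child

root = Authority("Trusted authority", master_key="MK0")
da = root.delegate("Domain authority", ["domain:medical"])
user = da.delegate("Data consumer", ["role:doctor", "dept:cardiology"])
print(user.master_key)
```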
PROBLEM STATEMENT
Even though the HASBE scheme achieves scalability, flexibility and fine-grained access control, it has no integrity mechanism to ensure that the data remains correct in the cloud; this is its major drawback. The data owners face a serious risk of corruption or loss of their data because of the lack of physical control over the outsourced data. In order to overcome this security risk, a privacy-preserving public auditing concept is proposed, which integrates a data integrity proof with the HASBE scheme.
OBJECTIVES
The data owners want to prevent the server and unauthorized users from learning the contents of their sensitive files. Each of them owns a privacy policy. In particular, the proposed scheme has the following objectives:
- Fine-grained access control: Different users can be authorized to read different sets of files.
- User revocation: Whenever necessary, a user's access privileges should be revoked from future access in an efficient and easy way.
- Flexible policy specification: Complex data access policies can be specified in a flexible manner.
- Scalability: To support a large and unpredictable number of users, the system should be highly scalable, in terms of complexity in key management, user management, and computation and storage.
- Enable users to ensure the integrity of the data they have outsourced:
o Public auditability: to allow a Third Party Auditor (TPA) to verify the correctness of the cloud data on demand without retrieving a copy of the whole data or introducing additional online burden to the cloud users.
o Storage correctness: to ensure that no cheating cloud server can pass the TPA's audit without indeed storing the user's data intact.
o Privacy preservation: to ensure that the TPA cannot derive the user's data content from the information collected during the auditing process.

METHODOLOGY
The entire system applies to the Personal Health Record (PHR), which is an electronic record of an individual's health information. An online PHR service [8-9] allows an individual to create, store, manage and share his personal health data in a centralized way. Since
cloud computing provides infinite computing resources and elastic storage, PHR service providers shift their data and applications to the cloud in order to lower their operational cost.
The overall methodology of this work can be divided into two parts: secure PHR sharing using HASBE, and secure data auditing. The architecture of secure PHR sharing is given in Figure 1 and that of secure data auditing in Figure 2.

A. Secure PHR Sharing

For secure PHR sharing, HASBE uses a hierarchical structure of system users. The hierarchy enables the system to handle an increasing number of users without degrading efficiency. PHR owners can upload their encrypted PHR files to cloud storage, and data consumers can download and decrypt the required files from the cloud. In this system, the PHR owners need not be online all the time, since they are not responsible for issuing decryption keys to data consumers; it is the responsibility of a domain authority to issue decryption keys to users under its domain. The system can be extended to any depth, and at the same level there can be more than one domain authority, so that no single authority becomes a bottleneck in handling a large number of system users. Here, the system under consideration uses a depth-2 hierarchy, and there are five modules for secure PHR sharing:
1. Trusted Authority Module
2. Domain Authority Module
3. Data Owner Module
4. Data Consumer Module
5. PHR Cloud Service Module


Fig 1: HASBE Architecture


1. Trusted Authority Module
The trusted authority is the root or parent authority. It is responsible for generating and distributing system parameters and root master keys, as well as authorizing the top-level domain authorities. In our system the Ministry of Health is the trusted authority. The major functions of the Ministry of Health are:
- The admin can log in from the home page and perform domain authority registration.
- Set up the system by generating a master secret key MK0 and a public key based on the universal set of system attributes.
- Generate a master key for a domain authority using the public key PK, the master key MK0 and the set of attributes corresponding to the domain authority.
2. Domain Authority Module
The Domain Authority (DA) is responsible for managing PHR owners and authorizing data consumers. In our system a single domain authority, called the National Medical Association (NMA), comes under the Ministry of Health.
The NMA first registers with the trusted authority. During registration, the attributes corresponding to the DA are specified, and a request for a domain key is sent to the trusted authority through web services. Only after receiving the domain key (the public key and the domain master key) can the DA authorize users in its domain.
The major functions of the NMA are:
o To provide the public key for the patients to perform attribute-based encryption.
o To log in and view the details of medical professionals.
o To provide attribute-based private keys for the medical professionals for decrypting the medical records.
o To perform user revocation.
3. Data Owner Module
In our system, patients are the data owners. A patient application allows the patient to interact with the PHR service provider. The main functions of this module are:
- Patients first register with the system and then log in.
- Patients can set access privileges (who can view the files) and upload encrypted files to the cloud.
- The patient application performs encryption in two stages. First the file is encrypted with AES; then the AES key is encrypted under the patient-specified policy with the public key provided by the NMA. This second stage corresponds to attribute set based encryption (a sketch of this hybrid scheme follows the module list below).
- The encrypted file, along with the encrypted AES key, is uploaded to the cloud.
4. Data Consumer Module
Medical professionals act as data consumers. Through the medical professional application, doctors interact with the PHR service provider.
- Each hospital administrator logs in and creates employees by entering their details. Registration details are also given to the NMA through web services.
- Doctors can later log in to the application using their username and password.
- The application allows doctors to view required patient details and download their files by interacting with the PHR service provider in the cloud through web services.
- The medical professional application performs decryption of files for each employee by requesting the corresponding private key, based on the attributes of the employee, from the NMA.
5. PHR Cloud Service Module
Responsible for storing encrypted files. It preprocesses the files to generate metadata for auditing purposes.
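The two-stage encryption performed by the patient application (see the Data Owner Module above) amounts to standard hybrid encryption. The sketch below shows its shape; the ABE step is a placeholder, since HASBE itself needs a pairing-based cryptography library, and the function names are illustrative assumptions:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def abe_encrypt(aes_key: bytes, policy: str, public_key) -> bytes:
    """Placeholder for HASBE/ASBE encryption of the AES key under the
    patient-specified attribute policy; a real system would call a
    pairing-based ABE library here."""
    raise NotImplementedError

def encrypt_phr(phr_bytes: bytes, policy: str, nma_public_key) -> dict:
    # Stage 1: encrypt the PHR file itself with a fresh AES-256 key.
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, phr_bytes, None)
    # Stage 2: encrypt the AES key under the access policy (attribute based).
    wrapped_key = abe_encrypt(aes_key, policy, nma_public_key)
    # Both parts are uploaded to the cloud together.
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_key": wrapped_key}
```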
A. Secure Data Auditing

Data auditing is performed by a Third Party Auditor (TPA) on behalf of the PHR service provider. From the cloud's point of view, the PHR service provider is the data owner; at the same time, the PHR service provider is the client of the TPA. It first registers with the TPA. The initial
verification details about uploaded files are given to the TPA through proper communication channels. Upon receiving an auditing delegation from the PHR service provider, the TPA interacts with the cloud and performs privacy-preserving public auditing. A Homomorphic Linear Authenticator (HLA) is used to allow the TPA to perform integrity checking without retrieving the original data content. The TPA issues challenges to the cloud indicating random file blocks to be checked; the cloud generates a data-correctness proof, and the TPA verifies it and reports the result.
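
The challenge-proof-verify flow can be mimicked with a deliberately simplified sketch: plain SHA-256 digests of randomly sampled blocks stand in for the homomorphic linear authenticator of the actual scheme, and the block size and sample count are illustrative assumptions.

import hashlib, random

BLOCK = 64 * 1024                        # assumed block size

def make_tags(data):
    # Verification details handed to the TPA when the file is uploaded.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def tpa_challenge(n_blocks, sample=3):
    # The TPA picks random block indices to be checked.
    return random.sample(range(n_blocks), min(sample, n_blocks))

def cloud_proof(data, challenge):
    # The cloud answers with a correctness proof for the chosen blocks.
    return [hashlib.sha256(data[i * BLOCK:(i + 1) * BLOCK]).hexdigest()
            for i in challenge]

data = b"x" * (5 * BLOCK)                # stand-in for an uploaded PHR file
tags = make_tags(data)
challenge = tpa_challenge(len(tags))
proof = cloud_proof(data, challenge)
print(all(tags[i] == p for i, p in zip(challenge, proof)))  # True if intact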

Fig 2: Auditing Architecture

CONCLUSION
In this paper, we proposed a privacy-preserving public auditing concept for the HASBE scheme, to overcome its main drawback: the absence of an integrity-assurance method. Even though the HASBE scheme achieves scalability, flexibility and fine-grained access control, it fails to prove data integrity in the cloud. Since the data owner has no physical control over his outsourced data, such auditing is necessary to prevent the cloud service provider from hiding data loss or corruption from the owner. The audit result from the TPA would also be beneficial for cloud service providers to improve their cloud-based service platforms, and users can give their data to the cloud without worrying about its integrity. The proposed system preserves all the advantages of HASBE and adds the additional quality of integrity proof.

REFERENCES:
[1] Zhiguo Wan, June Liu, and Robert H. Deng, "HASBE: A Hierarchical Attribute-Set-Based Solution for Flexible and Scalable Access Control in Cloud Computing", IEEE Transactions on Information Forensics and Security, Vol. 7, No. 2, April 2012.
[2] Kangchan Lee, "Security Threats in Cloud Computing Environments", International Journal of Security and Its Applications, Vol. 6, No. 4, October 2012.
[3] Cheng-Chi Lee, Pei-Shan Chung, and Min-Shiang Hwang, "A Survey on Attribute-based Encryption Schemes of Access Control in Cloud Environments", International Journal of Network Security, Vol. 15, No. 4, pp. 231-240, July 2013.
[4] Vipul Goyal, Omkant Pandey, Amit Sahai, Brent Waters, "Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data".
[5] John Bethencourt, Amit Sahai, Brent Waters, "Ciphertext-Policy Attribute-Based Encryption", in Proc. IEEE Symp. Security and Privacy, Oakland, CA, 2007.
[6] Guojun Wang, Qin Liu, Jie Wu, Minyi Guo, "Hierarchical attribute-based encryption and scalable user revocation for sharing data in cloud servers", www.elsevier.com/locate/cose.
[7] Rakesh Bobba, Himanshu Khurana and Manoj Prabhakaran, "Attribute-Sets: A Practically Motivated Enhancement to Attribute-Based Encryption", University of Illinois at Urbana-Champaign, July 27, 2009.
[8] Ming Li, Shucheng Yu, Yao Zheng, Kui Ren, and Wenjing Lou, "Scalable and Secure Sharing of Personal Health Records in Cloud Computing using Attribute-based Encryption", IEEE Transactions on Parallel and Distributed Systems, 2012.
[9] Chunxia Leng, Huiqun Yu, Jingming Wang, Jianhua Huang, "Securing Personal Health Records in Clouds by Enforcing Sticky Policies", TELKOMNIKA, Vol. 11, No. 4, April 2013, pp. 2200-2208, e-ISSN: 2087-278X.
[10] Cong Wang, Qian Wang, Kui Ren, Wenjing Lou (2010), "Privacy Preserving Public Auditing for Data Storage Security in Cloud Computing".
[11] Jachak K. B., Korde S. K., Ghorpade P. P. and Gagare G. J., "Homomorphic Authentication with Random Masking Technique Ensuring Privacy & Security in Cloud Computing", Bioinfo Security Informatics, Vol. 2, No. 2, pp. 49-52, ISSN 2249-9423, 12 April 2012.
[12] Devi D, "Scalable and Flexible Access Control with Secure Data Auditing in Cloud Computing", (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 5 (3), 2014, pp. 4118-4123, ISSN: 0975-9646.

Design of Impact Load Testing Machine for COT
Sandesh G. Ughade¹, Dr. A. V. Vanalkar², Prof. P. G. Mehar²

¹Research Scholar (P.G.), Dept. of Mechanical Engg., KDK College of Engg., Nagpur, R.T.M. Nagpur University, Maharashtra, India
²Assistant Professor, Dept. of Mechanical Engg., KDK College of Engg., Nagpur, R.T.M. Nagpur University, Maharashtra, India
E-mail: Sandesh.ughade@gmail.com
Abstract: This paper describes the design of a new pneumatically actuated load-application machine that has been specifically designed for studying the dynamic mechanical behaviour of a COT (wooden bed). Such equipment is used to generate simple, measurable fracture processes under moderate to fast loading rates, which otherwise produce complicated crack patterns that are difficult to analyze. We are developing the machine as a facility to provide experimental data to validate numerical predictions of impact load on a COT that absorbs kinetic energy during a collision. The machine consists of two main parts, the mechanical structure and the data acquisition system. The development process included the design, fabrication, and functional testing of the machine.
Keywords: component; load; impact; design

I. INTRODUCTION

The starting point for the determination of many engineering timber properties is the standard short-duration test, where failure is expected within a few minutes. During the last decades much attention has been given to the behaviour of timber and timber joints with respect to the damaging effect of sustained loads, the so-called duration-of-load effect. A typical wooden structure of this kind is a cot. To increase human safety, some wooden structural parts are designed to absorb kinetic energy during a collision. These components are usually in the form of columns which undergo progressive plastic deformation during the collision. The impact force, i.e. the force needed to deform the cot, determines the deceleration of the load during the collision and indicates the capability of the cot to absorb kinetic energy. The value of the impact force is determined by the geometry and the material of the cot. For this purpose an advanced impact testing machine is required for checking the adult sleeping cot. Impacts are made at different desired positions (depending on the size of the cot and locations specified by quality engineers) with a specified load. This ensures the cot is safe and ready for customer use. The test also provides assurance of mechanical safety and prevents serious injury through normal functional use as well as misuse that might reasonably be expected to occur. For this purpose, the impact testing machine for testing adult sleeping cots is fabricated in this project. Developing the interface for controlling the machine is one of the most important parts of the control system, which includes software analysis, design, development and testing. Here we develop a program for controlling the fabricated wireless impact testing machine for testing sleeping cots.

II. DESIGN PROCEDURE

The aim of this section is to give complete design information about the impact testing machine, including the explanations and other parameters related to the project. A literature review has been carried out, with references from various sources such as journals, theses and design data books, to collect information related to this project.



A. Design consideration
- Considered element
- Standard size of COT
- Material of COT: plywood
- Maximum weight applied to the surface of the COT
- Height of impact
B. Design calculations
Determination of impact force

Impact force = (1/2 m v²) / d
Where,
m = mass
v = velocity
d = distance travelled by the material after impact
d = W L³ / (48 E I) (data book, Table I-7)
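
As a numerical illustration of these relations (all values below are assumed for the sketch, not taken from the paper):

m = 50.0            # impactor mass, kg (assumed)
v = 2.0             # velocity at impact, m/s (assumed)
W = m * 9.81        # equivalent static load, N
L = 1.8             # supported span, m (assumed)
E = 8.0e9           # elastic modulus of plywood, Pa (assumed)
I = 2.0e-6          # second moment of area, m^4 (assumed)

d = W * L**3 / (48 * E * I)      # mid-span deflection, d = WL^3/(48EI)
F = 0.5 * m * v**2 / d           # impact force = kinetic energy / d
print(f"d = {d * 1000:.2f} mm, F = {F / 1000:.1f} kN")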

C. Cylinder specification
Force acting by cylinder
F = DLP
Where,
D = bore diameter
Fig.-1: Modeling of machine
L = length of stroke
P = pressure
D. Design of Plate
T = thickness of plate
D = circular diameter of plate

Consider shear of the plate at the joint.

Shear stress produced = Fc / Aj

Material used for plate: mild steel
Yield point stress (Syt)
Factor of safety (fos)

Shear strength = (0.5 Syt) / fos

If the shear stress induced < permissible stress,
then the plate is safe in compression and shear.

E. Design of Lead Screw

Type of thread used: square thread
Nominal diameter of square-thread screw: d
Material for lead screw: hardened steel - cast iron
Syt = 330 N/mm²
Coefficient of friction, μ = 0.15

Therefore force on the lead screw = F(max) + self-weight of the screw and impactor assembly.

Lead angle, α = 15°
Nut material: FG200
Sut = 200 N/mm²

Torque required for the motion of the impactor (T):

T = P × dm / 2
Where,
dm = mean diameter of lead screw, dm = d - 0.5p
d = outside diameter of lead screw
p = pitch
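
A short numerical sketch of these relations, using the simplified torque expression T = P × dm/2 given above; the load, pitch, and diameter are illustrative assumptions (a full design would also include the thread-friction term).

P_load = 1500.0     # axial force on the lead screw, N (assumed)
d_out = 22.0        # outside diameter of the square thread, mm (assumed)
pitch = 5.0         # thread pitch, mm (assumed)

dm = d_out - 0.5 * pitch     # mean diameter, dm = d - 0.5p
T = P_load * dm / 2.0        # torque per the relation above, N*mm
print(f"dm = {dm:.1f} mm, T = {T / 1000:.2f} N*m")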


F. Design of Screw

σc = P / ((π/4) × dc²)
Where,
σc = direct compressive stress
dc = root diameter of the square-thread screw

σc = Syt / FOS
Let FOS = 2

Therefore we take dc from the data book for safety.

Also, torsional shear stress:

τs = 16T / (π dc³) ----- (1)

τs (permissible) = 0.5 Syt / FOS = 0.5 Syt / 2

The screw will also tend to shear off the threads at the root diameter.

Shear area of one thread = π × dc × t × z
Where
z = number of threads in engagement with the nut.

Transverse shear:
τs = P / (π × dc × t × z)
As t = p/2,

we take the standard size of z.

G. Design of nut

The nut is subjected to shearing due to P.

Total shear area of nut = π × d × t × z
Also
τn = 0.5 Sut / FOS

t = pitch / 2

z = number of threads;
therefore we take the standard value of z from the data book for safety.

Length of nut = 5 × pitch

H. Design of compression spring

P = Force on each spring

δ = deflection of spring
Therefore
Stiffness of spring = P / δ

Material of the spring: cold-drawn steel.

Ultimate tensile strength, Sut = 1050 N/mm²

Modulus of rigidity, G = 81370 N/mm²

Therefore permissible shear stress = 0.30 Sut

Assume spring index = C

Therefore, Wahl shear factor,

K = (4C - 1)/(4C - 4) + 0.615/C

We know,
Shear stress = K × 8PC / (π d²)

Coil diameter of spring, D = C × d

Number of active coils (N):
We know,
δ = 8 P D³ N / (G d⁴)

The spring used has square and ground ends.

Therefore
Nt = total number of coils = N + 2

Solid length of spring = Nt × d

Assume a gap of 2 mm between adjacent coils at total compression.
Therefore,
Total gap = (Nt - 1) × 2

Free length of the spring:

Free length = solid length + total gap + δ

Pitch of the coil (p):

p = free length / (Nt - 1)

We know that when
free length / mean coil diameter (D) < 2.6, a guide is not necessary;
free length / mean coil diameter (D) ≥ 2.6, a guide is required.
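
A worked check of the spring relations above; the wire diameter, spring index, load, and number of active coils are illustrative assumptions.

import math

P = 400.0      # force on each spring, N (assumed)
d = 6.0        # wire diameter, mm (assumed)
C = 6.0        # spring index (assumed)
N = 8          # number of active coils (assumed)
G = 81370.0    # modulus of rigidity, N/mm^2
Sut = 1050.0   # ultimate tensile strength, N/mm^2

K = (4 * C - 1) / (4 * C - 4) + 0.615 / C       # Wahl shear factor
tau = K * 8 * P * C / (math.pi * d ** 2)        # induced shear stress
tau_perm = 0.30 * Sut                           # permissible shear stress
D = C * d                                       # mean coil diameter
defl = 8 * P * D ** 3 * N / (G * d ** 4)        # axial deflection, mm
Nt = N + 2                                      # square and ground ends
free_len = Nt * d + (Nt - 1) * 2 + defl         # solid length + gaps + defl
print(tau < tau_perm, free_len / D)             # safe? guide needed if >= 2.6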


III. FABRICATION


Mechanical components
- C-channel (for column and beam)
- Lead screw (3 nos.)
- Pneumatic cylinder (1 no.)
- Guide ways (3 nos.)
- Compressor (1 no.)
- Bearings
- Springs (3 nos.)
- Base plate of impactor (1 no.)


IV. CONCLUSION

The machine uses automation for testing and requires very little human assistance, which further reduces the labour cost of quality testing of sleeping beds. Thus objectives such as testing the quality of an adult sleeping bed are achieved while reducing the human effort required.

REFERENCES:
[1] X. X. Zhang, G. Ruiz and Rena C. Yu, "A new drop-weight impact machine for studying fracture processes in structural concrete", Anales de Mecánica de la Fractura 25, Vol. 2 (2008).

[2] S. Elavenil and G. M. Samuel Knight, "Impact response of plates under drop-weight impact testing", Daffodil International University Journal of Science and Technology, Vol. 7, Issue 1, January 2012.

[3] Leonardo Gunawan, Tatacipta Dirgantara, and Ichsan Setya Putra, "Development of a Dropped Weight Impact Testing Machine", International Journal of Engineering & Technology IJET-IJENS, Vol. 11, No. 06.

[4] Siewert, T. A., Manahan, M. P., McCowan, C. N., Holt, J. M., Marsh, F. J., and Ruth, E. A., "The History and Importance of Impact Testing", Pendulum Impact Testing: A Century of Progress, ASTM STP 1380, T. A. Siewert and M. P. Manahan, Sr., Eds., American Society for Testing and Materials, West Conshohocken.








Heat Treating of Non-Ferrous Alloys
Jirapure S. C.¹, Borade A. B.²

¹Assistant Professor, Mechanical Engg. Dept., JD Institute of Engg. & Tech., Yavatmal (MS), India
²Professor and Head, Mechanical Engg. Dept., JD Institute of Engg. & Tech., Yavatmal (MS), India
E-mail: Sagarjirapure@rediffmail.com
Abstract— Non-ferrous alloys are among the most versatile engineering materials. The combination of physical properties such as strength, ductility, conductivity, corrosion resistance and machinability makes them suitable for a wide range of applications. These properties can be further enhanced by variations in composition and manufacturing processes. The present paper gives a clear idea of the various strengthening processes for non-ferrous alloys and how to prepare them to meet the needs of the user.
Keywords— hardening, heat treatment, properties, processes, grain structure, solid solution
INTRODUCTION
The hardenability of a steel is broadly defined as the property which determines the depth and distribution of hardness induced by quenching; in other words, hardenability is the depth and evenness of hardness of a steel upon quenching from austenite [1].
Heat treatment is an operation or combination of operations involving heating at a specific rate, soaking at a temperature for a
period of time and cooling at some specified rate. The aim is to obtain a desired microstructure to achieve certain predetermined
properties (physical, mechanical, magnetic or electrical) [3].
Heat treating is a group of industrial and metalworking processes used to alter the physical, and sometimes chemical, properties of a material. The most common application is metallurgical. Heat treatments are also used in the manufacture of many other materials, such as glass. Heat treatment involves the use of heating or chilling, normally to extreme temperatures, to achieve a desired result such as hardening or softening of a material. It is noteworthy that while the term heat treatment applies only to processes where the heating and cooling are done for the specific purpose of altering properties intentionally, heating and cooling often occur incidentally during other manufacturing processes such as hot forming or welding [4].
OBJECTIVE
- To increase strength, hardness and wear resistance
- To increase ductility and softness
- To increase toughness
- To obtain fine grain size
- To remove internal stresses induced by differential deformation from cold working and by non-uniform cooling from high temperature during casting and welding
- To improve machinability
- To improve the cutting properties of tool steels
- To improve surface properties
- To improve electrical properties
- To improve magnetic properties
PHYSICAL PROCESS
Metallic materials consist of a microstructure of small crystals called grains. The nature of the grains (i.e. grain size and
composition) is one of the most effective factors that can determine the overall mechanical behavior of the metal. Heat treatment
provides an efficient way to manipulate the properties of the metal by controlling the rate of diffusion and the rate of cooling within
the microstructure. Heat treating is often used to alter the mechanical properties of an alloy, manipulating properties such as
the hardness, strength, toughness, ductility, and elasticity [7].
There are two mechanisms that may change an alloy's properties during heat treatment: the martensite mechanism, which causes the crystals to deform intrinsically, and the diffusion mechanism, which causes changes in the homogeneity of the alloy.
Non ferrous metals and alloys exhibit a martensite transformation when cooled quickly. When a metal is cooled very quickly, the
insoluble atoms may not be able to migrate out of the solution in time. This is called a diffusionless transformation. When the crystal
matrix changes to its low temperature arrangement, the atoms of the solute become trapped within the lattice. The trapped atoms
prevent the crystal matrix from completely changing into its low temperature allotrope, creating shearing stresses within the lattice.
When some alloys are cooled quickly, such as steel, the martensite transformation hardens the metal, while in others, like aluminum,
the alloy becomes softer [15].

Effect of Composition:
The specific composition of an alloy system will usually have a great effect on the results of heat treating. If the percentage of
each constituent is just right, the alloy will form a single, continuous microstructure upon cooling. Such a mixture is said to
be eutectoid. However, if the percentage of the solutes varies from the eutectoid mixture, two or more different microstructures will
usually form simultaneously. A hypoeutectoid solution contains less of the solute than the eutectoid mix, while a hypereutectoid
solution contains more [20].

Effect of Time and Temperature:
Proper heat treating requires precise control over temperature, time held at a certain temperature and cooling rate.
Most heat treatments begin by heating an alloy beyond the upper transformation (A3) temperature. The alloy will usually be held at
this temperature long enough for the heat to completely penetrate the alloy, thereby bringing it into a complete solid solution. Since a
smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are
often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing
too large. For instance, when steel is heated above the upper critical temperature, small grains of austenite form. These grow larger as
temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain size directly affects the
martensitic grain size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually
controlled to reduce the probability of breakage.
The diffusion transformation is very time dependent. Cooling a metal will usually suppress the precipitation to a much lower
temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled
quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is
highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can
be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the
martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other
microstructures can fully form, the transformation will usually occur at just under the speed of sound.
When austenite is cooled slow enough that a martensite transformation does not occur, the austenite grain size will have an
effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure.
When austenite is cooled extremely slowly, it will form large ferrite crystals. This microstructure is referred to as "spheroidite." If
cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form.
Similarly, these microstructures will also form if cooled to a specific temperature and then held there for a certain time.
Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce
a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold
worked. This cold working increases the strength and hardness of the alloy, and the defects caused by plastic deformation tend to
speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these
alloys will usually precipitate, although the process may take much longer. Sometimes these metals are then heated to a temperature
that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitation [14].

TECHNIQUES

Strain hardening
The phenomenon where ductile metals become strong and hard when they are deformed plastically is called strain hardening
(or) work hardening. The application of cold work, usually by rolling, forging or drawing operations, strengthens copper and its alloys,
while strength, hardness and elastic modulus increase and ductility decreases during this process. The effect of cold work can be
removed by annealing. Strain hardening is used for hardening/strengthening materials that are not responsive to heat treatment.

Solid solution hardening
Solid solution hardening of copper is a common strengthening method. In this method a small amount of alloying elements
such as zinc, aluminum, tin, nickel, silicon, beryllium etc. are added to the molten copper to completely dissolve them and to form a
homogeneous microstructure (a single phase) upon solidification. This is because stress fields generated around the solute atoms
present in the substitutional sites interact with the stress fields of moving dislocations, thereby increasing the stress required for plastic
deformation. Traditional Brasses and Bronzes fall into this category. It is to be noted that these alloys are not heat treatable.



Grain boundary hardening
In a poly-crystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have
varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain
boundaries act as an impediment to dislocation motion for the following two reasons: (a) a dislocation must change its direction of motion due to the differing orientation of grains, and (b) there is a discontinuity of slip planes from one grain to another. The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with average grain size. A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at the grain boundaries. This makes it more difficult for dislocations to move into adjacent grains. This relationship is called the Hall-Petch equation.
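
In its standard textbook form (stated here for completeness; this general relation is not reproduced from the paper itself), the Hall-Petch equation is

\sigma_y = \sigma_0 + k_y \, d^{-1/2}

where \sigma_y is the yield stress, \sigma_0 is the friction stress resisting dislocation motion, k_y is the strengthening coefficient, and d is the average grain diameter.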

Dual-phase hardening
Bronze is usually a single-phase alloy. Aluminium Bronze is a type of Bronze in which aluminium is the main alloying element added to copper, in contrast to standard Bronze (Cu and Sn) or Brass (Cu and Zn). A variety of aluminium Bronzes of differing compositions have found industrial use, with most ranging from 5 wt.% to 11 wt.% aluminium. Other alloying agents such as iron, nickel, manganese, and silicon are also sometimes added to aluminium Bronzes. When more than 10% aluminium is added, another phase forms. This second phase also contributes to the strengthening of the alloy.

Precipitation hardening
Precipitation hardening refers to a process where a supersaturated solid solution is heated at a low temperature for a period
(aging) so as to allow the excess solute to precipitate out in the form of a second phase. This process is often used for Cu alloys
containing Be. Precipitation hardening has several distinct advantages. Many combinations of ductility, impact resistance, hardness,
conductivity and strength can be obtained by varying the heat treatment time and temperature. The Cu-Be alloy possesses a
remarkable combination of properties such as tensile strength, electrical conductivity and corrosion resistance and wear resistance.
They may be cast and hot- or cold-worked. Despite its excellent properties, the alloy is costly because of the addition of Be; moreover, Be is a material hazardous to health.

Order hardening
When the atoms of a disordered solid solution arrange themselves in an orderly manner at a lower temperature, an ordered structure forms. Lattice strain develops due to the ordered nature of the structure, and this strain contributes to the hardening and strengthening of these alloys.

New approach of hardening
The various new approaches of hardening of copper and its alloys are (a) Dispersion hardening/Metal matrix composites (b)
Surface modification and (c) Spinodal decomposition [17].

Dispersion hardening
Conventional strengthening mechanisms, such as cold working and precipitation hardening, are ineffective at high
temperature, owing to the effects of recrystallization, and particle coarsening and dissolution respectively. Applications require
materials with a high thermal conductivity in combination with high elevated temperature strength in oxygen or hydrogen rich
environments, for which copper based alloys are natural choices. In addition to its high thermal conductivity, copper has the advantage
of a low elastic modulus, which minimizes thermal stresses in actively cooled structures. Copper also offers good machinability, good
formability and, for fusion applications, it is attractive for its excellent resistance to neutron
displacement damage. However, copper requires a considerable improvement in strength to meet the design requirements for
high temperature applications. A substantial amount of recent work has emphasized particle and fiber strengthening of copper
composites, with up to 40 vol. % of reinforcing phase. The dispersion hardening is also called Metal Matrix Composites in the recent
literatures. Copper based composites appear to be a promising material for engineering applications due to their excellent thermo-
physical properties coupled with better high temperature mechanical properties as compared to pure copper and its alloys. In the
copper based metal matrix composite, SiCp is widely used as reinforcing element to the matrix to enhance their various properties.
Further, the metal matrix composites, in which hard ceramic particles are dispersed in a relatively ductile matrix, exhibit a
superior combination of properties such as high elastic modulus, high specific strength, desirable co-efficient of thermal expansion,
high temperature resistance and wear resistance. Metal matrix composites are being increasingly used for structural, automobile and
aerospace industry, sporting goods and general engineering industries. Copper matrix composites have the potential for use as wear
resistance and heat resistant materials; brush and torch nozzle materials and for applications in electrical sliding contacts such as those
in homopolar machines and railway overhead current collector systems where high electrical/thermal conductivity and good wear
resistant properties are needed.
Dispersion particles such as oxides, carbides and borides, which are insoluble in the copper matrix and thermally stable at
high temperature, are being increasingly used as the reinforcement phase.

Surface modification
In the surface modification process, hard-facing is a commonly employed method to improve surface properties. An alloy is
homogeneously deposited onto the surface of a soft material usually by welding, with the purpose of increasing hardness and wear
resistance without significant loss in ductility and toughness of the substrate.
A wide variety of hard-facing alloys is commercially available for protection against wear.
Spray forming, or spray atomization and deposition, is a newly emerging science and technology in the field of materials development and production. Spray forming, as an advanced process, combines the advantages of rapid solidification, semi-solid processing and near-net-shape processing. Spray forming has attracted great attention lately because it brings about a distinct improvement in the microstructure and properties of materials. It can be used for developing new types of materials and for improving the microstructure and properties of commercial materials. The spray-formed Cu-15Ni-8Sn alloy is an example of a newly developed material, in which Ni and Sn are sprayed over the Cu substrate. This alloy is of particular interest because high strength can be achieved together with fairly high conductivity and good corrosion resistance. The alloy may replace Cu-Be alloys in highly demanding applications in electronic equipment, e.g. electrical switchgear, springs, contacts, connectors etc.

Spinodal Decomposition
The theory of spinodal decomposition as developed by Cahn–Hilliard has been discussed in detail by several authors. The principal concept of the theory is described below.
A pair of partially miscible solids, i.e. solids that do not mix in all proportions at all temperatures, shows a miscibility gap in the temperature-composition diagram. Figure 1.1 (Favvas et al., 2008) shows a phase diagram with a miscibility gap (lower frame) and a diagram of the free-energy change (upper frame). Line (1) is the phase boundary. Above this line the two solids are miscible and the system is stable (region s). Below this line there is a meta-stable region (m). Within that region (point a to b) the system is stable (where ∂²G/∂xB² > 0; G = free energy of mixing; xB = concentration of element B). Line (2) is the spinodal. Below this line, the system is unstable (region u) (where ∂²G/∂xB² < 0). Within the spinodal region (u), the unstable phase decomposes into solute-rich and solute-lean regions. This process is called spinodal decomposition. Spinodal decomposition depends on the temperature; for example, above Tc (Figure 1.1) spinodal decomposition will not take place.

Figure 1.1 Phase diagram with a miscibility gap (Favvas et al., 2008)

CONCLUSION
The Hall-Petch method, or grain boundary strengthening, aims to obtain small grains. Smaller grains increase the likelihood of dislocations running into grain boundaries after shorter distances, and these boundaries are very strong dislocation barriers. In general, a smaller grain size makes the material harder. When the grain size approaches sub-micron sizes, however, some materials may become softer. This is simply an effect of another deformation mechanism becoming easier, e.g. grain boundary sliding, at which point all dislocation-related hardening mechanisms become irrelevant.

REFERENCES:
[1] Archard, J.F. (1953), Contact and rubbing of flat surfaces, Journal of Applied Physics, Vol.24, No.8, pp. 981-988.
[2] Arther, (1991), Heat treating of copper alloys, Copper Development Association, ASM Hand Book, Vol. 4, pp. 2002-2007.
[3] Barrett, C.S. (1952), Structure of Metals, Metallurgy and Metallurgical Engineering Series, Second Edition, McGraw-Hill Book Co., Inc.
[4] Copper & Copper Alloy Castings Properties & Applications a handbook published by Copper Development Association, British
Standards Institution, London W1A 2BS TN42 (1991).
[5] Copper The Vital Metal Copper Development Association, British Standards Institution, London W1A 2BS CDA Publication
No. 121, (1998).
[6] Cost-Effective Manufacturing -Design for Production a handbook published by Copper Development Association, British
Standards Institution, London W1A 2BS CDA Publication No 97, (1993).
[7] Copper and copper alloys- compositions, applications and properties a handbook published by Copper Development
Association, British Standards Institution, London W1A 2BS publication No. 120 (2004).
[8] Copper-Nickel Welding and Fabrication, Copper Development Association, British Standards Institution, London W1A 2BS
CDA Publication No. 139, 2013, pp.01-29.
[9] Copper Nickel Sea water piping systems , Application datasheet by Copper Development Association, British Standards
Institution, London W1A 2BS CDA Publication.
[10] Corrosion Resistance of Copper and Copper Alloys, Copper Development Association, British Standards Institution, London
W1A 2BS CDA Publication No. 106.
[11] Donald R. Askeland et al. (2011), Materials science and engineering, Published by Cengage Learning, Third Indian Reprint,
pp. 429.
[12] Equilibrium Diagrams Selected copper alloy diagrams illustrating the major types of phase transformation, Copper
Development Association, British Standards Institution, London W1A 2BS CDA Publication No 94, (1992).
[13] Jay L. Devore. (2008), Probability and statistics for engineers, Cengage Learning.
[14] John W. Cahn. (1966), Hardening by spinodal decomposition, Acta Metallurgica, Vol. 11, No. 12, pp. 1275-1282.
[15] Kodgire V.D. and Kodgire, S.V. (2011), Material Science and Metallurgy for Engineers, 30th Edition, A Text book published
by Everest Publishing house with ISBN 8186314008.
[16] Mike Gedeon (2010), Thermal Strengthening Mechanisms, 2010 Brush Wellman Inc.,Issue No. 18.
[17] Ilangovan, S. and Sellamuthu, R. (2012), An Investigation of the effect of Ni Content and Hardness on the Wear Behaviour of
Sand Cast Cu-Ni-Sn Alloys, International Journal of Microstructure and Materials Properties, Vol. 7, No.4. pp. 316-328.
[18] Naeem, H.T. and Mohammed, K.S.(2013), Microstructural Evaluation and Mechanical Properties of an Al-Zn-Mg-Cu-Alloy
after Addition of Nickel under RRA Conditions, Materials Sciences and Applications, 4, pp.704-711.
[19] Peters, D.T., Michels, H.T. and Powell, C.A. (1999), Metallic coating for corrosion control of marine structures published by
Copper development Association Inc., pp.01-28.
[20] Zhang, J.G., Shi, H.S. and Sun, D.S. (2003), Research in spray forming technology and its applications in metallurgy, Journal
of Materials Processing Technology, Vol.138, No. 1-3, pp.357-360




Hadoop: A Big Data Management Framework for Storage, Scalability, Complexity, Distributed Files and Processing of Massive Datasets
Manoj Kumar Singh¹, Dr. Parveen Kumar²

¹Research Scholar, Computer Science and Engineering, Faculty of Engineering and Technology, Shri Venkateshwara University, Gajraula, U.P., India
²Professor, Department of Computer Science and Engineering, Amity University, Haryana, India

Abstract: Every day people create 2.5 quintillion bytes of data. In the last two years alone more than 90% of the data on the planet has been created, and there is no sign that this will change; in fact, data creation is accelerating. The reason for this enormous explosion in data is that there are so many sources: for example, sensors used to collect atmospheric data, posts on social networking sites, digital pictures and video, daily transaction records, and cell-phone GPS signals, to name a few. Most of this data is called Big Data, and it is characterized by three dimensions: Volume, Velocity and Variety. To derive value from Big Data, organizations need to restructure their thinking. With data growing so quickly, and with unstructured data accounting for 90% of data today, organizations need to look beyond legacy systems and rigid schemas that place extreme restrictions on managing Big Data efficiently and profitably. In this paper we give an in-depth conceptual overview of the modules associated with Hadoop, a Big Data management framework.
Keywords: Hadoop, Big Data Management, Big Data, Large Datasets, MapReduce, HDFS
Introduction:
Organizations across the globe are confronting the same unwieldy problem: an ever-growing amount of data combined with a limited IT infrastructure to manage it. Big Data is considerably more than just a large volume of data accumulating within the organization; it is now the signature of most business ventures, and raw unstructured data is the standard input. Ignoring Big Data is no longer an option: organizations that are unable to manage their data will be overwhelmed by it. Ironically, as organizations' access to ever-increasing amounts of data has grown dramatically, the rate at which an organization can process this gold mine of data has decreased. Extracting derived value from data is what enables an organization to improve profitability and competitive advantage. Today the technology exists to efficiently store, manage and analyze practically unlimited amounts of data, and that technology is called Hadoop [1].

Hadoop?
Apache Hadoop is 100% open source, and pioneered a fundamentally better way of storing and processing data [2]. Rather than relying on expensive, proprietary hardware and different systems to store and process data, Hadoop enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and it can scale without limits [1]. With Hadoop, no data is too big. And in today's hyper-connected world where more data is being created every day, Hadoop's breakthrough advantages mean that businesses and organizations can now find value in data that was until recently considered useless. But what exactly is Hadoop, and what makes it so special? In its basic form, Hadoop is a massively scalable storage and data processing system which complements existing systems by handling data that is typically a problem for them. Hadoop can simultaneously absorb and store any kind of data from a variety of sources [2]. It is a way of storing huge data sets across distributed clusters of servers and then running "distributed" analysis applications in each cluster. It is designed to be robust, in that Big Data applications will continue to run even when failures occur in individual servers or clusters. It is also designed to be efficient, because it does not require applications to shuttle huge volumes of data across the network. It has two main parts: a data processing framework called MapReduce and a distributed file system called HDFS for data storage (Fig. 1).
International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August September 2014
ISSN 2091-2730


90
www.ijergs.org


Fig. 1

These are the parts that are at the heart of Hadoop, but there are several other components: HBase, Pig, Hive, Impala, Sqoop, Chukwa, YARN, Flume, Oozie, Zookeeper, Mahout, Ambari, Hue, Cassandra, and Jaql (Fig. 2). Every module serves its purpose in the larger Hadoop ecosystem, from the administration of huge clusters of datasets to query management. By studying every module and acquiring knowledge of it, we can effectively implement solutions for Big Data [1].





Fig. 2

Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) [1] is a distributed file system designed to run on commodity hardware. Although it shares many similarities with existing distributed file systems, they are quite different: HDFS has a high degree of fault tolerance and is typically deployed on low-cost hardware. HDFS gives efficient access to data and is appropriate for applications with huge data sets.
HDFS has a master-slave architecture, with a single master called the NameNode and numerous slaves called DataNodes. The NameNode manages and stores the metadata of the file system [5]. The metadata is maintained in the main memory of the NameNode to guarantee fast access for the client on read/write requests [5]. DataNodes store and service read/write requests on files in HDFS, as directed by the NameNode (Fig. 3i). The files stored in HDFS are replicated to a configurable number of DataNodes, to guarantee reliability and data availability. These replicas are distributed across the cluster to enable fast computation. Files in HDFS are divided into smaller blocks, typically of 64 MB, and each block is replicated and stored on multiple DataNodes. The NameNode maintains the metadata for each file stored in HDFS in its main memory. This includes a mapping between stored filenames, the corresponding blocks of each file and the DataNodes that host these blocks. Hence, every request by a client to create, write, read or delete a file passes through the NameNode (Fig. 3ii). Using the stored metadata, the NameNode directs each client request to the appropriate set of DataNodes. The client then communicates directly with the DataNodes to perform file operations [5].
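
The block/replica bookkeeping described above can be modelled with a small toy sketch; the DataNode names and the replication factor are illustrative assumptions.

import random

BLOCK_SIZE = 64 * 1024 * 1024        # typical 64 MB HDFS block
REPLICATION = 3                      # assumed replication factor
datanodes = [f"datanode-{i}" for i in range(1, 7)]

def namenode_place(filename, file_size):
    # NameNode-style metadata: filename -> blocks -> hosting DataNodes.
    n_blocks = -(-file_size // BLOCK_SIZE)       # ceiling division
    return {f"{filename}#blk{b}": random.sample(datanodes, REPLICATION)
            for b in range(n_blocks)}

# A 200 MB file becomes four blocks, each replicated on three DataNodes.
for block, replicas in namenode_place("records.csv", 200 * 1024 ** 2).items():
    print(block, "->", replicas)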


Fig 3(i) Fig 3(ii)

MapReduce
MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster. Computational processing can occur on data stored either in a file system (unstructured) or in a database (structured) [16]. MapReduce can exploit locality of data, processing it on or near the storage assets to reduce the distance over which it must be transmitted. The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem and passes the answer back to its master node. The master node then collects the answers to all the sub-problems and combines them in some way to form the output, the answer to the problem it was originally trying to solve. The MapReduce engine consists of a JobTracker and a TaskTracker. MapReduce jobs are submitted to the JobTracker by the client [6]. The JobTracker passes the job to the TaskTracker nodes, trying to keep the work close to the data. Since HDFS is a rack-aware file system, the JobTracker knows which node holds the data and which other machines are nearby. If the work cannot be hosted on the actual node where the data resides, priority is given to nodes in the same rack. This reduces network traffic on the main backbone network. If a TaskTracker fails or times out, that part of the job is rescheduled.
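
A minimal local simulation of the map-shuffle-reduce flow (word counting, the customary example; on a real cluster the same two functions could be deployed, e.g. via Hadoop Streaming):

from collections import defaultdict

def mapper(line):
    for word in line.split():
        yield word.lower(), 1            # map: emit (key, value) pairs

def reducer(word, counts):
    return word, sum(counts)             # reduce: combine values per key

lines = ["Hadoop stores data", "Hadoop processes data"]
groups = defaultdict(list)
for line in lines:                       # map phase
    for key, value in mapper(line):
        groups[key].append(value)        # shuffle: group by key
for word in sorted(groups):              # reduce phase
    print(reducer(word, groups[word]))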


HBase
HBase is the Hadoop application to use when you require real-time read/write random access to very large datasets. It is a non-relational distributed database model [17]. HBase provides row-level queries and, unlike Hive, can be used for real-time application processing. Although HBase is not an exact substitute for a traditional RDBMS, it offers both linear and modular scalability and strictly maintains consistency of reads and writes, which in turn supports automatic failover. HBase is not relational and does not support SQL, but given the right problem space, it can do what an RDBMS cannot: host very large, sparsely populated tables on clusters built from commodity hardware [18]. The canonical HBase use case is the webtable, a table of crawled web pages and their attributes (such as language and MIME type) keyed by the page URL. The webtable is huge, with row counts that run into the billions.

Pig (Programming Tool)

Pig is a high-level platform for creating MapReduce programs used with Hadoop. The language for this platform is called Pig Latin [19]. Pig was initially developed at Yahoo! to allow people using Hadoop to focus more on analyzing large data sets and spend less time having to write mapper and reducer programs. The Pig programming language is designed to handle any kind of data. Apache Pig, which includes the Pig Latin programming language for expressing data flows, is a high-level dataflow language used to reduce the complexities of MapReduce by converting its operators into MapReduce code. It allows SQL-like operations to be performed on large distributed datasets. Pig Latin abstracts the programming from the Java MapReduce idiom into a notation which makes MapReduce programming high-level, similar to that of SQL for RDBMS systems [20]. Pig Latin can be extended using UDFs (User Defined Functions), which the user can write in Java, Python, JavaScript, Ruby or Groovy and then call directly from the language.
Hive
Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis [1]. It was initially developed at Facebook [21]. Hive was created to make it feasible for analysts with strong SQL skills to run queries on the
colossal volumes of data that Facebook stored in HDFS. When starting Hive for the first time, we can check that it is working by listing its tables: there should be none. The command must be terminated with a semicolon to tell Hive to execute it:

hive> SHOW TABLES;
OK

Hive lacks a few things compared with an RDBMS, however; for example, it is best suited for batch jobs, not real-time application processing (Fig. 4). Hive lacks full SQL support and does not provide row-level inserts, updates or deletes. This is where HBase, another Hadoop module, is worth considering [22].



Fig. 4

Zookeeper

Zookeeper is a high-performance coordination service for distributed applications, in which distributed processes coordinate with one another through a shared hierarchical namespace of data registers. Zookeeper addresses particular concerns that arise while designing and developing coordination services [23]. The configuration service helps store configuration data and share it across all nodes in the distributed setup. The naming service allows one node to find a specific machine in a cluster of thousands of servers. The synchronization service provides the building blocks for locks, barriers and queues. The locking service allows serialized access to a shared resource in the distributed system. The leader-election service helps the system recover from automatic failure. Zookeeper is highly performant, too: at Yahoo!, where it was created, Zookeeper's throughput has been benchmarked at more than 10,000 operations per second for write-dominant workloads.
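
As a client-side sketch of the locking service, assuming the kazoo Python client and a ZooKeeper ensemble reachable at the given address (host, lock path, and identifier are illustrative):

from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # assumed ensemble address
zk.start()

lock = zk.Lock("/locks/shared-resource", "worker-1")
with lock:
    # Only one process holds the lock at a time: serialized access
    # to a shared resource in the distributed system.
    print("holding the distributed lock")

zk.stop()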

Oozie
Oozie is a Java web application that runs in a Java servlet container (Tomcat) and uses a database to store workflow definitions and currently running workflow instances, including instance states and variables. An Oozie workflow is a collection of actions (e.g. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control-dependency DAG (Direct Acyclic Graph), specifying a sequence of actions to execute [10]. With so many Hadoop jobs running on different clusters, there was a need for a scheduler, which is when Oozie came onto the scene. The highlight of Oozie is that it joins multiple sequential jobs into one logical unit of work. There are two basic types of Oozie jobs: Oozie Workflow Jobs, which resemble a Directed Acyclic Graph specifying a sequence of jobs to be executed, and Oozie Coordinator Jobs, which are recurrent Workflow Jobs triggered by date, time and data availability.

Ambari
Ambari is a tool for provisioning, managing, and monitoring Hadoop clusters. Its large collection of administration tools and APIs hides the complexity of Hadoop, thereby simplifying the operation of clusters. Regardless of the size of the cluster, Ambari simplifies the deployment and maintenance of hosts. It preconfigures alerts for watching Hadoop services, and visualizes and displays cluster operations in a simple web interface. Its job-diagnostic tools help visualize job interdependencies and view timelines of historical job execution for troubleshooting [9]. The latest version supports HBase multi-master, host controls and improved local repository setup.


Sqoop
Sqoop is a tool which provides a platform for the exchange of data between Hadoop and relational databases, data warehouses and NoSQL datastores. The transformation of the imported data is carried out using MapReduce or another high-level language like Pig, Hive or Jaql [1]. Sqoop imports a table from a database by running a MapReduce job that
extracts rows from the table and writes the records to HDFS. How does MapReduce read the rows? This section explains how Sqoop works under the hood.

At a high level, the figure shows how Sqoop interacts with both the database source and Hadoop. Like Hadoop itself, Sqoop is written in Java. Java provides an API called Java Database Connectivity, or JDBC, that allows applications to access data stored in an RDBMS and to inspect the nature of this data.
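
A conceptual stand-in for that import path is sketched below, using the standard-library sqlite3 module in place of JDBC and a local file in place of an HDFS file; the table and column names are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [(1, "Asha"), (2, "Ravi")])

# Read rows from the relational table and write delimited records,
# the way Sqoop's generated MapReduce job writes rows out to HDFS.
with open("patients.csv", "w") as out:
    for row in conn.execute("SELECT id, name FROM patients"):
        out.write(",".join(map(str, row)) + "\n")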

YARN

Yet Another Resource Negotiator (YARN): the initial release of Hadoop faced issues where the cluster was hard-coupled with Hadoop and there were several cascading failures. This prompted the development of a framework called YARN [8]. Unlike the previous version, the addition of YARN has given better scalability, cluster utilization and user agility. The incorporation of MapReduce as a YARN framework has provided full backward compatibility with existing MapReduce tasks and applications. It promotes effective use of resources while providing a distributed environment for the execution of an application. The advent of YARN has opened up the possibility of building new applications on top of Hadoop.

JAQL
JAQL is a JSON-based query language, which is high-level in much the same way as Pig Latin and MapReduce. To exploit massive parallelism, JAQL converts high-level queries into low-level queries. Like Pig, JAQL does not impose the requirement of having a schema [15]. JAQL supports various built-in functions and core operators. Input and output operations in JAQL are performed using I/O adapters, which are in charge of processing, storing and interpreting data and returning results in JSON format.


Impala
Impala is an open-source query engine for massively parallel processing, developed by Cloudera, that runs natively on Hadoop. The key benefits of using Impala are that it can perform interactive analysis in real time and reduce data movement and duplicate storage, thereby lowering costs and providing integration with leading Business Intelligence tools.

Flume
One very common use of Hadoop is taking web server or other logs from a large number of machines and periodically processing them to extract analytics information. The Flume project is designed to make the data-gathering process easy and scalable, by running agents on the source machines that pass the data updates to collectors, which then aggregate them into large chunks that can be efficiently written as HDFS files. It is usually set up using a command-line tool that supports common operations, such as tailing a file or listening on a network socket, and has tunable reliability guarantees that let you trade off performance against the potential for data loss.

Hue
Hue stands for Hadoop User Experience. It is an open-source GUI for Hadoop, developed by Cloudera. Its objective is to free the user from worrying about the underlying and backend complexity of Hadoop. It has an HDFS file browser, YARN & MapReduce job browsers, HBase and Zookeeper browsers, Sqoop and Spark managers, a query editor for Hive and Pig, an application for Oozie workflows, access to a shell, and an application for Solr searches [12].
Chukwa
Chukwa is a data collection system for monitoring large distributed systems. It is built on top of HDFS and the MapReduce framework and inherits Hadoop's scalability and robustness. It transfers data to collectors and saves data to HDFS [13]. It holds data sinks which store raw, unsorted data. A facility called Demux is used to add structure, creating Chukwa records which eventually go to the database for analysis. It includes a flexible toolkit for displaying, monitoring and analyzing results, to make better use of the collected data.

Mahout
Mahout is an open-source framework that can run common machine learning algorithms on massive datasets. To achieve that scalability, most of the code is written as parallelizable jobs on top of Hadoop. Mahout is a scalable machine learning library built on top of Hadoop, focusing on collaborative filtering, clustering and classification [11]. With data growing at a faster rate every year, Mahout answers the need to move beyond yesterday's techniques to process tomorrow's data. It comes with algorithms to perform a lot of common tasks, such as clustering and classifying items into groups, recommending items based on other users' behaviour, and spotting attributes that frequently occur together. It is a heavily used project with an active community of developers and users, and it is well worth trying if you have any large volume of transaction or similar data that you would like to get more value out of.

Cassandra
Cassandra was developed to address the shortcomings of traditional databases. It follows a NoSQL structure and consequently delivers linear scalability, and it provides fault tolerance via automatic replication to multiple nodes on commodity hardware or available cloud infrastructure services. It boasts low latency and tolerates local outages [14]. It is decentralized, elastic, and has highly available asynchronous operations which are enhanced with various features.

Conclusion

Nowadays, although Hadoop may be well suited for large amounts of data, it is not the intended solution or replacement for all problems. Only in the case of data sets approaching exabytes, demanding large storage, scalability, complexity handling and distributed files, is Hadoop a suitable option. Apart from explaining the capabilities of the framework, this paper gives insight into the functionalities of the different modules in Hadoop. With data growing constantly, it is apparent that Big Data and its applications are the technology of the future. Soon, almost all industries and organizations around the globe will adopt Big Data technology for data management.

REFERENCES:

[1] Tom White, Hadoop: The Definitive Guide, O'Reilly Media, 2012 Edition.

[2] Intel IT Center, Planning Guide: Getting Started with Big Data.

[3] Academia.edu, Processing Big Data using Hadoop Framework.

[4] Robert D. Schneider, Hadoop for Dummies.

[5] Hadoop Distributed File System Architecture Guide, Online: http://hadoop.apache.org/docs/stable1/hdfs_design.html

[6] Donald Miner, Adam Shook, MapReduce Design Patterns, O'Reilly Media, 2012 Edition.

[7] Jason Venner, Pro Hadoop, Apress, 2009 Edition.

[8] Hadoop YARN (Yet Another Resource Negotiator), Hortonworks, Online: http://hortonworks.com/hadoop/yarn/

[9] Apache Ambari, Hortonworks, Online: http://hortonworks.com/hadoop/ambari/

[10] Apache Oozie, Hortonworks, Online: http://hortonworks.com/hadoop/oozie/

[11] Sean Owen, Robin Anil, Ted Dunning, Ellen Friedman, Mahout in Action, Manning, 2011 Edition.

[12] Apache Hue, Online: http://gethue.tumblr.com/

[13] Chukwa Processes and Data Flow, Online: http://wiki.apache.org/hadoop/Chukwa_Processes_and_Data_Flow/

[14] Eben Hewitt, Cassandra: The Definitive Guide, O'Reilly Media, 2010 Edition.

[15] http://en.wikipedia.org/wiki/Jaql

[16] http://en.wikipedia.org/wiki/MapReduce

[17] http://en.wikipedia.org/wiki/Apache_HBase

[18] http://hbase.apache.org/

[19] http://pig.apache.org/

[20] http://en.wikipedia.org/wiki/Pig_(programming_tool)

[21] https://hive.apache.org/

[22] http://www-01.ibm.com/software/data/infosphere/hadoop/hive/

[23] Aaron Ritchie, Henry Quach, Developing Distributed Applications Using Zookeeper, Big Data University, Online: http://bigdatauniversity.com/bduwp/bdu-course/developin-distributed-applications-using-zookeeper














VLSI Based Design of Low Power and Linear CMOS Temperature Sensor
Poorvi Jain¹, Pramod Kumar Jain²
¹Research Scholar (M.Tech), Department of Electronics and Instrumentation, SGSITS, Indore
²Associate Professor, Department of Electronics and Instrumentation, SGSITS, Indore
E-mail: pjpoorvijain1@gmail.com
Abstract: A Complementary Metal Oxide Semiconductor (CMOS) temperature sensor is introduced in this paper, which aims at developing the MOSFET as a temperature-sensing element operating in the sub-threshold region by using dimensional analysis and numerical optimization techniques. A linear CMOS temperature-to-voltage converter is proposed which focuses on temperature measurement using the difference between the gate-source voltages of transistors, which is proportional to absolute temperature, with low power. The proposed CMOS temperature sensor is able to measure temperatures in the range from 0 °C to 120 °C. A comparative study is made between the temperature sensors based on their aspect ratio under the implementation of a UMC 180 nm CMOS process with a single-rail power supply of 600 mV.
Keywords: Aspect ratio, CMOS (Complementary Metal Oxide Semiconductor), MOSFET (Metal Oxide Semiconductor Field Effect Transistor), sub-threshold, temperature sensor, low power, linearity.
INTRODUCTION
An important issue for powerful, high-speed computing systems (containing microprocessor cores and high speed DRAM) is thermal
management. This is of special concern with laptops and other portable computing devices where the heat sinks and/or fans can only
help dissipate the heat to a limited degree. This makes variations in clock frequency and/or variation in modes of device operation for
DRAM, Flash, and other systems necessary. On-chip smart CMOS temperature sensors have been commonly used for thermal
management in these applications. The main factors to be considered for a temperature sensor are as follows:
Power: In VLSI implementation many small devices are incorporated, resulting in ever higher levels of integration and considerable heat dissipation. There is therefore a need to reduce power, which also reduces the production cost. For this purpose the power consumption must be in the nanowatt range.
Area: Series-connected MOSFETs used for the current sink increase the die area of the design, and the sizing of the transistors also plays an important role in deciding the chip area. The area should be small, approximately 0.002 mm².
Start-up circuit: A start-up circuit is required in the design if the transient response of the sensor takes a significant amount of time to reach steady state. If the steady-state time is less than 200 ms in the worst case, the need for a start-up circuit is eliminated.
As CMOS technology scales down, the supply voltage also scales down from one generation to the next. It becomes difficult to guarantee that all the transistors work in saturation as the supply voltage drops. Therefore, the traditional temperature sensor configuration is not suitable for ultra-low-voltage applications, and the sensor should incorporate some modifications. This modification can be brought about by making the MOS transistors work in the sub-threshold region.
This paper presents a nanowatt integrated temperature sensor for ultra-low-power applications, such as battery-powered portable devices, designed and simulated using the Cadence analog and digital system design tools in UMC 180 nm CMOS technology. Ultra-low power consumption is achieved through the use of sub-threshold (also known as weak inversion) MOS operation. Transistors are used in this regime because the current is exponentially dependent on the control voltages of the MOSFET, and they draw small currents, which reduces the power consumption. The sensor sinks current in nano-amperes from a single power supply of 0.6 V and its power consumption is in the nanowatt range. The performance of the sensor is highly linear in the range of 0-120 °C.


PROPOSED SCHEME
The proposed CMOS temperature sensor, as shown in Fig 1, consists of three main blocks:

Fig 1. Circuit diagram of CMOS temperature sensor
(i) Current-source sub-circuit: Analog circuits incorporate current references, which are self-biasing circuits. Such references are dc quantities that exhibit little dependence on the supply and process parameters and a well-defined dependence on the temperature.
(ii) Temperature-variable sub-circuit: The temperature-variable sub-circuit consists of three pairs of serially connected transistors operating in the sub-threshold region. It accepts current through PMOS current mirrors (a current mirror gives a replica, attenuated or amplified if necessary, of a bias or signal current) and produces an output voltage proportional to temperature.
(iii) One-point calibration sub-circuit: Calibration consists of determining the indication or output of a temperature sensor with respect to that of a standard at a sufficient number of known temperatures so that, with acceptable means of interpolation, the indication or output of the sensor will be known over the entire temperature range of use. After packaging, the sensor is calibrated by measuring its die temperature at a reference point using on-chip calibration transistors.
METHODOLOGY
The sub-threshold drain current I_D of a MOSFET is an exponential function of the gate-source voltage V_GS and the drain-source voltage V_DS, and is given by [8]:

    I_D = K I₀ exp[(V_GS − V_TH)/(η V_T)] · [1 − exp(−V_DS/V_T)]        (1)

where

    I₀ = μ C_OX (η − 1) V_T²        (2)

Here K is the aspect ratio (= W/L) of the transistor, μ is the carrier mobility, C_OX is the gate-oxide capacitance, V_T is the thermal voltage, V_TH is the threshold voltage of the MOSFET, and η is the sub-threshold slope factor.
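To make the exponential dependence of Eq. (1) concrete, the short Python sketch below evaluates I_D over a small V_GS sweep; the device values I₀, V_TH and η used here are illustrative assumptions, not extracted UMC 180 nm model parameters.

# Evaluate the sub-threshold drain current of Eqs. (1)-(2) for a V_GS sweep.
# I0, Vth and eta are illustrative assumptions, not UMC 180 nm model values.
import math

VT  = 0.0259   # thermal voltage kT/q at ~300 K, volts
I0  = 1e-7     # lumped K*I0 pre-factor, amperes (assumed)
Vth = 0.45     # threshold voltage, volts (assumed)
eta = 1.5      # sub-threshold slope factor (assumed)

def drain_current(vgs, vds):
    # Eq. (1): exponential in V_GS, with the (1 - exp(-V_DS/V_T)) drain term.
    return I0 * math.exp((vgs - Vth) / (eta * VT)) * (1.0 - math.exp(-vds / VT))

for vgs in (0.20, 0.25, 0.30):
    print("V_GS = %.2f V -> I_D = %.3e A" % (vgs, drain_current(vgs, vds=0.3)))
# Each increase of eta*VT*ln(10) (~90 mV here) in V_GS raises I_D about tenfold.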


In the current-source sub-circuit, the gate-source voltage V_GS9 is equal to the sum of the gate-source voltage V_GS8 and the drain-source voltage V_DS10:

    V_GS9 = V_DS10 + V_GS8        (3)

so that, applying Eq. (1) to M8 and M9, which carry the same current,

    V_DS10 = V_GS9 − V_GS8 = η V_T ln(K₈/K₉)        (4)

M10 is operated in the sub-threshold region, so its drain-source conductance G_DS10 is obtained by using Eqs. (1) and (4):

    G_DS10 = ∂I_D10/∂V_DS10 = (I_D10/V_T) · exp(−V_DS10/V_T) / [1 − exp(−V_DS10/V_T)]        (5)

and the equivalent resistance of M10, which sets the bias current I of the reference, is

    R_DS10 = 1/G_DS10        (6)
As M10 operates in the sub-threshold region (V_m − V_TH10 < 0), I increases with temperature, so the highest power consumption occurs at the upper temperature limit. Choosing the maximum current is a trade-off between power consumption and linearity that can be resolved by simulation. In the temperature-variable sub-circuit, since M5, M6, M15, M12, M16 and M17 are in the sub-threshold region, the relation between the gate-source voltage and the MOS current follows Eq. (4). According to Fig. 1, the currents of M5, M12, M16 and M17 are I, and the currents of M6 and M15 are 3I and 2I, respectively. The transistor sizes in our design are simple, having the same aspect ratio.

    V_out = (V_GS6 − V_GS5) + (V_GS15 − V_GS12) + (V_GS16 − V_GS17)        (7)

By using Eq. (4) with regard to the currents of the MOSFETs, the output voltage is given by:

    V_out = η V_T ln[(I₆ I₁₅ I₁₆ K₅ K₁₂ K₁₇)/(I₅ I₁₂ I₁₇ K₆ K₁₅ K₁₆)] + V₀        (8)

Replacing the currents of the transistors (I₆ = 3I, I₁₅ = 2I, and I₅ = I₁₂ = I₁₆ = I₁₇ = I), the output voltage is obtained as:

    V_out = η V_T ln[6 K₅ K₁₂ K₁₇/(K₆ K₁₅ K₁₆)] + V₀        (9)

Since the thermal voltage V_T = kT/q is proportional to absolute temperature, combining Eqs. (7) and (9) the output voltage can be written as:

    V_out = η (kT/q) ln[6 K₅ K₁₂ K₁₇/(K₆ K₁₅ K₁₆)] + V₀ = A·T + B        (10)
where T is the absolute temperature and A and B are temperature-independent constants. Eq. (10) shows a linear relationship between absolute temperature and output voltage, as depicted in Fig 3. Based on the aspect ratio (W/L), the temperature sensor is designed in two ways:
(i) Temperature sensor based on designed W/L ratios: In this design the MOS transistors used in the circuit diagram have different widths and lengths. By using large-length transistors (L_M6-11 >> L_min), the sensitivity to geometric variations can be minimized and an accurate temperature coefficient is expected. Large transistors also help to reduce the impact of random doping fluctuations on the threshold voltage. The W/L ratios of the MOS transistors are given in Table I.
(ii) Temperature sensor based on minimum W/L ratio: In this design all the MOS transistors used in the circuit diagram have the same width and length, equal to the minimum technology parameters, i.e. a width (W) of 240 nm and a length (L) of 180 nm. Equation (10) then becomes:

    V_out = η (kT/q) ln 6 + V₀ = A·T + B
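As a quick numerical check of Eq. (10) for the minimum-W/L case, the Python sketch below evaluates V_out = η(kT/q)·ln 6 + V₀ and extracts the slope A; the values η = 2.3 and V₀ = 0 are assumptions for illustration, not values extracted from the fabricated design.

# Numerical check of Eq. (10): V_out = eta*(k*T/q)*ln(6) + V0 (minimum-W/L case).
# eta and V0 are illustrative assumptions, not the paper's extracted values.
import math

k_over_q = 8.617e-5          # Boltzmann constant over electron charge, V/K
eta, V0 = 2.3, 0.0           # assumed sub-threshold slope factor and offset

def v_out(temp_c):
    T = temp_c + 273.15      # absolute temperature in kelvin
    return eta * k_over_q * T * math.log(6) + V0

A = eta * k_over_q * math.log(6)   # slope of the ideal line V_out = A*T + B
print("sensitivity A = %.3f mV/degC" % (A * 1e3))
print("V_out(0 degC) = %.1f mV, V_out(120 degC) = %.1f mV"
      % (v_out(0) * 1e3, v_out(120) * 1e3))

With this assumed η the slope works out to about 0.355 mV/°C, which is close to the 0.354 mV/°C sensitivity reported for the minimum-W/L sensor in Table III.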

Table I. Size of transistors

    Transistor           Multiplier × W/L (µm/µm)
    M1                   1 × (1.5/20)
    M2                   10 × (3/3)
    M3                   4 × (3/3)
    M6, M8, M10          1 × (1/3)
    M7                   3 × (3/3)
    M9                   4 × (3/3)
    M11                  28 × (3/3)
    M4, M5, M12-M14      1 × (3/10)
    MC1, MC2             1 × (1/20)



Fig 2. Linear relationship of output voltage and temperature for the temperature sensor based on designed W/L.
Fig 3. Linear relationship of output voltage and temperature for the temperature sensor based on minimum W/L.


Fig 4. Sink current versus temperature for the minimum W/L sensor.
Fig 5. Sink current versus temperature for the designed W/L sensor.
Fig 6. Power versus temperature for the temperature sensor based on designed W/L.
Fig 7. Power versus temperature for the temperature sensor based on minimum W/L.



Fig 8. Transient response of the designed W/L sensor at 17 °C.
Fig 9. Transient response of the minimum W/L sensor at 17 °C.
Table II. Comparison of the temperature sensor with previous works

    Sensor                     Power supply   Power cons.   Temp. range     Inaccuracy        Process
    [1]                        0.5, 1 V       119 nW        -10 to 30 °C    -0.8 to +1 °C     180 nm CMOS
    [2]                        -              38.5 µW       -               ±2 °C             65 nm CMOS
    [3]                        2.7-5.5 V      429 µW        -50 to 125 °C   ±0.5 °C           0.5 µm CMOS
    [4]                        1 V            220 nW        0 to 100 °C     -1.6 to +3 °C     180 nm CMOS
    [5]                        3.0-3.8 V      10 µW         0 to 100 °C     -0.7 to +0.9 °C   0.35 µm CMOS
    [6]                        1 V            25 µW         50 to 125 °C    -1 to +0.8 °C     90 nm CMOS
    [7]                        -              8.6 µW        -55 to 125 °C   ±0.4 °C           160 nm CMOS
    [8]                        0.6-2.5 V      7 nW          10 to 120 °C    ±2 °C             180 nm CMOS
    This work (designed W/L)   0.6-2.5 V      12.5 nW       0 to 120 °C     ±3 °C             180 nm CMOS
    This work (minimum W/L)    0.6-2.5 V      1.05 nW       0 to 120 °C     6-7 °C            180 nm CMOS

SIMULATION RESULTS AND DISCUSSION
A linear temperature sensor that incorporates semiconductor devices is capable of high accuracy over a very wide temperature range, such that the voltage drop varies approximately linearly, with negative or positive dependence on temperature. The linear relationship between output voltage and temperature at a supply voltage of 600 mV is shown in Fig 2 and Fig 3. The temperature sensor based on the designed W/L sinks up to 28 nA over a wide range of temperature with a V_DD of 600 mV, as given in Fig 5. As the sink current increases exponentially with temperature, the power consumption also increases; the overall power consumption of this design is higher than that of the temperature sensor based on the minimum W/L ratio, as shown in Fig 4. As the temperature is increased, the power consumed by the sensor also increases; it is merely 12.5 nW at 120 °C, as shown in Fig 7. The power consumed by this sensor is more than that of the temperature sensor based on the minimum W/L, given in Fig 6. At a temperature of 15 °C, sustained oscillations are obtained in the case of the designed-aspect-ratio temperature sensor; at 17 °C a smooth response is obtained spontaneously, as shown in Fig 8, and on further increasing the temperature, oscillations again become dominant. This shows that 17 °C gives the best transient response. In the case of the minimum aspect ratio, 17 °C is likewise the suitable temperature for the transient response, as given in Fig 9. The transient response of the temperature sensor based on the designed W/L is more practically realizable (similar to a unit-step response) than that of the temperature sensor based on the minimum W/L.
CONCLUSION

This research investigates an ultra-low-power temperature sensor. Tables II and III show the comparison of the designed sensor with previous works and its performance summary. As oscillations were pronounced in the transient response of the temperature sensor, they can be eliminated by using a Proportional-Integral-Derivative controller based on the IMC approach, so as to obtain a smooth steady-state response at a particular temperature. The transient response is helpful in determining the need for a start-up circuit: if the steady-state time is less than 200 ms, there is no need for a start-up circuit. The temperature sensors based on the two aspect-ratio approaches are each significant according to their performance in the relevant desired characteristics. The layouts of the sensors are shown in Fig 10 and Fig 11. From the area and power points of view, the temperature sensor based on the minimum aspect ratio is preferred, whereas considering linearity, temperature inaccuracy and transient response, the temperature sensor based on the designed aspect ratio is dominant.
Table III. Performance summary

    Parameter                       Designed W/L sensor                 Minimum W/L sensor
    Power supply                    0.6-2.5 V                           0.6-2.5 V
    Power consumption               12.5 nW @ 120 °C, V_DD = 0.6 V      1.05 nW @ 120 °C, V_DD = 0.6 V
    Circuit area                    0.0076 mm²                          0.00013 mm²
    Inaccuracy versus temperature   ±3 °C                               6-7 °C
    Inaccuracy versus V_DD          0.52 °C/V                           0.47 °C/V
    Sensitivity                     1.41 mV/°C                          0.354 mV/°C
    Transient response              Stable at 17 °C                     Sustained oscillations
    Sink current                    28 nA @ 120 °C, V_DD = 0.6 V        5 nA @ 120 °C, V_DD = 0.6 V
    Transconductance                5.005 nA/V @ 25 °C, V_DD = 0.6 V    0.815 nA/V @ 25 °C, V_DD = 0.6 V
    Transresistance                 22.7 MΩ @ 25 °C, V_DD = 0.6 V       416.6 MΩ @ 25 °C, V_DD = 0.6 V



Fig 10. Layout of the CMOS temperature sensor based on minimum aspect ratio.
Fig 11. Layout of the CMOS temperature sensor based on designed aspect ratio.

REFERENCES:
[1] Law, M. K., Bermak, A., and Luong, H. C. A sub-µW embedded CMOS temperature sensor for RFID food monitoring application. IEEE Journal of Solid-State Circuits, 45(6), 1246-1255, (2010).

[2] Intel Pentium D Processor 900 Sequence and Intel Pentium Processor Extreme Edition 955 Datasheet, on 65 nm process in the 775-Land LGA package, supporting Intel Extended Memory 64 Technology and Intel Virtualization Technology, Intel Corp., Document 310306-002, (2006).

[3] Pertijs, M. A. P., Niederkorn, A., Xu, M., McKillop, B., Bakker, A., and Huijsing, J. H. A CMOS smart temperature sensor with a 3σ inaccuracy of ±0.5 °C from -50 °C to 120 °C. IEEE Journal of Solid-State Circuits, 40(2), 454-461, (2005).

[4] Lin, Y. S., Sylvester, D., and Blaauw, D. An ultra low power 1 V, 220 nW temperature sensor for passive wireless applications. In Custom Integrated Circuits Conference (CICC), IEEE, pp. 507-510, (2008).

[5] Chen, P., Chen, C. C., Tsai, C. C., and Lu, W. F. A time-to-digital-converter-based CMOS smart temperature sensor. IEEE Journal of Solid-State Circuits, 40(8), 1642-1648, (2005).

[6] M. Sasaki, M. Ikeda, and K. Asada, "A temperature sensor with an inaccuracy of -1/+0.8 °C using 90-nm 1-V CMOS for online thermal monitoring of VLSI circuits," IEEE Trans. Semiconductor Manufacturing, Vol. 21, No. 2, pp. 201-208, May (2008).

[7] Souri, K., Chae, Y., Ponomarev, Y., and Makinwa, K. A. A precision DTMOST-based temperature sensor. In Proceedings of the ESSCIRC, pp. 279-282, (2011).

[8] Sahafi, H., Sobhi, J., and Koozehkanani, Z. D., "Nanowatt CMOS temperature sensor," Analog Integrated Circuits and Signal Processing, Springer, 75: 343-348, (2013).

[9] Ueno, K., Asai, T., and Amemiya, Y. Low-power temperature-to-frequency converter consisting of sub-threshold CMOS circuits for integrated smart temperature sensors. Sensors and Actuators A: Physical, 165, 132-137, (2011).

[10] Balachandran, G. K., and Barnett, R. E. A 440-nA true random number generator for passive RFID tags. IEEE Transactions on Circuits and Systems I: Regular Papers, 55(11), 3723-3732, (2008).

[11] Bruce W. Ohme, Bill J. Johnson, and Mark R. Larson, "SOI CMOS for extreme temperature applications," Honeywell Aerospace, Defense and Space, Honeywell International, Plymouth, Minnesota, USA, (2012).

[12] Q. Chen, M. Meterelliyoz, and K. Roy, "A CMOS thermal sensor and its applications in temperature adaptive design," Proc. of the 7th Int'l Symposium on Quality Electronic Design, (2006).

[13] Man Kay Law and A. Bermak, "A 405-nW CMOS temperature sensor based on linear MOS operation," IEEE Transactions on Circuits and Systems II, vol. 56, no. 12, December (2009).















Analysis of Advanced Techniques to Eliminate Harmonics in AC Drives
Amit P. Wankhade¹, Prof. C. Veeresh²
²Assistant Professor, MIT Mandsaur
E-mail: amitwankhade03@gmail.com

Abstract: Variable-speed AC drives are finding their place in all types of industrial and commercial loads. This work covers current-source converter technologies, including pulse-width-modulated current-source inverters (CSIs); in addition, it addresses the present status of direct converters and gives an overview of the commonly used modulation schemes for VFD systems. The proposed work simulates a three-phase PWM current-source-inverter-fed induction motor (CSI-IM) drive system using Matlab/Simulink simulation software. It primarily presents a unified approach for generating pulse-width-modulated patterns for three-phase current-source rectifiers and inverters (CSRs/CSIs) that provides unconstrained selective harmonic elimination and fundamental current control. The conversion process generates harmonics in the motor current waveform. This project deals with the analysis of the motor current harmonics using FFT analysis and the use of a filter to mitigate them for smooth operation of the motor. The filter used for the reduction of harmonics is a passive filter, designed to reduce only the 5th and 7th order harmonics. The analysis of the motor current harmonics is done first without the filter and then compared with the results after the addition of the filter. It is found that the 5th and 7th order harmonics are reduced considerably.
Keywords: Harmonics, total harmonic distortion (THD), variable frequency drive (VFD), power factor, current source inverter (CSI), Fast Fourier Transform (FFT).
INTRODUCTION
The proposed work is based on a current-source-inverter-fed induction motor scheme. At the front end a current-source rectifier is connected, which rectifies the 6.6 kV AC voltage to DC. The inverter converts the DC voltage back into AC and then supplies the induction motor. The switches used in the rectifier and inverter are GTOs and SCRs, which require triggering pulses. The triggering pulses are given by a discrete six-pulse generator connected to the gates of both the rectifier and the inverter, each section having six switching devices. Due to the switching processes, harmonics are produced in the system. The output of the inverter is AC but not sinusoidal, owing to the switching time taken by the switches, and is quasi-square in form, which is the main cause of harmonics. As six switches are used, the harmonics that are dangerous to the system are the 5th and 7th, so the main focus is to reduce these harmonic orders. For doing so a low-pass filter is used: an LC filter designed by selecting the values of inductor and capacitor, i.e. a passive filter. The output of the induction motor is given to the bus bar, which shows the stator, rotor and mechanical quantities. As our main focus is on the stator-side current, we choose the stator quantities from the bus bar. A scope is connected to observe the waveforms.
METHODOLOGY
Adding a variable frequency drive (VFD) to a motor-driven system can offer potential energy savings in a system in which the loads vary with time. The operating speed of a motor connected to a VFD is varied by changing the frequency of the motor supply voltage. This allows continuous process speed control. Motor-driven systems are often designed to handle peak loads with a safety factor, which often leads to energy inefficiency in systems that operate for extended periods at reduced load. The ability to adjust motor speed enables closer matching of motor output to load and often results in energy savings. The VFD basically consists of a rectifier section which converts the AC supply into DC, a DC choke which smooths the DC output current, and an inverter section which converts the DC back into an AC supply that is fed to the induction motor. The VFD uses switching devices such as diodes, IGBTs, GTOs, SCRs, etc. [1]


Fig.1: Generalized Variable Frequency Drive
A VFD can be divided into two main sections:
A. Rectifier stage: A full-wave, solid-state rectifier converts three-phase 50 Hz power from a standard 208, 460, 575 V or higher utility supply to either a fixed or adjustable DC voltage.
B. Inverter stage: Electronic switches (power transistors or thyristors) switch the rectified DC voltage on and off, and produce a current or voltage waveform at the desired new frequency. The amount of distortion depends on the design of the inverter and filter.

III. SYSTEM SIMULATION
The proposed work is based on a current-source-inverter-fed induction motor scheme. At the front end a current-source rectifier is connected, which rectifies the 6.6 kV AC voltage to DC. For smoothing this voltage before applying it to the inverter, a DC choke coil is used, which removes the ripples. The inverter converts the DC voltage back into AC and supplies the induction motor. The switches used in the rectifier and inverter are GTOs and SCRs, which require triggering pulses; these are given by a discrete six-pulse generator connected to the gates of both the rectifier and the inverter, each section having six switching devices. Due to the switching processes, harmonics are produced in the system: the inverter output is AC but not sinusoidal, owing to the switching time taken by the switches, and is quasi-square in form, which is the main cause of harmonics. As six switches are used, the dominant harmful harmonics are the 5th and 7th, so the main focus is to reduce these orders by means of a low-pass LC filter with suitably selected inductor and capacitor values, i.e. a passive filter. The output of the induction motor is given to the bus bar, which shows the stator, rotor and mechanical quantities; since our main focus is on the stator-side current, we choose the stator quantities from the bus bar. A scope is connected to observe the waveforms. An FFT block is connected to the motor current of any one phase whose harmonic orders are to be found. To this FFT block an FFT spectrum window is connected, which displays the harmonic orders from 0 to the 19th, and a bar graph shows the magnitudes of these harmonics. The work is thus divided into two parts: before and after the use of the filter. After running the simulation it is observed that the 5th and 7th harmonic components are reduced relative to the case without the filter, as shown by the FFT spectrum block.



Fig.2: Simulation diagram of the CSI-fed induction motor drive.


Table: Induction motor specifications

    Motor supply voltage            6600 V
    Horse power rating of motor     200 HP
    Supply frequency                50 Hz
    Stator resistance Rs            1.485 Ω
    Stator inductance Ls            0.03027 H
    Pole pairs                      2
IV. HARMONICS
Harmonics are a major problem in any industrial drive. They cause serious problems in the motor connected as a load fed from the VFD; the VFD here is current-source-inverter (CSI) fed. As already described, the front-end current-source rectifier converts the 6.6 kV AC voltage to DC, a DC choke coil removes the ripples before the inverter, and the inverter converts the DC voltage back to AC to supply the induction motor, the GTO and SCR switches in each six-device section being triggered by the discrete six-pulse generator. Due to the switching processes, harmonics are produced in the system: the inverter output is AC but quasi-square rather than sinusoidal, owing to the switching times, and with six switches the dangerous harmonics are the 5th and 7th. The main focus is therefore to reduce these harmonic orders using a low-pass filter. Total harmonic distortion is the contribution of all the harmonic-frequency currents relative to the fundamental.
Table 3.2.1: Harmonics & Multiples of Fundamental Frequencies

Nonlinear loads such as AC-to-DC rectifiers produce distorted waveforms; harmonics are present in waveforms that are not perfect sine waves due to distortion from nonlinear loads. Around the 1830s the French mathematician Fourier discovered that a distorted periodic waveform can be represented as a series of sine waves, each an integer multiple of the fundamental frequency and each with a specific magnitude. For example, the 5th harmonic on a system with a 50 Hz fundamental waveform has a frequency of 5 times 50 Hz, or 250 Hz. These higher-order waveforms are called harmonics, and the collective sum of the fundamental and each harmonic is called a Fourier series. This series can be viewed as a spectrum analysis in which the fundamental frequency and each harmonic component are displayed [8]; the magnitudes in per unit are shown in the bar chart of Figure 3.

Figure 3: Harmonic orders in per unit with respect to the fundamental
Given that harmonic currents flow in an AC drive with a 6-pulse front end, let us address what problems, if any, this may cause. Power is only transferred through a distribution line when current is in phase with voltage; this is the very reason for concerns about input power factor. The displacement power factor of a motor running across the line is the cosine of the phase angle between the current and the voltage. Since a motor is an inductive load, the current lags the voltage by about 30 to 40 degrees when loaded, making the power factor about 0.75 to 0.8, as opposed to about 0.95 for many PWM AC drives. In the case of a resistive load the power factor would be 1, or unity, and all of the current flowing results in power being transferred. Poor power factor (less than unity) means that reactive current, which does not contribute power, is flowing.

CONTROL STRATEGIES

Induction motor control can be achieved with the help of variable frequency drives. High-power drives can be divided into subparts depending upon the area of application; the following chart shows the high-power drive schemes.

Figure 4. Chart showing types of VFD schemes

VFDs belong to a group of equipment called adjustable-speed drives or variable-speed drives (variable-speed drives can be electrical or mechanical, whereas VFDs are electrical). As noted above, adding a VFD to a motor-driven system can offer potential energy savings where the loads vary with time, since varying the frequency of the motor supply voltage allows continuous process speed control and closer matching of motor output to load.
OVERALL WORKING OF MODEL
The overall model works as described above: the front-end current-source rectifier converts the 6.6 kV AC supply to DC, a DC choke coil removes the ripples, and the current-source inverter feeds the induction motor, with the 5th and 7th harmonics produced by the six-switch GTO/SCR sections being attenuated by the passive LC filter. The stator current of one phase is taken from the bus bar, observed on a scope, and examined with an FFT block whose spectrum window displays the harmonic orders from 0 to the 19th together with a bar graph of their magnitudes. The work is divided into two parts, before and after the use of the filter; after running the simulation it is observed that the 5th and 7th harmonic components are reduced compared with the unfiltered case. The total harmonic distortion is also found by connecting a THD block available in the Simulink library, and it is found that the THD is likewise reduced after the use of the filter. A single-tuned filter can also be used, which takes care of only the one harmonic frequency component that is to be reduced. The FFT analysis can also be done using the powergui block, which contains an FFT tool; the motor current signal is imported into the workspace by connecting a simout block to the scope and selecting the 'structure with time' option. Further, the proposed work deals with wavelet analysis of the motor current signal, performed by two methods, the first by programming in an M-file. The program compares the motor current without and with the filter. Wavelet analysis uses a time-scaling technique, so the low- and high-frequency components of the motor current are compared; it is observed that the higher-frequency components become zero after the filter is used.
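As a sketch of how the values of a single-tuned passive branch can be selected, the Python snippet below fixes a capacitor and computes the inductor that tunes the branch to the 5th and 7th harmonics of a 50 Hz supply; the 100 µF capacitance is an assumed example value, not the one used in the Simulink model.

# Choose L for a single-tuned LC branch: resonance at f_tune = 1/(2*pi*sqrt(L*C)).
# The capacitor value is an assumed example, not the simulated design value.
import math

f1 = 50.0              # fundamental frequency, Hz
C = 100e-6             # assumed filter capacitance, farads
for n in (5, 7):       # the troublesome harmonic orders for a 6-pulse converter
    f_tune = n * f1    # 250 Hz and 350 Hz
    L = 1.0 / ((2 * math.pi * f_tune) ** 2 * C)
    print("n=%d: tune at %.0f Hz -> L = %.3f mH with C = 100 uF"
          % (n, f_tune, L * 1e3))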
FFT ANALYSIS OF MOTOR CURRENT
The FFT analysis of the motor current is done in two steps, i.e. without the filter and after the addition of the filter circuit. The output of the induction motor is given to the bus bar, which shows the stator, rotor and mechanical quantities; as our main focus is on the stator-side current, the stator quantities are taken from the bus bar and observed on a scope. An FFT block is connected to the motor current of one phase, and the connected FFT spectrum window displays the harmonic orders from 0 to the 19th together with a bar graph of their magnitudes. After running the simulation it is observed that the 5th and 7th harmonic components are reduced relative to the unfiltered case, the THD block from the Simulink library shows a corresponding reduction in total harmonic distortion, and the same analysis can be reproduced with the FFT tool contained in powergui.
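The same check can also be reproduced offline: the Python sketch below synthesizes an idealized six-step (quasi-square) phase current and uses NumPy's FFT to read off the harmonic magnitudes and a simple THD figure; the waveform is an idealization of the CSI output, not the simulated motor current.

# FFT of an idealized six-step (quasi-square) phase current: a 120-degree
# conduction waveform whose spectrum contains only 6k +/- 1 harmonics (5, 7, 11, ...).
import numpy as np

f1, fs, cycles = 50.0, 10000.0, 10            # fundamental, sample rate, record length
t = np.arange(0, cycles / f1, 1 / fs)
phase = (f1 * t) % 1.0
# +1 for 120 deg, 0 for 60, -1 for 120, 0 for 60 (idealized CSI phase current)
i = np.where(phase < 1/3, 1.0,
             np.where((phase >= 1/2) & (phase < 5/6), -1.0, 0.0))

spec = np.abs(np.fft.rfft(i)) / (len(i) / 2)  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(i), 1 / fs)
fund = spec[np.argmin(abs(freqs - f1))]
for n in (1, 5, 7, 11, 13):
    mag = spec[np.argmin(abs(freqs - n * f1))]
    print("harmonic %2d (%4.0f Hz): %.3f pu of fundamental" % (n, n * f1, mag / fund))
thd = np.sqrt(sum(spec[np.argmin(abs(freqs - n * f1))] ** 2
                  for n in range(2, 20))) / fund
print("THD (orders 2-19): %.1f %%" % (100 * thd))

For this idealized waveform the 5th and 7th harmonics appear at roughly 1/5 and 1/7 of the fundamental, which is why a 6-pulse converter makes these two orders the natural targets of the filter.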

Figure 5: Bar-graph showing magnitude of harmonics without filter
FFT analysis of motor current with LC filter
The FFT analysis of the motor current harmonics is repeated after adding the filter. An FFT block is connected to the motor current of one phase, as seen in the diagram, and the connected FFT spectrum window again displays the harmonic orders from 0 to the 19th, together with a bar graph of their magnitudes. As six switches are used in both the current-source rectifier and the current-source inverter, we are most concerned with the 5th and 7th order harmonics. After the simulation is run, the analysis of the 5th and 7th harmonic components shows their magnitudes to be 6.19 A and 6.18 A, respectively.


Figure 6. Bar-graph showing magnitude of harmonics with filter
VII. CONCLUSION
The simulation of the CSI-fed induction motor drive produced harmonics in the motor current. These harmonics are a by-product of the switching devices used in the rectifier and inverter sections. Of all the harmonic orders, the 5th and 7th cause the most trouble, since 6-pulse rectifier and inverter sections are used. For the reduction of harmonics we have used an LC filter with typical values of inductor and capacitor; the reduction of the 5th and 7th harmonic components is thus achieved by a passive filter.
REFERENCES:
1. K. H. J. Chong and R. D. Klug, "High power medium voltage drives," in Proc. PowerCon, vol. 1, pp. 658-664, Nov. 21-24, 2004.
2. Bin Wu, S. B. Dewan and G. R. Slemon, "PWM-CSI inverter induction motor drives," IEEE Trans. Industry Applications, vol. 28, no. 1, pp. 64-71, Jan. 1992.
3. P. M. Espelage and J. M. Nowak, "Symmetrical GTO current source inverter for wide speed range control of 2300 to 4160 volt, 350 to 7000 hp induction motors," IEEE IAS Annual Meeting, pp. 302-307, 1988.
4. M. Salo and H. Tuusa, "A vector-controlled PWM current-source-inverter fed induction motor drive with a new stator current control method," IEEE Trans. Ind. Electron., vol. 52, no. 2, pp. 523-531, Apr. 2005.
5. H. Karshenas, H. Kojori, and S. Dewan, "Generalized techniques of selective harmonic elimination and current control in current source inverters/converters," IEEE Trans. Power Electron., vol. 10, pp. 566-573, Sept. 1995.
6. B. Wu, G. R. Slemon, and S. B. Dewan, "Stability analysis of GTO-CSI induction machine drive using constant rotor frequency control," in Proc. 6th Int. Conf. Elect. Machines and Drives, pp. 576-581, 1993.
7. J. Espinoza and G. Joos, "On-line generation of gating signals for current source converter topologies," ISIE, pp. 674-678, 1993.





















Fitting Performance of Empirical and Theoretical Soil Water Retention
Functions and Estimation of Statistical Pore-Size Distribution-Based
Unsaturated Hydraulic Conductivity Models for Flood Plain Soils
Alka Ravesh¹, R. K. Malik²
¹Assistant Professor, Department of Applied Sciences, Savera Group of Institutions, Farrukhnagar, Gurgaon, Haryana, India
²Professor of Hydrology and Water Resources Engineering and Head, Department of Civil Engineering, Amity School of Engineering and Technology, Amity University, Gurgaon, Haryana, India
E-mail: rkmalik@ggn.amity.edu

Abstract: For identifying the soil water retention function with the best fitting performance, the empirical retention functions of Brooks-Corey and van Genuchten and the theoretical function of Kosugi were parameterized for the clay loam and silt loam flood plain soils. The parameters were optimized using the non-linear least-squares optimization technique as used in the RETC code, and these were then used in Mualem's statistical pore-size distribution-based unsaturated hydraulic conductivity models. It was observed that the log-normal function of Kosugi gave an excellent fitting performance, having the highest coefficient of determination and the lowest residual sum of squares. The physically based Kosugi function was followed by the empirical functions of van Genuchten and Brooks-Corey in fitting performance, respectively.
Keywords: Soil water retention functions, Brooks-Corey, van Genuchten, Kosugi, RETC computer code, parameterization, fitting performance, Mualem-based hydraulic conductivity models, model estimation.
INTRODUCTION
Modeling of water dynamics within the partially saturated soil profile of a specific textural class requires knowledge of the related soil hydraulic characteristics, viz. the soil water retention functions and soil hydraulic conductivity models, and has applications in analyzing the hydrological, environmental and solute transport processes within the soil profile. Different functions have been proposed by various investigators and were reviewed in [1]. For estimation of these functions, direct and indirect methods have been employed, and in 2005 these were discussed by Durner and Lipsius [2]. They reported that the direct measurement of unsaturated hydraulic conductivity is considerably more difficult and less accurate, and they suggested instead an indirect method using easily measured soil water retention data from which soil water retention functions can be developed. These retention functions, either empirical or theoretical expressions fitting the observed soil water retention data to different extents with a specific number of parameters, are then embedded into the statistical pore-size distribution-based relative hydraulic conductivity models to develop corresponding predictive theoretical unsaturated hydraulic conductivity models having the same parameters as the corresponding soil water retention functions, given the saturated hydraulic conductivity and the related tortuosity factor. The estimation of the parameters of the retention functions is therefore important. In 2012 Solone et al. [3] reported that the parameterization of the soil water retention functions can be obtained by fitting the function to the observed soil water retention data using least-squares non-linear fitting algorithms, by employing inverse methods in which the function parameters are iteratively changed so that a given selected function approximates the observed response, or by using pedotransfer functions, which are regression equations.

Scarce information is available about the parameterization of these functions and the extent of the fitting performance of the various empirical and theoretical soil water retention functions, and subsequently the hydraulic conductivity models based on these parameters need to be estimated for the flood plain soils, which consist mainly of clay loam and silt loam. So, in this study, the parameterization of empirical and theoretical soil water retention functions fitting the observed data of these soils has been carried out to identify suitable functions, and further to estimate the unsaturated hydraulic conductivity models based on the estimated parameters, so as to identify the appropriate models of unsaturated hydraulic conductivity for further use in the modeling of soil water dynamics.
Materials and Methods
Soil water retention data
The average soil water retention data [4] for soil water suction heads of 100, 300, 1000, 2000, 3000, 5000, 10000 and 15000 cm of different soil samples from the soil profiles (depth 150 cm) of silt loam (percentages of sand, silt and clay: 58.6, 21.9 and 14.6, respectively) and clay loam (percentages of sand, silt and clay ranging from 38.3 to 37.4, 20.5 to 24.3 and 34.2 to 37.6, respectively) soils of the flood plains of the seasonal river Ghaggar, flowing through a part of Rajasthan, were utilized for estimating the parameters of the soil water retention functions described below.
Soil water retention functions
The empirical soil water retention functions proposed by van Genuchten in 1980 [5], with shape parameters m and n either independent of each other or fixed (m = 1 − 1/n), and by Brooks-Corey in 1964 [6], and the statistical pore-size distribution-based soil water retention function of Kosugi in 1996 [7], were used for parameterization. Van Genuchten proposed the sigmoidal-shaped, continuous (smooth) five-parameter power-law function:

    θ(h) = θ_r + (θ_s − θ_r) / [1 + (α_VG h)ⁿ]^m        (1)

where θ is the soil water content at the soil water suction head h, and θ_r and θ_s are the residual and saturated soil water contents, respectively. The parameter α_VG is an empirical constant (L⁻¹). In this function the five unknown parameters are θ_r, θ_s, α_VG, n and m when the shape parameters n and m are independent of each other; when m is fixed (m = 1 − 1/n) the unknown parameters reduce to four. The dimensionless parameters n and m are related to the pore-size distribution and affect the shape of the function. However, Durner reported that the fixed-m constraint eliminates some of the flexibility of the function [8].
Brooks and Corey proposed the following empirical four-parameter power-law soil water retention function:

    θ(h) = θ_r + (θ_s − θ_r) (α_BC h)^(−λ_BC)        (2)

where α_BC is an empirical parameter (L⁻¹) which represents the desaturation rate of the soil water and is related to the pore-size distribution, and whose inverse is regarded as the height of the capillary fringe. The parameter λ_BC is the pore-size distribution index affecting the slope of this function, and it characterizes the width of the pore-size distribution. In this function the four unknown parameters are θ_r, θ_s, α_BC and λ_BC.
In 1994, Kosugi [9] assumed that the soil pore radius is a log-normal random variable and, based on this hypothesis, derived a physically based three-parameter model for the soil water retention function, the three parameters being the mean and variance of the pore-size distribution and the maximum pore radius. In the limiting case where the maximum pore radius becomes infinite, the three-parameter model simplifies to a two-parameter model, and based on this simplification Kosugi in 1996 improved the function by developing a physically based (theoretical) two-parameter log-normal analytical model for the soil water retention, based on the log-normal distribution density function of the pore radius:

    θ(h) = θ_r + (θ_s − θ_r) · (1/2) erfc{[ln(h) − ln(h_m)]/(√2 σ)}        (3)

where the parameters ln(h_m) and σ denote the mean and standard deviation of ln(h), respectively, and erfc denotes the complementary error function [10].
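For reference, Eqs. (1)-(3) can be transcribed directly into Python; the sketch below is a plain transcription of the three retention functions (using SciPy's complementary error function). In the example call, h_m and σ are the Table 1 Kosugi values for the clay loam, while θ_r and θ_s are assumed placeholder values, since the optimized water contents are not listed here.

# Direct transcriptions of Eqs. (1)-(3); all parameters are supplied by the
# caller (e.g. the optimized values of Table 1).
import numpy as np
from scipy.special import erfc

def van_genuchten(h, theta_r, theta_s, alpha, n, m=None):
    if m is None:                 # fixed-shape variant m = 1 - 1/n
        m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.asarray(h)) ** n) ** m

def brooks_corey(h, theta_r, theta_s, alpha, lam):
    # Eq. (2) holds on the drainage branch alpha*h >= 1; theta = theta_s otherwise.
    h = np.asarray(h, dtype=float)
    se = np.where(alpha * h > 1.0, (alpha * h) ** (-lam), 1.0)
    return theta_r + (theta_s - theta_r) * se

def kosugi(h, theta_r, theta_s, hm, sigma):
    z = np.log(np.asarray(h, dtype=float) / hm) / (sigma * np.sqrt(2.0))
    return theta_r + (theta_s - theta_r) * 0.5 * erfc(z)

# Example: Kosugi curve for the clay loam (h_m, sigma from Table 1;
# theta_r and theta_s are assumed placeholders).
h = np.array([100.0, 1000.0, 15000.0])
print(kosugi(h, theta_r=0.0, theta_s=0.45, hm=507.29, sigma=4.4236))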
Parameter estimation of soil water retention functions
For estimation of the unknown parameters of these functions, the RETC (RETention Curve) computer code [11] was used, utilizing the soil water retention data only. The unknown parameters were represented by a vector b consisting of θ_r, θ_s, α_VG, n and m for the van Genuchten function with independent shape parameters; θ_r, θ_s, α_VG and n for fixed shape parameters; θ_r, θ_s, α_BC and λ_BC for the Brooks-Corey function; and θ_r, θ_s, h_m and σ for the Kosugi function. These parameters were optimized iteratively by minimizing the residual sum of squares (RSS) between the observed and fitted soil water retention data θ(h); the RSS was taken as the objective function O(b), which was minimized by means of a weighted non-linear least-squares optimization approach based on the Marquardt-Levenberg maximum neighborhood method [12]:

    O(b) = Σ_{i=1..N} w_i (θ_i − θ̂_i)²        (4)

where θ_i and θ̂_i are the observed and fitted soil water contents, respectively, and N is the number of soil water retention points, equal to 8 in this analysis. The weighting factors w_i, which reflect the reliability of the individual measured data, were set equal to unity in this analysis, as the reliability of all the measured soil water retention data was considered equal. A set of appropriate initial estimates of the unknown parameters was used so that the minimization process converges after a certain number of iterations to the optimized values of these parameters.

The goodness of fit of the observed and fitted data was characterized by the coefficient of determination (r²), which measures the relative magnitude of the total sum of squares associated with the fitted function:

    r² = Σ_{i=1..N} (θ̂_i − θ̄)² / Σ_{i=1..N} (θ_i − θ̄)²        (5)

where θ̄ is the mean of the observed soil water content data.

The soil water retention functions for these soils were ranked in order of superior fitting performance, i.e. comparatively higher coefficient of determination (r²) and lower residual sum of squares (RSS) between the observed and predicted soil water retention data.
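The weighted least-squares fit of Eqs. (4)-(5) can be approximated with SciPy's non-linear least-squares curve_fit (Levenberg-Marquardt, or a trust-region variant when bounds are supplied), as sketched below for the fixed-shape van Genuchten function; the eight retention points are invented placeholders, not the measured data of [4].

# Least-squares fit of the fixed-shape van Genuchten function (Eqs. 1, 4, 5)
# with scipy.optimize.curve_fit. The data points below are placeholders only.
import numpy as np
from scipy.optimize import curve_fit

def vg_fixed(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.array([100., 300., 1000., 2000., 3000., 5000., 10000., 15000.])  # cm
theta = np.array([0.38, 0.33, 0.28, 0.26, 0.24, 0.22, 0.20, 0.19])      # placeholder

p0 = [0.05, 0.45, 0.01, 1.25]                     # initial estimates of b
popt, _ = curve_fit(vg_fixed, h, theta, p0=p0,
                    bounds=([0.0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 3.0]))
fitted = vg_fixed(h, *popt)
rss = np.sum((theta - fitted) ** 2)               # Eq. (4) with w_i = 1
r2 = (np.sum((fitted - theta.mean()) ** 2)
      / np.sum((theta - theta.mean()) ** 2))      # Eq. (5)
print("theta_r, theta_s, alpha, n =", np.round(popt, 5))
print("RSS = %.5f, r2 = %.4f" % (rss, r2))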
Estimation of hydraulic conductivity models
For predicting the unsaturated hydraulic conductivity from the measured soil water retention data, approaches were developed based on the capillary-bundle theory by Childs and Collis-George in 1950 [13], Burdine in 1953 [14] and Mualem in 1976 [15]. In this analysis the widely used Mualem approach was adopted.

Mualem developed a relative hydraulic conductivity model based on the capillary theory, which assumes that the pore radius is inversely proportional to the suction head h at which the pore drains, and conceptualized the pores as pairs of capillary tubes whose lengths are proportional to their radii, the conductance of each capillary-tube pair being determined according to Poiseuille's law (which states that the flow rate per unit cross-sectional area of a capillary tube is proportional to the square of its radius). He derived the model for the prediction of the relative unsaturated hydraulic conductivity from the soil water retention function, incorporating a statistical model based on the assumptions that pores of a particular radius are randomly distributed in the porous medium and that the average flow velocity is given by the Hagen-Poiseuille formulation. The relative hydraulic conductivity model is:

    K_r(h) = S_e^l · [ ∫_{θ_r}^{θ} h⁻¹(θ) dθ / ∫_{θ_r}^{θ_s} h⁻¹(θ) dθ ]²        (6)
where S_e [= (θ − θ_r)/(θ_s − θ_r)] is the dimensionless effective saturation and l is the tortuosity factor. K_r (= K/K_s) is the relative unsaturated hydraulic conductivity and K_s is the saturated hydraulic conductivity, measured independently. Black reported that the Mualem model for predicting the relative hydraulic conductivity from the behaviour of the measured soil water retention data is the one most commonly employed to obtain closed-form analytical expressions of unsaturated hydraulic conductivity [16].
Coupling the Brooks-Corey soil water retention function with the Mualem model of relative hydraulic conductivity, the corresponding h-based relative hydraulic conductivity function is expressed as:

    K_r(h) = (α_BC h)^(−[λ_BC(l+2)+2])        (7)
For developing the closed-form model of the hydraulic conductivity, the van Genuchten soil water retention function was coupled with the relative hydraulic conductivity model of Mualem; the condition of the fixed shape parameter m = 1 − 1/n needs to be satisfied to obtain the closed form. Embedding the soil water retention function of van Genuchten into the Mualem model results in the following corresponding h-based relative hydraulic conductivity model in closed form for the condition m = 1 − 1/n:

    K_r(h) = {1 − (α_VG h)^(n−1) [1 + (α_VG h)ⁿ]^(−m)}² / [1 + (α_VG h)ⁿ]^(m·l)        (8)
Kosugi developed a two-parameter hydraulic conductivity model by using the corresponding soil water retention function in the Mualem model:

    K_r(h) = S_e^l · { (1/2) erfc[ ln(h/h_m)/(√2 σ) + σ/√2 ] }²        (9)
The value of the tortuosity factor l = 0.5, as reported by Mualem, was used in this analysis. The optimized parameters were used in these hydraulic conductivity models for estimation of the unsaturated hydraulic conductivity of these soils for further use in modeling the soil water dynamics.
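Transcribing Eqs. (7)-(9) gives the following Python evaluators for the three Mualem-based relative conductivity models, with the tortuosity factor defaulting to l = 0.5; the parameter values in the example calls are taken from Table 1.

# Relative hydraulic conductivity models of Eqs. (7)-(9), tortuosity l = 0.5.
import numpy as np
from scipy.special import erfc

L = 0.5  # Mualem's tortuosity factor, as used in this analysis

def kr_brooks_corey(h, alpha, lam, l=L):
    # Eq. (7): valid on the drainage branch alpha*h >= 1.
    return (alpha * np.asarray(h)) ** -(lam * (l + 2.0) + 2.0)

def kr_van_genuchten(h, alpha, n, l=L):
    # Eq. (8), closed form with m = 1 - 1/n.
    m = 1.0 - 1.0 / n
    ah_n = (alpha * np.asarray(h)) ** n
    num = (1.0 - (alpha * np.asarray(h)) ** (n - 1.0) * (1.0 + ah_n) ** (-m)) ** 2
    return num / (1.0 + ah_n) ** (m * l)

def kr_kosugi(h, hm, sigma, l=L):
    # Eq. (9); S_e follows from Eq. (3) at the same suction head.
    z = np.log(np.asarray(h) / hm) / (np.sqrt(2.0) * sigma)
    se = 0.5 * erfc(z)
    return se ** l * (0.5 * erfc(z + sigma / np.sqrt(2.0))) ** 2

# Example: K_r at h = 1000 cm with the clay loam parameters of Table 1.
print(kr_brooks_corey(1000.0, alpha=0.00730, lam=0.21325))
print(kr_kosugi(1000.0, hm=507.29, sigma=4.4236))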


Results and Discussion
It is observed from Table 1 that the clay loam flood plain soil, having comparatively more clay content, has a lower value of α than the silt loam flood plain soil, indicating a greater height of the capillary fringe in the clay loam soil, since the inverse of α represents the height of the capillary fringe. Kalane et al. also observed a greater height of the capillary fringe as the clay content in the soil increases [17]. The values of λ for the clay loam and silt loam flood plain soils were observed to be more or less the same, i.e. 0.21325 and 0.2025, respectively, indicating that the slope of the soil water retention curve is more or less the same for these soils.

In 2002, Kosugi et al. reported that theoretically the λ_BC value approaches infinity for a porous medium with a uniform pore-size distribution, whereas it approaches a lower limit of zero for soils with a wide range of pore sizes [18]. They reported λ_BC values in the range 0.3 to 10.0, while in 2013 Szymkiewicz reported that these values generally range from 0.2 to 5.0 [19]. Zhu and Mohanty [20] also reported that the soil water retention function of Brooks and Corey was successfully used to describe the soil water retention data for relatively homogeneous soils, which have a narrow pore-size distribution, with a value of λ_BC = 2. Nimmo [21] reported that a medium with many large pores will have a retention curve that drops rapidly to low water content even at low suction head; conversely, a fine-pored medium will retain water even at high suction and so will have a flatter retention curve.
Table 1. Optimized parameters of the soil water retention functions for the clay loam and silt loam flood plain soils.

Flood plain soil (clay loam)
    Soil water retention function         α (1/cm)   n or λ    m
    Brooks-Corey                          0.00730    0.21325   -
    van Genuchten (independent m, n)      0.00799    1.005     0.2427
    van Genuchten (fixed m = 1 − 1/n)     0.00746    1.2378    -
    Kosugi                                h_m = 507.29 cm, σ = 4.4236

Flood plain soil (silt loam)
    Brooks-Corey                          0.0219     0.2025    -
    van Genuchten (independent m, n)      0.00497    1.005     0.2690
    van Genuchten (fixed m = 1 − 1/n)     0.00521    1.2575    -
    Kosugi                                h_m = 1232.78 cm, σ = 3.6815

In the van Genuchten function, when the factor one is disregarded ((α_VG h)ⁿ >> 1), it reduces to a limiting case approximating the Brooks-Corey function, and the product of m and n in the van Genuchten function becomes equal to λ_BC of the Brooks-Corey function. The product of m and n remains constant, so if n is increased then m must be simultaneously decreased. For the fixed case, i.e. m = 1 − 1/n, the parameter λ_BC should equal n − 1. The properties of the soil media that are described by the two parameters (α_BC, λ_BC) in the Brooks-Corey model are described by three parameters (α_VG, n, m) in the van Genuchten model. From Table 1 it is observed that, for the van Genuchten function with both independent shape parameters (m, n) and fixed shape parameters (m = 1 − 1/n), the value of α_VG was higher for the clay loam soil (fine-textured) than for the silt loam soil, which is comparatively medium-textured. The same observation was reported by Jauhiainen [22].
It is observed from Table 2 that the log-normal function of Kosugi gave an excellent description of the observed soil water retention data, having the highest r² = 0.9969 and the lowest RSS = 0.00016 for the clay loam soil, and r² = 0.9932 and RSS = 0.00033 for the silt loam soil, followed by the van Genuchten function with independent shape parameters, which yielded r² = 0.9929 and RSS = 0.00038 for the clay loam soil and r² = 0.9864 and RSS = 0.00066 for the silt loam soil. Among the van Genuchten functions, the function with fixed shape parameters yielded a higher RSS (by 13.16 to 15.15 percent) for these soils. The non-linear least-squares fitting of the Brooks-Corey function resulted in the lowest r² = 0.9881 and the highest RSS = 0.00063 for the clay loam, and r² = 0.9724 and RSS = 0.00135 for the silt loam flood plain soils, showing that the Brooks-Corey function followed the van Genuchten function in fitting performance; for these soils it performed comparatively better in the clay loam. All the soil water retention functions fitted the clay loam flood plain soil comparatively better than the silt loam flood plain soil.
Table 2. Statistics of the fitting performance of the soil water retention functions.

                                      Flood plain soil (clay loam)    Flood plain soil (silt loam)
Soil water retention function         RSS (×10^-5)    r²              RSS (×10^-5)    r²
Brooks-Corey                          63              0.9881          135             0.9724
Van Genuchten (independent m, n)      38              0.9929          66              0.9864
Van Genuchten (fixed m = 1 - 1/n)     43              0.9919          76              0.9843
Kosugi                                16              0.9969          33              0.9932

The physically-based log-normal function of Kosugi gave the best fitting performance, followed by the empirical van Genuchten and Brooks-Corey functions in order of superior fitting performance, for embedding in the statistical pore-size distribution-based Mualem relative hydraulic conductivity model to develop the unsaturated hydraulic conductivity function for modeling soil water dynamics in these flood plain soils. The log-normal function of Kosugi has the merit of being a theoretically derived function, and therefore the physical meaning of each parameter is clearly defined.
However, for optimizing the parameters of the soil water retention functions, the number of fitted parameters must be reduced in order to minimize the non-uniqueness of the optimized parameters, and efforts should be made to independently measure parameters such as the saturated soil water content θ_s. An assumed value of the residual soil water content θ_r can also be used, as its measurement is extremely difficult in the laboratory. This further reduces the number of parameters to be optimized. It is also observed that the soil water retention functions under study predict an infinite value of the soil water suction head (h) as the effective saturation (S_e) approaches zero, which is not consistent with the fact that even under oven-dry conditions the soil water suction has a finite value. Therefore, these functions should be used in the range of effective saturation significantly larger than zero.
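To make the non-linear least-squares fitting concrete, the sketch below fits the van Genuchten effective-saturation function with SciPy's curve_fit. The retention data points, initial guesses and bounds are illustrative assumptions, not the paper's measurements, and the RETC code itself uses Marquardt's algorithm [12] rather than this exact routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def vg_se(h, alpha, n, m):
    # van Genuchten effective saturation: Se = [1 + (alpha*h)^n]^(-m)
    return (1.0 + (alpha * h) ** n) ** (-m)

# hypothetical retention data: suction head h (cm) vs effective saturation Se
h = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0, 10000.0])
se = np.array([0.98, 0.93, 0.82, 0.68, 0.52, 0.38, 0.25])

popt, _ = curve_fit(vg_se, h, se, p0=[0.008, 1.2, 0.24], bounds=(0.0, np.inf))
rss = float(np.sum((se - vg_se(h, *popt)) ** 2))        # residual sum of squares
r2 = 1.0 - rss / float(np.sum((se - se.mean()) ** 2))   # coefficient of determination
print(f"alpha = {popt[0]:.5f} 1/cm, n = {popt[1]:.4f}, m = {popt[2]:.4f}")
print(f"RSS = {rss:.5f}, r2 = {r2:.4f}")
```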
Conclusion
The parameters of the empirical soil water retention functions of Brooks-Corey and van Genuchten and the theoretical soil water retention function of Kosugi were optimized using the non-linear least-squares optimization algorithm used in the RETC computer code for the clay loam and silt loam flood plain soils. These parameters were used in Mualem's statistical pore-size distribution-based model for estimation of the corresponding unsaturated hydraulic conductivity models. The log-normal function of Kosugi gave an excellent fitting performance, with the highest coefficient of determination and the lowest residual sum of squares for these soils. The physically-based Kosugi function was followed by the empirical functions of van Genuchten and Brooks-Corey in their fitting performances. It is proposed that the theoretical Kosugi model of unsaturated hydraulic conductivity can be used for mathematical simulation studies of soil water dynamics.

REFERENCES:
[1] Leij, F.J., Russell, W.B., and Lesch, S.M. "Closed-form expressions for water retention and conductivity data." Ground Water. 35(5): 848-858. 1997.
[2] Durner, W. and Lipsius, K. Encyclopedia of Hydrological Sciences (Ed. M.G. Anderson). John Wiley & Sons Ltd. 2005.
[3] Solone, R., Bittelli, M., Tomei, F. and Morari, F. "Errors in water retention curves determined with pressure plates: Effects on the soil water balance." J. Hydrology. 470-471: 65-74. 2012.
[4] Yadav, B.S., Verma, B.L. and Deo, R. "Water Retention and Transmission Characteristics of Soils in Command Area of North-Western Rajasthan." J. Ind. Soc. Soil Sci. 43(1): 1-5. 1995.
[5] Van Genuchten, M.Th. "A closed-form equation for predicting the hydraulic conductivity of unsaturated soils." Soil Sci. Soc. Am. J. 44: 892-898. 1980.
[6] Brooks, R.H. and Corey, A.T. "Hydraulic properties of porous media." Hydrology Paper No. 3, Colorado State University, Fort Collins, Colorado. 1964.
[7] Kosugi, K. "Lognormal distribution model for unsaturated soil hydraulic properties." Water Resour. Res. 32(9): 2697-2703. 1996.
[8] Durner, W. "Hydraulic conductivity estimation for soils with heterogeneous pore structure." Water Resour. Res. 30(2): 211-223. 1994.
[9] Kosugi, K. "Three-parameter lognormal distribution model for soil water retention." Water Resour. Res. 30(4): 891-901. 1994.
[10] Abramowitz, M. and Stegun, I.A. Handbook of Mathematical Functions. Dover, New York. 1972.
[11] Van Genuchten, M.Th., Leij, F.J. and Yates, S.R. "The RETC code for quantifying the hydraulic functions of unsaturated soils." Res. Rep. 600/2-91/065, USEPA, Ada, OK. 1991.
[12] Marquardt, D.W. "An algorithm for least-squares estimation of non-linear parameters." J. Soc. Ind. Appl. Math. 11: 431-441. 1963.
[13] Childs, E.C. and Collis-George, N. "The permeability of porous materials." Soil Sci. 50: 239-252. 1950.
[14] Burdine, N.T. "Relative permeability calculations from pore size distribution data." Trans. Amer. Inst. Mining, Metallurgical, and Petroleum Engrs. 198: 71-78. 1953.
[15] Mualem, Y. "A new model for predicting the hydraulic conductivity of unsaturated porous media." Water Resour. Res. 12(3): 513-522. 1976.
[16] Black, P.B. "Three functions that model empirically measured unfrozen water content data and predict relative hydraulic conductivity." CRREL Report 90-5. U.S. Army Corps of Engineers, Cold Regions Research and Engineering Laboratory. 1990.
[17] Kalane, R.L., Oswal, M.C. and Jagannath. "Comparison of theoretically estimated flux and observed values under shallow water table." J. Ind. Soc. Soil Sci. 42(2): 169-172. 1994.
[18] Kosugi, K., Hopmans, J.W. and Dane, J.H. "Water Retention and Storage - Parametric Models." In Methods of Soil Analysis, Part 4: Physical Methods. (Eds. J.H. Dane and G.C. Topp) pp. 739-758. Book Series No. 5. Soil Sci. Soc. Amer., Madison, USA. 2002.
[19] Szymkiewicz, A. "Chapter 2: Mathematical Models of Flow in Porous Media." In Modeling Water Flow in Unsaturated Porous Media: Accounting for Nonlinear Permeability and Material Heterogeneity. Springer. 2013.
[20] Zhu, J. and Mohanty, B.P. "Effective hydraulic parameters for steady state vertical flow in heterogeneous soils." Water Resour. Res. 39(8): 1-12. 2003.
[21] Nimmo, J.R. "Unsaturated zone flow processes." In Anderson, M.G. and Bear, J. (Eds.) Encyclopedia of Hydrological Sciences, Part 13 - Groundwater. Chichester, UK: Wiley, v. 4, pp. 2299-2322. 2005.
[22] Jauhiainen, M. "Relationships of particle size distribution curve, soil water retention curve and unsaturated hydraulic conductivity and their implications on water balance of forested and agricultural hill slopes." Ph.D. Thesis. Helsinki University of Technology. pp. 167. 2004.

Information Security in Cloud
Divisha Manral¹, Jasmine Dalal¹, Kavya Goel¹
¹Department of Information Technology, Guru Gobind Singh Indraprastha University
E-Mail- divishamanral@gmail.com

ABSTRACT - With the advent of the internet, security became a major concern, as every piece of information was vulnerable to a number of threats. The cloud is a kind of centralized database where many clients store, retrieve and possibly modify data. Cloud computing is an environment which enables convenient and efficient access to a shared pool of configurable computing resources. However, data stored and retrieved in such a way may not be fully trustworthy. The range of this study encompasses an intricate review of various information security technologies and the proposal of an efficient system for ensuring information security on cloud computing platforms. Information security has become critical not only to personal computers but also to corporate organizations and government agencies, given that organizations these days rely extensively on the cloud for collaboration. The aim is to develop a secure system using encryption mechanisms that allow a client's data to be transformed into unintelligible data for transmission.

Keywords Cloud, Symmetric Key, Data storage, Data retrieval, Decryption, Encryption, security

I. INTRODUCTION

Information security means protecting the database from destructive forces and the actions of unauthorized users, and guarding the information from malicious modification, leakage, loss or disruption. The world is becoming more interconnected with the advent of the Internet and new networking technology. Information security [1] is becoming of great importance because of intellectual property that can be easily acquired. There have been numerous cases of breaches in security resulting in the leakage or unauthorized access of information worth a fortune. In order to keep information systems free from threats, analysts employ both network and data security technologies.

Cloud computing is a model which provides a wide range of applications under different topologies, and every topology derives some new specialized protocols. This promising technology is literally called Cloud Data Security. It is the next-generation computing platform that provides dynamic resource pools, virtualization and high availability.

II. INFORMATION SECURITY TECHNOLOGY

A. ENCRYPTION

In cryptography [2], encryption is the process of encoding messages in such a way that eavesdroppers or hackers cannot read them, but authorized parties can. In an encryption scheme, information is encrypted using an encryption algorithm, turning it into an unreadable cipher text. This is usually done with the use of an encryption key, which specifies how the message is to be encoded. Any adversary that can see the cipher text should not be able to determine anything about the original message. An authorized party, however, is able to decode the cipher text using a decryption algorithm, which usually requires a secret decryption key to which adversaries do not have access. For technical reasons, an encryption scheme usually needs a key-generation algorithm to randomly produce keys. There are two basic types of encryption schemes: symmetric-key and public-key encryption [3].


B. SYMMETRIC-KEY CRYPTOGRAPHY

An encryption system in which the sender and receiver of a message share a single, common key that is used to encrypt and decrypt the message. Contrast this with public-key cryptography, which utilizes two keys: a public key to encrypt messages and a private key to decrypt them. Symmetric-key systems are simpler and faster, but their main drawback is that the two parties must somehow exchange the key in a secure way. Public-key encryption avoids this problem because the public key can be distributed in a non-secure way, and the private key is never transmitted.


Figure 1: Cryptography Model using Symmetric Key
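As an illustration of the shared-key idea above, the following minimal sketch uses the third-party Python cryptography package (its Fernet recipe, an AES-based symmetric scheme); the package choice is ours and is not prescribed by this paper.

```python
# Minimal symmetric-key encryption/decryption sketch using the
# 'cryptography' package's Fernet recipe (an AES-based scheme).
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # single shared secret key
cipher = Fernet(key)

token = cipher.encrypt(b"client data")   # unreadable cipher text
plain = cipher.decrypt(token)            # only holders of 'key' can do this
assert plain == b"client data"
```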

C. HARDWARE BASED MECHANISM

Hardware-based or assisted computer security offers an alternative to software-only computer security. Devices such as dongles may be considered more secure due to the physical access required in order to compromise them. Working of hardware-based security: a hardware device allows a user to log in, log out and set different privilege levels by performing manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of a user of the device is read both by a computer and by controllers in peripheral devices such as hard disks.

D. DATA ERASURE

Data erasure is the process of permanently erasing data from disk media. It is not the same as file deletion. File deletion and removal of the Volume Table of Contents (VTOC) simply erase the pointers to the data stored on the media so that the data is not viewable in directories; they do not physically erase the data from the media. Many firms physically destroy hard drives or use various software utilities to erase data using these methodologies. However, such solutions can be inadequate and can potentially lead to data breaches, public disclosure and, ultimately, unplanned expenses.



E. DATA MASKING

Data masking technology provides data security by replacing sensitive information with a non-sensitive proxy, but doing so in such a way that the copy looks and acts like the original. This means non-sensitive data can be used in business processes without changing the supporting applications or data storage facilities. You remove the risk without breaking the business! In the most common use case, masking limits the propagation of sensitive data within IT systems by distributing surrogate data sets for testing and analysis. In other cases, masking will dynamically provide masked content if a user's request for sensitive information is deemed risky.

III. CLOUD

For some computer owners, finding enough storage space to hold all the data they've acquired is a real challenge. Some
people invest in hard drives. Others prefer external storage devices like pen drives or compact discs. Desperate computer
owners might delete entire folders worth of old files in order to make space for new information. But some are choosing
to rely on a growing trend: cloud storage. Cloud computing encompasses a large number of computers connected through
a real-time communication network such as the Internet. It is a type of computing that relies on sharing computing
resources rather than having local servers or personal devices to handle applications. Cloud computing allows consumers
and businesses to use applications without installation and access their personal files at any computer with internet access.

A. CLOUD STORAGE

A basic cloud storage system needs just one data server connected to the Internet. A client sends copies of files over the Internet to the
data server, which then records the information. When the client wishes to retrieve the information, he accesses the data server
through a Web-based interface. The server then either sends the files back to the client or allows the client to access and manipulate
the files on the server itself. Cloud storage systems generally rely on hundreds of data servers. Because computers occasionally require
maintenance or repair, it's important to store the same information on multiple machines. This is called redundancy. Without
redundancy, a cloud storage system couldn't ensure clients that they could access their information at any given time. Most systems
store the same data on servers that use different power supplies. That way, clients can access their data even if one power supply fails [4].


B. ADVANTAGES

Efficient storage and collaboration
Easy Information Sharing
Highly reliable and redundant
Widespread availability
Inexpensive

C. DISADVANTAGES
Possible downtime
Security issues [5]
Compatibility
Unpredicted costs
Internet Dependency


IV. PROPOSED SYSTEM

Information security is the most important criterion for any data owner, as the data stored on the cloud will be accessible not only to him but to many other cloud users. The following proposed system provides a secure yet flexible information security mechanism which can be implemented easily at the time of data storage as well as data retrieval over the cloud. The concept of a symmetric key is used, where only the data owner, the data retriever and the third party auditor will have access to the keys. Double encryption is also used to make the system more secure.

A. DATA STORAGE

The proposed system for data storage is flexible, as the encryption algorithms used will be of the choice of the user. The model constitutes two major stages.
The first stage starts when the data owner uploads the data to the center. The owner will be asked to choose from a list of available algorithms (Encryption Algorithm 1) or upload his own algorithm to encrypt the data. This will lead to the creation of the cipher text along with the primary key (Key 1). The final step of the first stage is the transfer of the cipher text onto the cloud.
The second stage starts with the encryption of Key 1, where again the data owner is asked to choose from a list of available algorithms (Encryption Algorithm 2) or upload his own algorithm to encrypt the key and create the secondary key (Key 2). The center then shares Key 2 with the third party auditor for future verification. The auditor can verify the data, and keep track of the shared keys only. [6]


Figure 2: Proposed Data Storage Model


B. DATA RETRIEVAL

Data retrieval poses a bigger problem than data storage in cloud computing.
In this proposed model, the data retriever has to obtain data access permission from the data owner by sending a data access request. If the request is accepted by the data owner, he sends the secondary key (Key 2) and the information for further decryption, i.e. which decryption algorithms are to be used for decrypting and retrieving the final plain text.
The data retriever sends a data request to the third party auditor. The auditor verifies the key sent by the retriever against the database present with him; if the keys match, the retriever is allowed to take the cipher text from the cloud data storage.
The information given by the data owner to the retriever helps in decrypting Key 2 into Key 1 using decryption algorithm 2. With Key 1 in hand, the cipher text can be decrypted using decryption algorithm 1 into the final plain text, which can be used by the retriever.
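A minimal sketch of the proposed storage-and-retrieval flow is given below, again using Fernet for both encryption algorithms. The variable names and the key used by Encryption Algorithm 2 (wrap_key) are our additions, since the paper leaves the choice of algorithms and their key handling to the data owner.

```python
from cryptography.fernet import Fernet

# Stage 1: the data owner encrypts the data with the primary key (Key 1).
key1 = Fernet.generate_key()
cipher_text = Fernet(key1).encrypt(b"owner data")   # uploaded to the cloud

# Stage 2: Key 1 is itself encrypted to produce the secondary key (Key 2).
# 'wrap_key' stands in for whatever Encryption Algorithm 2 uses; the paper
# leaves this choice to the owner.
wrap_key = Fernet.generate_key()
key2 = Fernet(wrap_key).encrypt(key1)    # Key 2, shared with the auditor

# Retrieval: with Key 2 and the owner's decryption information (here,
# wrap_key), the retriever recovers Key 1 and then the plain text.
key1_recovered = Fernet(wrap_key).decrypt(key2)
plain = Fernet(key1_recovered).decrypt(cipher_text)
assert plain == b"owner data"
```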


Figure 3: Proposed Data Retrieval Model

C. BENEFITS

The model proposed in this paper is highly secure because of the use of the double encryption technique. The secondary key can be accessed by the data owner, the data retriever and the third party auditor, but this only gives access to the cipher text; hence even the third party auditor does not have direct access to the data. No one can use the data unless he has been given the information by the data owner about decrypting the secondary key into the primary key and further using it to regain access to the plain text.
The proposed model is flexible, since it does not place any constraints on the use of cryptography algorithms; the data owner is allowed to choose from a list of algorithms or given a choice to use his own algorithm for the encryption process.
The proposed model uses the symmetric key cryptography technique; symmetric key encryption is faster than asymmetric encryption. The model encrypts plain text easily and produces cipher text in less time.
The data is stored as cipher text in the cloud, so even if an attacker hacks into the cloud system and gains access to the data stored there, he cannot decrypt it and use it further, making the data stored in the cloud more secure and less vulnerable to threats.

V. CONCLUSION

Cloud computing is one of the most booming technologies in the world right now, but it is facing many data security threats and challenges. With the help of the proposed system, which incorporates the use of the double key encryption technique and a symmetric cryptography algorithm, one can manage to keep one's data securely in the cloud. It provides high speed and security in the cloud environment. The proposed system aims to achieve goals like confidentiality, data integrity and authentication in a simple manner without compromising on security issues.

REFERENCES:

[1] Aceituno, V., "Information Security Paradigms," ISSA Journal, September 2005.
[2] Goldreich, Oded, Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, 2004.
[3] Bellare, Mihir, "Public-Key Encryption in a Multi-user Setting: Security Proofs and Improvements," Springer Berlin Heidelberg, 2000, page 1.
[4] Herminder Singh & Babul Bansal, "Analysis of Security Issues And Performance Enhancement In Cloud Computing," International Journal of Information Technology and Knowledge Management, Volume 2, No. 2, pp. 345-349, July-December 2010.
[5] B. Rajkumar, C. Yeo, S. Venugopal, S. Malpani, "Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility."
[6] Snsha Vijayaraghavan, K. Kiruthiga, B. Pattatharasi and S. Sathiskumar, "Map-Reduce Function For Cloud Data Storage and Data Integrity Auditing By Trusted TPA," International Journal of Communications and Engineering, Vol. 05, No. 5, Issue 03, pp. 26-32, March 2012.

Aerodynamic Characteristics of G16 Grid Fin Configuration at Subsonic and Supersonic Speeds
Prashanth H S¹, Prof. K S Ravi², Dr G B Krishnappa³
¹M.Tech Student, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
²Associate Professor, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
³Professor and HOD, Department of Mechanical Engineering, Vidyavardaka College of Engineering, Mysore, Karnataka
e-mail: hsprashanth63@gmail.com, Phone No: +91 9916886610

Abstract: Grid fins (lattice fins) are used as lifting and control surfaces for highly maneuverable missiles in place of more conventional control surfaces, such as planar fins. Grid fins also find application in air-launched sub-munitions. Their main advantages are a low hinge moment requirement and good high-angle-of-attack performance characteristics. In this paper, one such grid fin configuration, named the G16 grid fin, was taken up for CFD analysis. The G16 fin was studied under standalone conditions at Mach numbers of 0.7 and 2.5 for different angles of attack (AOA) from 0° to 30°. The aerodynamic characteristics were plotted and discussed.



Keywords: Grid fins, Lift and Drag, Angle of Attack, ANSYS
I. INTRODUCTION

In a modern military, a missile is a self-propelled guided weapon system. Missiles have four system components: targeting and/or
guidance, flight system, engine, and warhead. Missiles come in types adapted for different purposes: surface-to-surface and air-to-
surface (ballistic, cruise, anti-ship, anti-tank), surface-to-air (anti-aircraft and antiballistic), air-to-air and anti-satellite missiles.

Grid fins (or lattice fins) [1] are a type of flight control surface used on missiles and bombs in place of more conventional control surfaces, such as planar fins. A grid fin looks much like a rectangular box filled with a lattice structure similar to a waffle iron or garden trellis. The grid is formed by small intersecting planar surfaces that create individual cells shaped like cubes or triangles. The box structure is inherently strong, allowing the lattice walls to be very thin, reducing weight and the cost of materials.


Figure 1.1: Grid Fins and Planar Fins

The primary advantage of grid fins is that they are much shorter than conventional planar fins in the direction of the flow. As a result,
they generate much smaller hinge moments and require considerably smaller servos to deflect them in a high-speed flow [4]. The
small chord length of grid fins also makes them less likely to stall at high angles of attack. This resistance to stall increases the control
effectiveness of grid fins compared to conventional planar fins. Another important aerodynamic characteristic of grid fins concerns
drag, although it can be an advantage or a disadvantage depending on the speed of the airflow.

In general, the thin shape of the lattice walls creates very little disturbance in the flow of air passing through, so drag is often no higher
than a conventional fin. At low subsonic speeds, for example, grid fins perform comparably to a planar fin. Both the drag and control
effectiveness of the lattice fin are about the same as a conventional fin in this speed regime.


Figure 1.2: Flow over the grids at different flow regimes

The same behavior does not hold true at high subsonic Mach numbers near Mach 1. Drag rises considerably and the fins become much less effective in this transonic region because of the formation of shock waves [2]. The flow behavior over the grid fins in various flow regimes is illustrated in Fig. 1.2.

II. G16 GRID FIN GEOMETRICAL DETAILS, MESH GENERATION AND BOUNDARY CONDITIONS

The G16 grid fin geometry was taken from previous experimental work [3]. The geometry was designed in CATIA V5 and the mesh was generated using ANSYS ICEM CFD 14.5. The simulations were carried out in the ANSYS CFX 14.5 solver.



Figure 2.1: Geometric Details of G16 Fin (Dimensions are in mm) and Model created in CATIA V5 software

A wind-tunnel-like setup is provided by creating a fluid domain over the fin with suitable dimensions in the upstream and downstream directions. The whole body was imported into the ANSYS Workbench ICEM CFD 14.5 meshing tool and an unstructured tetrahedral mesh (Fig. 2.2) was created. After a grid independence study, a mesh with a total of 2621519 elements and 495961 nodes was selected for the analysis. After meshing, 4 inflation layers (Fig. 2.2) were applied over the surfaces of the fin with a growth rate of 1.15 from the initial length. The insertion of inflation layers into the existing mesh helps to accurately capture boundary effects near the proximities and curves of the body and also gives quicker results.



Figure 2.2: Cut plane showing unstructured mesh around Grid fin and Inflation layers over the surface of Grid fins
The imported G16 mesh data was analyzed in the ANSYS CFD code CFX 14.5 solver [5] at Mach numbers 0.7 and 2.5 for AOA 0, 5, 10, 15, 20, 25 and 30 degrees. The following boundary conditions were applied: velocity inlet at the inlet, static pressure condition at the outlet, opening conditions for the domain walls, and no-slip velocity for the fin surface.
For the simulation, the k-ω based Shear Stress Transport (SST) turbulence model was selected. The SST model accounts for the transport of the turbulent shear stress and gives a highly accurate prediction of the onset and the amount of flow separation under adverse pressure gradients. Since the thermal problem was not of importance in the present study, the total energy option was selected. Under the equation class settings, the upwind advection scheme was selected for faster results output, and the convergence criterion was set to residual type (RMS). The problem was set up for standard atmospheric pressure conditions.
III. RESULTS AND DISCUSSION
After the simulation achieved the desired convergence criteria, the output results were analyzed in the post-processor CFD POST 14.5. The behavior of the velocity and pressure contours and the body forces (axial and normal) were noted down for the aerodynamic coefficient calculations. The graphs for the same were plotted.
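Since the solver reports body-axis forces, the wind-axis coefficients follow from the standard axis rotation, C_L = C_N cos α - C_A sin α and C_D = C_N sin α + C_A cos α. The short sketch below assumes hypothetical values of dynamic pressure q and reference area S, which the paper does not list.

```python
import numpy as np

def aero_coefficients(Fn, Fa, alpha_deg, q, S):
    """Body-axis normal/axial forces (N) to wind-axis coefficients.
    q: dynamic pressure (Pa), S: reference area (m^2); both assumed."""
    a = np.radians(alpha_deg)
    CN, CA = Fn / (q * S), Fa / (q * S)     # normal and axial force coefficients
    CL = CN * np.cos(a) - CA * np.sin(a)    # lift coefficient
    CD = CN * np.sin(a) + CA * np.cos(a)    # drag coefficient
    return CN, CA, CL, CD

# example with made-up forces at AOA 10 degrees
print(aero_coefficients(Fn=120.0, Fa=15.0, alpha_deg=10.0, q=3.0e4, S=0.01))
```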

The following figures show the pressure and velocity distribution over the grid fin at AOA 0° and 30° for Mach 0.7 and 2.5.




Figure 3.1: Velocity contour on a cut plane at Mach 0.7, AOA 0°

Figure 3.2: Pressure contour on a cut plane at Mach 0.7, AOA 0°


Figure 3.3: Velocity contour on a cut plane at Mach 2.5, AOA 0°

Figure 3.4: Pressure contour on a cut plane at Mach 2.5, AOA 0°

Figure 3.5: Velocity contour on a cut plane at Mach 0.7, AOA 30°

Figure 3.6: Pressure contour on a cut plane at Mach 0.7, AOA 30°

Figure 3.7: Velocity contour on a cut plane at Mach 2.5, AOA 30°

Figure 3.8: Pressure contour on a cut plane at Mach 2.5, AOA 30°

The following graphs show the comparison between the different aerodynamic characteristics against AOA for Mach 0.7 and Mach 2.5.


Figure 3.9: C_N vs AOA for Mach 0.7 and Mach 2.5

Figure 3.10: C_A vs AOA for Mach 0.7 and Mach 2.5

Figure 3.11: C_L vs AOA for Mach 0.7 and Mach 2.5

Figure 3.12: C_D vs AOA for Mach 0.7 and Mach 2.5

Figure 3.13: L/D vs AOA for Mach 0.7 and Mach 2.5

Inference from the Graphs:
Figure 3.9 shows the graph of normal force coefficient versus AOA for Mach 0.7 and 2.5. It is seen that the C_N value for Mach 0.7 is greater than for Mach 2.5 at all AOAs, and that C_N for both Mach 0.7 and 2.5 increases as the AOA increases.
Figure 3.10 shows the graph of axial force coefficient versus AOA for Mach 0.7 and 2.5. It is seen that C_A for Mach 0.7 is slightly greater than for Mach 2.5. For both Mach 0.7 and 2.5, the C_A value decreases slightly as the AOA increases.
Figure 3.11 shows the graph of coefficient of lift, C_L, versus AOA for Mach 0.7 and Mach 2.5. It is seen that the lift produced at Mach 0.7 is greater than the lift produced at Mach 2.5 for all AOAs, and that C_L varies linearly with the increase in AOA for both Mach 0.7 and 2.5, except for Mach 0.7 beyond AOA 20°.
Figure 3.12 shows the graph of coefficient of drag, C_D, versus AOA for Mach 0.7 and 2.5. It is seen that the drag levels at supersonic speed, i.e. at Mach 2.5, are considerably reduced compared to subsonic speeds. At higher (supersonic) speeds, the drag tends to decrease due to the smaller oblique shock angle, and the shock passes through the grid along the chord length without intersecting it. However, at low supersonic and subsonic speeds the oblique shocks reflect within the grids, producing more drag force, which in turn affects the speed of the moving object. This shows that the fin performs better at supersonic speeds. However, the lift force is considerably lower at supersonic speeds compared to subsonic speeds.

Figure 3.13 shows the graph of lift-to-drag ratio, L/D, versus AOA for Mach 0.7 and 2.5. It is seen that up to AOA 20° the L/D ratio is higher for Mach 0.7, and beyond 20° the L/D ratio is higher for Mach 2.5. It is also observed that for both Mach 0.7 and 2.5 the maximum L/D ratio appears at AOA 15°, after which it decreases with increasing AOA in both the subsonic and supersonic flow regimes.
IV. CONCLUSION
The numerical simulations were successful in predicting the flow behavior in different flow regimes with varying AOAs. The following inferences can be drawn from the analysis.
1. For all AOAs, the normal force coefficient C_N, axial force coefficient C_A and lift coefficient C_L were comparably greater in subsonic flow than in supersonic flow. It is also seen that C_N and C_L increase in value as the AOA increases, whereas for C_A it is vice versa.
2. At supersonic speeds, the drag levels were decreased compared to subsonic flows. This is due to the smaller oblique shock angle at supersonic speeds, where the shock passes through the grid along the chord length without intersecting it.
3. The L/D ratio shows that the performance of the G16 fin is better at subsonic speeds up to AOA 20°. At AOA beyond 20°, the fin shows improved performance at supersonic speeds. Also, the maximum L/D ratio occurs at AOA 15° for both flow regimes, i.e. at Mach 0.7 and 2.5.
4. Overall it is concluded that the G16 fin shows better performance at higher AOA and at higher speeds, owing to the reduction in drag levels at Mach 2.5. However, the lift needs to be improved at supersonic speeds.

REFERENCES:

[1] Scott, Jeff, "Missile Grid Fins and Missile Control Systems," URL: http://www.aerospaceweb.org/questions/weapons/.
[2] Zaloga, Steve (2000). The Scud and Other Russian Ballistic Missile Vehicles. New Territories, Hong Kong: Concord Publications Co.
[3] Washington, W. D., and Miller, M. S., "Experimental Investigations of Grid Fin Aerodynamics: A Synopsis of Nine Wind Tunnel and Three Flight Tests," Proceedings of the NATO RTO Applied Vehicle Technology Panel Symposium on Missile Aerodynamics, RTO-MP-5, NATO Research and Technology Organization, Cedex, France, Nov. 1998.
[4] Salman Munawar, "Analysis of grid fins as efficient control surface in comparison to conventional planar fins," 27th International Congress of the Aeronautical Sciences, 2010.
[5] ANSYS 14.5 CFX, Help PDF: Solver Modeling Guide.

Design, Fabrication and Performance Evaluation of Polisher Machine of Mini Dal Mill
Sagar H. Bagade¹, Prof. S. R. Ikhar², Dr. A. V. Vanalkar³
¹P.G. Student, Department of Mechanical Engg, KDK College of Engineering, Nagpur, R.T.M. Nagpur University, Maharashtra, India. shbpro1@gmail.com, Tel.: +91 9673702322
²Asst. Professor, Department of Mechanical Engg, KDK College of Engineering, Nagpur, RTM Nagpur University, Maharashtra, India. sanjay_ikhar@rediffmail.com
³Professor, Department of Mechanical Engg, KDK College of Engineering, Nagpur, RTM Nagpur University, Maharashtra, India. avanalkar@yahoo.co.in

Abstract - This paper describes in detail the design procedure of the polisher machine. Pictorial views of the fabricated machine are given. The processed dal sample is tested for reflectivity, and a schematic of the test apparatus is given. The apparatus consists of an LDR, which detects incoming light as a change in resistance. Three dal samples were tested. The surfaces of the polished dal samples were found to be more reflective than that of the unpolished dal sample.
Keywords - design, polishing, pigeonpea, mini dal mill, fabrication, experimentation, test setup.
1.0 INTRODUCTION
The cotyledon of the dry seed excluding the seed coat is called dal. In India and many Asian countries, pigeonpea is mainly consumed as dhal, valued for its acceptable appearance, texture, palatability, digestibility, and overall nutritional quality. Polishing is one of the important value-addition steps in dal processing. Polishing is done to improve the appearance of the dal, which helps in fetching a premium price for the processor. Whole pulses such as pea, black gram and green gram, as well as splits (dal), are polished for value adding. Some consumers prefer unpolished dal, whereas others need dal with an attractive colour (polished dal). Accordingly, dal is polished in different ways, such as nylon polish, oil-water polish, colour polish and so on. Polishing is a process of removal of the outer layer from a surface. Cylindrical rollers mounted with hard rubber, leather or emery cone polishers, and rollers mounted with brushes, are used for the purpose. The powder particles are removed by rubbing action. The speeds and sizes of these types of polisher are similar to those of the cylindrical dehusking roller. Another type of machinery provided for this purpose is a set of screw conveyors arranged in a battery for repeated rubbings. The flights and shaft are covered with nylon rope or velvet cloth. The speed of each screw conveyor varies. The repeated rubbing adds to the luster of the dal, which makes it more attractive. These polishers are commonly known as nylon polishers or velvet polishers, depending on the material used, and are available in sets of 2, 3, 4 or 5 screw conveyors. The splitting and polishing are done to increase the shelf life of pigeon pea. Dal mills are used for splitting the pulse into two cotyledons followed by polishing. Seed treatment to reduce storage losses is becoming increasingly important.

2.0 DESIGN OF POLISHER MACHINE
This chapter gives the design calculations of the major components of the machine, e.g. design calculations of the shaft and of the belt and pulley drive.
2.1 DESIGN CONSIDERATION
The objective is to clean the surface of the dal, i.e. polish the dal grains.
The value of force required to break a dal grain is called the bio-yield force. The machine is designed considering the bio-yield force of dal grains. From the literature it is found that this force is different along the length, breadth and thickness. Among the three, the minimum is along the length, i.e. 81.06 N.
F = 78.74 N
This much force is imparted on the grains against the lower half inner periphery.
Hence, a 1 hp motor is selected.
2.2 Drive selection
Motor speed, N1 = 1440 rpm
Velocity ratio, Vr = 8
Hence a V-belt drive is selected for power transmission.
2.3 Design of V-belt drive
D1 = 58.8 mm
Hence, 1440 / 180 = D2 / 58.8
D2 = 406 mm
Checking:
Vp = π D2 N2 / 60 = 3.83 m/sec (with N2 = 180 rpm)
The peripheral velocity (Vp) recommended for de-hulling, i.e. splitting, is 10 m/s.
1] Power per belt = (Fw - Fc) × (e^(μθ/sin(β/2)) - 1) × Vp / e^(μθ/sin(β/2)) = 2.378 kW
2] Number of belts, N = Pd / power per belt = 0.345; hence N = 1
3] Length of belt, L = (π/2)(D2 + D1) + 2C + (D2 - D1)² / 4C = 1988 mm
Standard length of belt selected, L = 77 inch
2.4 Design of bigger pulley
1) Width of pulley, W = (n - 1)e + 2f
For section A, W = (n - 1)e + 2f = 21 mm
2) Pitch diameter, Dp
For the V-groove details, groove angle = 38°; recommended Dp = 200 mm
3) Type of construction
According to the diameter of the pulley, D2 = 406 mm, the arm type of construction is selected.
No. of arms = 1, No. of sets = 1
4) Rim thickness
T = 0.375 √D2 + 3 = 11 mm
2.5 DESIGN OF MAIN SHAFT
1) Design torque
T_d = 60 × P × K_l / (2 × π × N) ------ [K_l = 1.75 for electric motor and line shaft]
T_d = 69.259 N-m
2) Forces on the belt drive
T_d = (T_1 - T_2) × D_2 / 2
(T_1 - T_2) = 341.177 N -------------------------------(1)
T_1 / T_2 = e^(μθ)
where the coefficient of friction μ = 0.3
and the angle of lap on the smaller pulley θ = 2.364 rad
T_1 / T_2 = e^(0.3 × 2.364)
T_1 = 2.032 T_2 --------------------------------(2)
From equations (1) and (2):
2.032 T_2 - T_2 = 341.177
T_1 = 671.77 N, T_2 = 330.6 N
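As a quick arithmetic check of the design torque above, the sketch below recomputes T_d from the stated inputs; taking P = 1 hp and N = 180 rpm (the driven-shaft speed) is our reading of the figures given earlier, not an explicit statement in the paper.

```python
import math

# Design torque check: Td = 60 * P * Kl / (2 * pi * N)
P = 745.7           # 1 hp motor, in watts (assumed conversion)
Kl, N = 1.75, 180.0 # load factor and shaft speed (rpm), per the text above
Td = 60.0 * P * Kl / (2.0 * math.pi * N)
print(round(Td, 3))  # ~69.23 N-m, matching Td = 69.259 N-m to within rounding
```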
2.6 FORCE CALCULATION ON MAIN SHAFT

Fig 2.1 Vertical shear force diagram
Weight of pulley, W_pa = 5.5 kg = 54 N
Weight of main shaft with rotor, W_sh = 15 kg = 147.15 N
At static equilibrium, ΣF = 0:
R_vb + R_vd = 1203.62 N -----------------------------(1)
Taking moments about point B, ΣM_B = 0:
R_vd = 144.1 N
Substituting the above value in equation (1):
R_vb = 1059.52 N

Fig 2.2 Vertical bending moment diagram
M_a = 0, M_b = -160.58 N-m, M_c = -159.45 N-m, M_d = 0
Selecting the maximum moment on the shaft:
M = 160.58 N-m
Selecting shaft material SAE 1030:
S_yt = 296 MPa; S_yt / 2 = 148 MPa; τ_max = 0.3 × 148 = 44.4 MPa
S_ut = 527 MPa; S_ut / 2 = 263.5 MPa; τ_max = 0.18 × 263.5 = 47.43 MPa
Selecting τ_max = 44.4 MPa.
For a rotating shaft with a gradually applied load, K_b = 1.5 and K_t = 1.
For the diameter of the shaft:
τ_max = 16 × 10³ × √((K_b × M)² + (K_t × T_d)²) / (π × D_sh³)   [M, T_d in N-m; D_sh in mm]
Hence, D_sh = 30.63 mm
Selecting a standard diameter of shaft, D_sh = 32 mm
Hub diameter, D_h = 1.5 × D_sh + 25 = 73 mm
Hub length, L_h = 1.5 × D_sh = 42 mm
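As a quick check of the shaft-diameter equation above, the following sketch recomputes D_sh from the paper's numbers (a verification aid only, not part of the original design procedure):

```python
import math

# ASME-code shaft diameter check, working in N and mm.
tau_max = 44.4      # allowable shear stress, N/mm^2
Kb, Kt = 1.5, 1.0   # shock factors for bending and torsion
M = 160.58e3        # maximum bending moment, N-mm
Td = 69.259e3       # design torque, N-mm

# tau_max = 16 * sqrt((Kb*M)^2 + (Kt*Td)^2) / (pi * D^3)  ->  solve for D
D_cubed = 16.0 * math.hypot(Kb * M, Kt * Td) / (math.pi * tau_max)
print(round(D_cubed ** (1.0 / 3.0), 2))   # ~30.63 mm, rounded up to 32 mm
```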
3.0 FABRICATION
3.1 Mechanical Components
1) Roller with shaft    2) Upper half casing with hopper

Fig. 3.1 Top view of roller with pulley    Fig. 3.2 Upper half casing with hopper

3) Lower half casing with frame    4) Polisher machine

Fig. 3.3 Lower half with velvet bed    Fig. 3.4 Assembled machine


4.0 EXPERIMENTATION
4.1 MATERIALS AND METHODS
Two samples were selected for testing. The first sample selected for testing is the output product of a mini dal mill. This sample is dehusked and split tur dal; it contains split unhusked grains, split husked grains, broken grains, husk and dust. It was processed through oil mixing, sun drying and dehusking.

Fig. Prepared dal samples
4.2 TESTING
4.2.1 TEST APPARATUS
A photo-conductive cell with a potentiometer is used to compare the shine of the surfaces. The reflection of light from the surface of the grains is measured indirectly.

Fig. 4.1 Schematic of test apparatus    Fig. 4.2 Test apparatus
Principle of working:
When light strikes the semiconductor material, there is a decrease in cell resistance.
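For context, if the photo-conductive cell is read through a simple voltage divider, its resistance can be recovered from the measured output voltage as sketched below; the supply voltage and fixed resistor value are assumptions for illustration, not values from the test apparatus.

```python
# LDR resistance from a voltage-divider reading; component values assumed.
V_SUPPLY = 5.0       # supply voltage (V), assumed
R_FIXED = 10_000.0   # fixed series resistor (ohm), assumed

def ldr_resistance(v_out: float) -> float:
    """LDR resistance from the divider output voltage.
    Divider: V_out = V_SUPPLY * R_FIXED / (R_FIXED + R_LDR)."""
    return R_FIXED * (V_SUPPLY - v_out) / v_out

# brighter (more reflective) sample -> higher V_out -> lower resistance
print(ldr_resistance(4.8))   # strongly lit LDR: ~417 ohm
print(ldr_resistance(1.0))   # dim LDR: 40000 ohm
```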
4.2.3 TESTING RESULTS
Three samples were tested for light reflectivity. Ten tests were done on each sample and the mean values are tabulated.
The results of the testing are tabulated below:
Sr. no.   Dal Sample              LDR 1 (kΩ), mean value   LDR 2 (kΩ), mean value
1         Unpolished              0.2                      0.92
2         Oil polished            0.12                     0.74
3         Polished without oil    0.12                     0.6
Table no. 4.3 Dal sample testing

Fig. 4.4 Dal sample vs LDR readings

A decrease in resistance indicates an increase in the intensity of light striking the LDR (Light Dependent Resistor, i.e. photo-conductive cell). The table above clearly indicates that grain samples of tur dal processed through the polisher machine have a better shine.
5.0 CONCLUSION
The decrease in resistance measured for the polished samples indicates an increase in the intensity of light striking the LDR, showing that grain samples of tur dal processed through the polisher machine have a better shine. From the above study, the results show that there is an improvement in the texture of the tur dal.

REFERENCES:
[1] Mutalubi Aremu Akintunde, "Development of a Rice Polishing Machine," AU J.T. 11(2): 105-112 (Oct. 2007).
[2] Gbabo Agidi, Liberty J.T., Eyumah A.G., "Design, Construction and Performance Evaluation of a Combined Coffee Dehulling and Polishing Machine," International Journal of Emerging Technology and Advanced Engineering, Volume 3, Issue 11, November 2013.
[3] Oduma O., Femi P.O. and Igboke M.E., "Assessment of mechanical properties of pigeon pea (Cajanus cajan (L) Millsp) under compressive loading," International Journal of Agricultural Science and Bioresource Engineering Research Vol. 2(2), pp. 35-46, October 2013.
[4] Shirmohammadi Maryam, Yarlagadda P.K.D.V., Gudimetla P., Kosse V., "Mechanical Behaviours of Pumpkin Peel under Compression Test," Advanced Materials Research, 337 (2011), pp. 3-9.
[5] N. V. Shende, "Technology adoption and their impact on farmers: A Case Study of PKV Mini Dal Mill in Vidarbha Region," Asian Resonance, Vol. II, Issue IV, October 2013.
[6] Singh Faujdar and Diwakar B., "Nutritive Value and Uses of Pigeonpea and Groundnut," ICRISAT, 1993.
[7] Mangaraj S., Kapoor T., "Development and Evaluation of a Pulse Polisher," Agricultural Engineering Today, Year: 2007, Volume: 31.
[8] Mangaraj S. and Singh K.P., "Milling Study of Multiple Pulses Using CIAE Dhal Mill for Optimal Responses," J Food Process Technol, Volume 2, Issue 2, 10.4172/2157-7110.1000110.
[9] Kurien P.P., "Advances in Milling Technology of Pigeonpea," Proceedings of the International Workshop on Pigeonpeas, Volume 1, 15-19 December 1980.
[10] Nwosu J.N., Ojukwu M., Ogueke C.C., Ahaotu I., and Owuamanam C.I., "The Antinutritional Properties and Ease of Dehulling on the Proximate Composition of Pigeon pea (Cajanus cajan) as Affected by Malting," International Journal of Life Sciences Vol. 2, No. 2, 2013, pp. 60-67.
[11] Opoku A., Tabil L., Sundaram J., Crerar W.J. and Park S.J., "Conditioning and Dehulling of Pigeon Peas and Mung Beans," CSAE/SCGR 2003, Paper No. 03-347.
[12] Ghadge P.N., Shewalkar S.V., Wankhede D.B., "Effect of processing methods on qualities of instant whole legume: Pigeon pea (Cajanus cajan L.)," Agricultural Engineering International: the CIGR Ejournal, Manuscript FP 08 004, Vol. X, May 2008.
[13] Shiwalkar B.D., "Design Data for Machine Elements," 2010, Denett & Company.
[14] Rattan S.S., "Theory of Machines," edition 2012, S. Chand Publication.
[15] Bhandari V.B., "Design of Machine Elements," 3rd edition, 2010, Tata McGraw Hill Education Private Limited.
[16] Chakraverty A., Mujumdar A.S., Ramaswamy H.S., "Handbook of Post-harvest Technology," 2013, Marcel Dekker Inc.
[17] Kumar D.S., "Mechanical Measurement," 5th edition, 2013, Metropolitan Book Co. Pvt. Ltd.

Design of 16-bit Data Processor Using Finite State Machine in Verilog
Shashank Kaithwas¹, Pramod Kumar Jain²
¹Research Scholar (M.Tech), SGSITS
²Associate Professor, SGSITS
E-mail- shashankkaithwas09@gmail.com
Abstract - This paper presents the design concept of a 16-bit data processor. Design methodology has been changing from schematic-based to Hardware Descriptive Language (HDL) based design. The data processor has been proposed using a Finite State Machine (FSM). The state machine designed for the data processor can be started from any state and can jump to any state in between. The key architectural elements of the data processor, such as the Arithmetic Logic Unit (ALU), control unit and datapath, are described. Functionalities are validated through the synthesis and simulation process. Besides verifying the outputs, the timing diagram and interfacing signals are also tracked to ensure that they adhere to the design specification. The Verilog Hardware Descriptive Language gives access to every internal signal, and designing the data processor using this language fulfils the needs of different high-performance applications.
Keywords - HDL (Hardware Descriptive Language), FSM (Finite State Machine), ALU (Arithmetic Logic Unit), control unit, datapath, data processor, Verilog Hardware Descriptive Language.
INTRODUCTION
Processors are the heart of all smart devices, whether electronic or otherwise. Their smartness comes as a direct result of the decisions and controls that processors make. There are generally two types of processor: general-purpose processors and dedicated processors. General-purpose processors such as the Pentium CPU can perform different tasks under the control of software instructions, and are used in all personal computers. Dedicated processors, also known as application-specific integrated circuits (ASICs), are designed to perform just one specific task. For example, inside a cell phone there is a dedicated processor that controls its entire operation; the embedded processor inside the cell phone does nothing but control the operation of the phone. Dedicated processors are therefore usually much smaller and not as complex as general-purpose processors.
The different parts and components fit together to form the processor. From transistors, the basic logic gates are built. Logic gates are combined to form either combinational circuits or sequential circuits; the difference between these two types of circuits lies only in the way the logic gates are connected together. Latches and flip-flops are the simplest forms of sequential circuits, and they provide the basic building blocks for more complex sequential circuits. Certain combinational and sequential circuits are used as standard building blocks for larger circuits, such as the processor. These standard combinational and sequential components are usually found in standard libraries and serve as larger building blocks for processors. Different combinational and sequential components are connected together to form either the datapath or the control unit of a processor. Finally, combining the datapath and the control unit produces the circuit for either a dedicated or a general-purpose processor.
Although small dedicated processors are not as powerful as general-purpose processors, they are used in every smart electronic device, such as musical greeting cards, electronic toys, TVs, cell phones, microwave ovens and anti-lock brake systems in cars, and they are sold and used in many more places than the powerful general-purpose processors used in personal computers.
DESIGN OF MODULES
This section contains the design of the important processor modules: the ALU, the datapath and the control circuit.


ARITHMETIC AND LOGICAL UNIT (ALU)

The arithmetic-logic unit (ALU) performs basic arithmetic and logic operations, which are controlled by the opcode. The result of the execution of the instruction is written to the output. The ALU is designed for arithmetic operations such as addition, subtraction, multiplication, increment, decrement, etc. The inputs are 16-bit wide with type unsigned. Figure 1 shows the ALU block diagram:

Fig 1. Block Diagram ALU
CONTROL CIRCUIT
The control unit is a sequential circuit whose outputs are dependent on both its current and past inputs. This history of past inputs is stored in the state memory and is said to represent the state of the circuit. Thus, the circuit changes from one state to the next when the content of the memory changes. Depending on the current state of the circuit and the input signals, the next-state logic determines what the next state ought to be by changing the content of the state memory. Hence, a sequential circuit executes by going through a sequence of states. Since the state memory is finite, the total number of different states that the circuit can go to is also finite. This is not to be confused with the fact that the sequence length can be infinitely long. Because it has only a finite number of states, a sequential circuit is also referred to as a Finite State Machine (FSM). Figure 2 shows the block diagram of the control unit.

Fig 2. Block Diagram of Control Circuit
DATAPATH
The design of functional units for performing single, simple data operations, such as an adder for adding two numbers or a comparator for comparing two values, has been described above. However, for adding a million numbers, there is no need to connect a million minus one adders together; instead, a circuit with just one adder can be used a million times. A datapath circuit allows us to do just that, i.e., to perform operations involving multiple steps. Figure 3 shows a simple datapath using one adder to add as many numbers as desired. For this to be possible, a register is needed to store the temporary result after each addition. The temporary result from the register is fed back to the input of the adder so that the next number can be added to the current sum.

Fig 3. Block Diagram of Datapath
PROCESSING UNIT
The datapath and the control unit together make up the processing unit. The control circuit provides the essential control signals to the datapath unit for the required operations, and the datapath is the part concerning the flow of data to be manipulated, transmitted or received. The functional diagram of the processing unit is shown in figure 4.

Fig 4. Functional Diagram of Processing Unit
STATE MACHINE DIAGRAM
Figures 5 and 6 show the state diagrams for processors having four states and eight states, respectively. This paper presents a 16-state data processor, which can be designed easily by extending these two state diagrams. The two state machines start from State0 if RESET is set to logic 1; if RESET is forced to logic 0 then, depending on the value of START, the state machine changes to the next state. We can also switch to any state from the present state of the state machine. If the value of START does not change, the machine remains in the present state. A particular operation is assigned to each state.
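A minimal sketch of this state-advance rule is given below (shown for the 4-state case); the Python function and its signal arguments are our simplification of the behavior described above, not the paper's Verilog.

```python
NUM_STATES = 4  # 4-state machine of Fig. 5; use 8 or 16 for the larger designs

def next_state(state, reset, start, jump=None):
    """Next-state rule: RESET forces State0; a jump request moves to any
    state; otherwise START advances the machine, and it holds if START is 0."""
    if reset:
        return 0
    if jump is not None:
        return jump % NUM_STATES
    if start:
        return (state + 1) % NUM_STATES
    return state
```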

Fig 5. State Machine Diagram having 4 states

Fig 6. State Machine Diagram having 8 states
DATA PROCESSOR SPECIFICATIONS
Table I shows the various specifications according to the opcodes:
OPCODE OPERATION SPECIFICATION
0000    a + b       zout is assigned the value of a + b
0001    a - b       zout is assigned the value of a - b
0010    a + 1       Incremented value of a assigned to zout
0011    a - 1       Decremented value of a assigned to zout
0100    a OR b      zout is assigned the value of a or b
0101    a AND b     zout is assigned the value of a and b
0110    NOT a       zout is assigned the value of not a
0111    NOT b       zout is assigned the value of not b
1000    a NAND b    zout is assigned the value of a nand b
1001    a NOR b     zout is assigned the value of a nor b
1010    a XOR b     zout is assigned the value of a xor b
1011    a XNOR b    zout is assigned the value of a xnor b
1100    a << 1      Shifted-left value of a assigned to zout
1101    a >> 1      Shifted-right value of a assigned to zout
1110    b << 1      Shifted-left value of b assigned to zout
1111    b >> 1      Shifted-right value of b assigned to zout

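To accompany Table I, the sketch below gives a small Python golden model of the opcode behavior, the kind of reference against which testbench outputs can be checked; the 16-bit wrap-around and the function name are our assumptions, not the paper's Verilog code.

```python
MASK = 0xFFFF  # 16-bit results wrap around

def alu(opcode: int, a: int, b: int) -> int:
    """Golden model of the Table I operations (results masked to 16 bits)."""
    ops = {
        0b0000: a + b,        0b0001: a - b,
        0b0010: a + 1,        0b0011: a - 1,
        0b0100: a | b,        0b0101: a & b,
        0b0110: ~a,           0b0111: ~b,
        0b1000: ~(a & b),     0b1001: ~(a | b),
        0b1010: a ^ b,        0b1011: ~(a ^ b),
        0b1100: a << 1,       0b1101: a >> 1,
        0b1110: b << 1,       0b1111: b >> 1,
    }
    return ops[opcode] & MASK

# matches the simulation run below with a = 4, b = 2
assert alu(0b0000, 4, 2) == 6
assert alu(0b0001, 4, 2) == 2
```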
RTL (REGISTER-TRANSFER LEVEL) GENERATION
The RTL (Register-Transfer Level) view of the Verilog code is shown in figure 7:

Fig 7 RTL view of the Data Processor
SIMULATION RESULTS
The design is verified through simulation, which is done in a bottom-up fashion. Small modules are simulated in separate testbenches before they are integrated and tested as a whole. The results of operations on the test vectors are manually computed and are referred to as expected results.
Simulating with a = 4 and b = 2, zout gives the results shown in figure 8:

Fig 8. Simulation Results
ACKNOWLEDGMENT
We gratefully acknowledge the Almighty GOD who gave us strength and health to successfully complete this venture. We wish to
thank lecturers of our college for their helpful discussions. We also thank the other members of the Verilog synthesis group for their
support.
CONCLUSION
In this paper, we have proposed an efficient Verilog coding and verification method. We have also proposed several algorithms using different design levels. Our proposal has been implemented in Verilog using Xilinx 9.2a and Altera's ModelSim simulator; the RTL was generated in Xilinx 9.2a and the functionality was checked in the ModelSim simulator. The data processor designed using Verilog was successfully implemented and tested. Currently, we are conducting further research into further reductions in hardware complexity in terms of synthesis. Finally, the code was downloaded into a SPARTAN-3E FPGA chip in an LC84 package for hardware realization. Figure 9 shows the FPGA implementation of the design:
Figure 9. FPGA Implementation of the Design




Therapeutic Properties of Ficus Religiosa
Shailja Singh¹, Shalini Jaiswal¹
¹Amity Group of Institutions, Greater Noida, U.P. 201308
E-mail: shailjadu@gmail.com
Abstract
Medicinal plants have played a vital role in maintaining and improving human health for thousands of years. The history of human civilization and the discovery of herbal medicines have run in parallel from ancient times to the present. Among hundreds of medicinal plants, the Ficus tree has a significant role in promoting health and alleviating illness. Ficus religiosa, commonly known as the Peepal tree, is regarded as sacred by both Hindus and Buddhists. It shows an enormous range of pharmacological activities, such as antidiabetic, antimicrobial, analgesic and wound-healing activity. The present review describes the morphological, phytochemical and pharmacological aspects of F. religiosa.

Key words
Medicinal plants, Ficus religiosa, antimicrobial, morphological, phytochemical, pharmacological, Peepal.

I. INTRODUCTION
Plants have been used in treating human diseases for thousands of years.[1] Since prehistoric times, the men and women of Eurasia and the Americas have acquired a tremendous knowledge of medicinal plants.[2] All of the native plant species discussed in detail in this work were used by native people in traditional medicine. Medicinal plants have curative properties due to the presence of various complex chemical substances of different composition, found as secondary plant metabolites in one or more parts of these plants. Herbal medicine is based on the principle that plants contain natural substances that can promote health and alleviate illness. In recent times, the focus on plant research has increased all over the world, and a large body of evidence has accumulated showing the immense potential of medicinal plants used in various traditional systems. Today, we are witnessing a great deal of public interest in the use of herbal remedies.
This review emphasizes the traditional uses and clinical potential of F. religiosa. F. religiosa Linn., commonly known as Peepal, belongs to the family Moraceae.[3-5] Six parts of the tree (seeds, bark, leaves, fruit, latex and roots) are valued for their medicinal qualities; the only part not used for therapeutic purposes is the wood, because it is highly porous. In India it has had great mythological, religious and medical importance since ancient times, and it is considered the oldest tree in Indian art and literature.[6-8]
It is known by several vernacular names, the most commonly used being Asvatthah (Sanskrit), Sacred fig (Bengali), Peepal (Hindi), Arayal (Malayalam), Ravi (Telugu) and Arasu (Tamil).[9] Moreover, the bark of F. religiosa is an important ingredient in many Ayurvedic formulations, such as Nalpamaradi tailam, Chandanasavam, Nyagrodhadi churna and Saribadyasavam.[10,11] In the medicinal field, F. religiosa is gaining great attention because it contains many compounds that are beneficial in the treatment of diseases such as diabetes, skin diseases, respiratory disorders, central nervous system disorders and gastric problems.[12,13]

1. Classification
Domain: Eukaryota
Kingdom: Plantae
Subkingdom: Viridaeplantae
Phylum: Tracheophyta
Subphylum: Euphyllophytina
Infraphylum: Radiatopses
Class: Magnoliopsida
Subclass: Dilleniidae
Superorder: Urticanae
Order: Urticales
Family: Moraceae
Tribe: Ficeae
Genus: Ficus
Specific epithet: religiosa Linnaeus
Botanical name: Ficus religiosa
2. Vernacular names
Sanskrit: Pippala
Assamese: Ahant
Bengali: Asvattha, Ashud, Ashvattha
English: Pipal tree
Gujrati: Piplo, Jari, Piparo, Pipalo
Hindi: Pipala, Pipal
Kannada: Arlo, Ranji, Basri, Ashvatthanara, Ashwatha, Aralimara, Aralegida, Ashvathamara, Basari, Ashvattha
Kashmiri: Bad
Malayalam: Arayal
Marathi: Pipal, Pimpal, Pippal
Oriya: Aswatha
Punjabi: Pipal, Pippal
Tamil: Ashwarthan, Arasamaram, Arasan, Arasu, Arara
Telugu: Ravichettu
Morphological characters
F. religiosa (L.) is a large perennial tree, glabrous when young, found throughout the plains of India and up to 170 m altitude in the Himalayas. The stem bark and leaves of F. religiosa are reported to contain the phytoconstituents phenols, tannins, steroids, lanosterol, stigmasterol and lupen-3-one. The active constituent from the root bark of F. religiosa was found to be β-sitosteryl-D-glucoside. The seeds contain phytosterolin, β-sitosterol and its glycoside, and albuminoids. The fruit of F. religiosa contains appreciable amounts of total phenolics and total flavonoids.[14]


3. Botanic description
F. religiosa is a large deciduous tree with few or no aerial roots, commonly found in India. It is native from India to Southeast Asia, grows at altitudes of up to 5000 ft, and its trunk reaches up to 1 metre. The bark is grey with brownish specks, smooth, exfoliating in irregular rounded flakes.
Leaves are alternate, spirally arranged and broadly ovate, glossy, coriaceous (leathery), dark green, 10-18 by 7.5-10 cm, with unusual tail-like tips; they are pink when young, stipulate, with a cordate base. Petioles are slender, 7.5-10 cm long. Galls occur on the leaves.
Flowers are axillary, sessile and unisexual.
Fruits are circular, called figs, and are enclosed in inflorescences. Raw fruits are green in colour during summer; after ripening they turn black through the rainy season.[15] The specific epithet religiosa alludes to the religious significance attached to this tree: the prince Siddhartha is said to have sat and meditated under this tree and there found enlightenment, from which time he became the Buddha. The tree is therefore sacred to Buddhists and is planted beside temples.

4. Phytochemical analysis
Phytochemistry can be defined as the chemistry of those natural products which can be used as drugs, or of plant parts, with the emphasis on biochemistry. Preliminary phytochemical screening of F. religiosa bark showed the presence of tannins, saponins, flavonoids, steroids, terpenoids and cardiac glycosides.[16,17] The bark of F. religiosa showed the presence of bergapten, bergaptol, lanosterol, β-sitosterol, stigmasterol, lupen-3-one, β-sitosterol-D-glucoside (phytosterolin) and vitamin K1.[18-21] Apart from these, tannin, wax, saponin, β-sitosterol, leucocyanidin-3-O-β-D-glucopyranoside, leucopelargonidin-3-O-β-D-glucopyranoside, leucopelargonidin-3-O-α-L-rhamnopyranoside, lupeol, ceryl behenate, lupeol acetate, α-amyrin acetate, leucoanthocyanidin and leucoanthocyanin are also found in the bark.[22]

HO
O
O
O
O
O
O
O
OH
Lanosterol
Bergapten
Bergaptol


HO HO
H
sitosterol
cadinene
Stigmasterol
o
|
Hentricontane


Figure 1: Active components of F. religiosa
The fruit of F. religiosa contains asparagine, tyrosine, undecane, tridecane, tetradecane, (E)-β-ocimene, α-thujene, α-pinene, β-pinene, terpinene, limonene, dendrolasine, ylangene, α-copaene, β-bourbonene, β-caryophyllene, trans-bergamotene, aromadendrene, α-humulene, alloaromadendrene, germacrene, bicyclogermacrene, γ-cadinene and δ-cadinene.[23] Leaves contain campesterol, stigmasterol, isofucosterol, α-amyrin, lupeol, tannic acid, arginine, serine, aspartic acid, glycine, threonine, alanine, proline, tryptophan, tyrosine, methionine, valine, isoleucine, leucine, n-nonacosane, n-hentriacontane, hexacosanol and n-octacosane.[20-22] Alanine, threonine and tyrosine have been reported in the seeds of F. religiosa.[24] The crude latex of F. religiosa shows the presence of a serine protease, named religiosin. The structures of the active components are shown in Figure 1. All six parts of the tree (seeds, bark, leaves, fruit, latex and roots) are highly useful for their medicinal properties; the wood is the exception because of its highly porous nature (Table 1).
Table 1: Medicinal uses of different parts of F. religiosa

Plant part    Traditional uses (as/in)
Bark          Diarrhoea, dysentery, anti-inflammatory, antibacterial, cooling, astringent, gonorrhoea, burns
Leaves        Hiccups, vomiting, cooling, gonorrhoea
Shoots        Purgative, wounds, skin disease
Leaf juice    Asthma, cough, diarrhoea, gastric problems
Dried fruit   Fever, tuberculosis, paralysis
Fruit         Asthma, digestive
Seeds         Refrigerant, laxative
5. Pharmacological activities present in F. religiosa

All parts of the plant exhibit a wide spectrum of activities, such as anticancer, antioxidant, antidiabetic, antimicrobial, anticonvulsant, anthelmintic, antiulcer, antiasthmatic and anti-amnesic, as shown in Figure 2.
Antimicrobial activity: The antimicrobial activity of ethanolic extracts of F. religiosa leaves was studied using the agar well diffusion method. The test was performed against four bacteria, Bacillus subtilis (ATCC 6633), Staphylococcus aureus (ATCC 6538), Escherichia coli (ATCC 11229) and Pseudomonas aeruginosa (ATCC 9027), and against two fungi, Candida albicans (IMI 349010) and Aspergillus niger (IMI 076837). The results showed that 25 mg/ml of the extract was active against all bacterial strains, while the effect against the two fungi was comparatively much weaker.[25]

Iqbal et al. found that a methanolic extract of F. religiosa bark was 100% lethal to Haemonchus contortus worms in in vitro testing.[26] The acetone extracts of seven plant species, Tamarindus indica, F. indica, F. religiosa, Tabernaemontana divaricata, Murraya koenigii, Chenopodium album and Syzygium cuminii, were evaluated for their ovicidal activity. Murraya, Tabernaemontana and Chenopodium showed 70%, 75% and 66.6% ovicidal action at the 100% dose level, whereas at the same dose level T. indica, F. indica, F. religiosa and S. cuminii showed 48.3%, 41.6%, 13.3% and 53.3% ovicidal action respectively.[27] According to Uma et al., different extracts (methanol, aqueous, chloroform) of the bark of F. religiosa have an inhibitory effect on the growth of three enterotoxigenic E. coli strains isolated from patients suffering from diarrhoea.[28]

6. Wound healing activity: This activity was explored in incision and excision wound models in Wistar albino rats, using F. religiosa leaf extracts prepared as lotions (5 and 10%). Povidone iodine 5% was used as the standard drug. A high rate of wound contraction, a shorter epithelialisation period and high skin breaking strength were observed in animals treated with the 10% leaf extract ointment compared with the control group. Tannins have been reported to increase the collagen content, which is one of the factors promoting wound healing.[29,30]


Figure 2: Pharmacological activities of Ficus religiosa
7. Anti-amnesic activity: The anti-amnesic activity of a methanolic extract of F. religiosa figs was investigated. Figs are known to have a high serotonergic content, and modulation of serotonergic neurotransmission plays a crucial role in the pathogenesis of amnesia.[31] Scopolamine (1 mg/kg, i.p.) was administered before training to induce anterograde amnesia, and before retrieval to induce retrograde amnesia, in both models. Transfer latency (TL) in the EPM, step-down latency (SDL), number of trials and number of mistakes in the MPA were determined in vehicle-control, F. religiosa fig-treated (10, 50 and 100 mg/kg, i.p.) and standard (piracetam 200 mg/kg, i.p.) groups.[32]


8. Analgesic activity: Sreelekshmi et al. reported the analgesic activity of the stem bark of F. religiosa using the acetic acid-induced writhing (extension of hind paw) model in mice, with aspirin as the standard drug.[33] The extract reduced the number of writhings by 71.56 and 65.93% at doses of 250 mg/kg and 500 mg/kg respectively. It can thus be concluded that the extract produces its analgesic effect probably by inhibiting the synthesis or action of prostaglandins.

9. Antidiabetic activity: An aqueous extract of F. religiosa at doses of 50 and 100 mg/kg produced a pronounced reduction in blood glucose levels; this effect was comparable to that of the hypoglycaemic drug glibenclamide. It has also been shown that F. religiosa significantly increases serum insulin, body weight and liver glycogen content. The bark of F. religiosa shows similar effects and exhibits the maximum fall in blood sugar level.[34]

10. Anticonvulsant activity
Figs of F. religiosa have been reported to contain a high amount of serotonin, which is responsible for their anticonvulsant effect.[35] Singh and Goel investigated the anticonvulsant effect of a methanolic extract of F. religiosa figs against maximal electroshock-induced (MES), picrotoxin-induced and pentylenetetrazole-induced (PTZ) convulsions.[7] In Ayurveda it is claimed that the leaves of F. religiosa also possess anticonvulsant activity.[36] The anticonvulsant effect of the leaf extract was evaluated against PTZ-induced (60 mg/kg, i.p.) convulsions in albino rats; the study revealed 80 to 100% protection against PTZ-induced convulsions when the extract was given 30-60 minutes before convulsions were induced. Patil et al. demonstrated that an aqueous extract of the aerial roots of F. religiosa is effective in the management of chemically induced seizures in rats; the extract was evaluated against strychnine-induced and pentylenetetrazole-induced convulsion animal models.[37]
11. Antiulcer activity
F. religiosa is one of the plants traditionally used in Indian and Malay folk medicine to treat gastric ulcer.[38] The ethanolic extract of the stem bark showed potential antiulcer activity, evaluated in vivo against indomethacin-induced and cold-restraint-stress-induced gastric ulcers and in the pylorus ligation assay; the extract (100, 200 and 400 mg/kg) significantly reduced the ulcer index in all the assays used.[39] Administration of F. religiosa significantly reduced the ulcer index.[40] The hydroalcoholic extract of the leaves also showed antiulcer activity, evaluated against pylorus ligation-induced, ethanol-induced and aspirin-induced ulcers; the antiulcer effect was determined on the basis of ulcer index and oxidative stress.

12. Anti-inflammatory activity
F. religiosa has been found to possess potential anti-inflammatory and analgesic properties. The mechanism underlying this effect is inhibition of prostaglandin (PG) synthesis. The leaf extract of F. religiosa showed potential anti-inflammatory activity against carrageenan-induced paw oedema; the inhibitory activity was attributed to inhibition of the release of histamine, serotonin (5-HT), kinins and PGs.[41]
The methanolic extract of the stem bark of F. religiosa inhibits carrageenan-induced inflammation in rats through inhibition of the enzyme cyclooxygenase (COX), leading to inhibition of PG synthesis; further, various studies have revealed that the tannin present in the bark possesses anti-inflammatory activity.[33] Moreover, the methanolic extract of the stem bark has been shown to suppress inflammation by reducing both 5-HT and bradykinin (BK). Mangiferin isolated from the drug has anti-inflammatory activity against carrageenan-induced paw oedema.[42] Figure 3 indicates the activity of various extracts of F. religiosa on inflammation. Viswanathan et al. investigated the anti-inflammatory and mast cell protective effects of an aqueous bark extract of F. religiosa, evaluated against acute (carrageenan-induced hind paw oedema) and chronic (cotton pellet implantation) models of inflammation.[43]
13. Conclusion
At present, numerous research groups are showing interest in the medicinal properties of F. religiosa. Although scientific studies have been carried out on a large number of Indian botanicals, a considerably smaller number of marketable drugs or phytochemical entities have entered evidence-based therapeutics. Efforts are therefore needed to establish and validate evidence regarding the safety and practice of Ayurvedic medicines.
II. Acknowledgement
SS and SJ are thankful to the Amity Group of Institutions, Greater Noida campus, for help and support.

REFERENCES:
1. http://www.agr.gc.ca/eng/science-and-innovation/science-publications-and-resources/resources/canadian-medicinal-crops/general-references/?id=1300823047797
2. M. Shankar, T. Lakshmi Teja, B. Ramesh, D. Roop Kumar, D. N. V. Ramanarao, M. Niranjan Babu, Phytochemical investigation and antibacterial activity of hydroalcoholic extract of Terminalia bellirica leaf, Asian Journal of Phytomedicine and Clinical Research, 2(1): 33-39, 2014.
3. E. J. H. Corner, Check list of Ficus in Asia and Australasia with keys to identification, Gard. Bull. Singapore, 21: 1-186, 1965.
4. C. C. Berg, Classification and distribution of Ficus, Experientia, 45(7): 605-611, 1989.
5. C. C. Berg, E. J. H. Corner, Moraceae-Ficus, Flora Malesiana Series I (Seed Plants), 17: 1-730, 2005.
6. A. Ghani, Medicinal plants of Bangladesh with chemical constituents and uses, Asiatic Society of Bangladesh, Dhaka, 236, 1998.
7. Damanpreet Singh, Rajesh Kumar Goel, Anticonvulsant effect of Ficus religiosa: role of serotonergic pathways, J. Ethnopharmacol., 123: 330-334, 2009.
8. P. V. Prasad, P. K. Subhaktha, A. Narayana, M. M. Rao, Medico-historical study of Asvattha (sacred fig tree), Bull. Indian Inst. Hist. Med. Hyderabad, 36: 1-20, 2006.
9. P. K. Warrier, Indian medicinal plants - A compendium of 500 species, Orient Longman Ltd., Chennai, Vol. III, 38-39, 1996.
10. V. V. Sivarajan, I. Balachandran, Ayurvedic drugs and their sources, Oxford & IBH Publishing Co. Pvt. Ltd., New Delhi, 374-376, 1994.
11. K. R. G. Simha, V. Laxminarayana, Standardization of Ayurvedic polyherbal formulation, Indian J. Trad. Know., 6: 648-652, 2007.
12. N. Sirisha, M. Sreenivasulu, K. Sangeeta, C. M. Chetty, Antioxidant properties of Ficus species - a review, International Journal of PharmTech Research, 3: 2174-2182, 2010.
13. B. Vinutha, D. Prashanth, K. Salma, S. L. Sreeja, D. Pratiti, R. Padmaja, S. Radhika, A. Amit, K. Venkateshwarlu, M. Deepak, Screening activity of selected Indian medicinal plants for acetylcholinesterase inhibitory activity, Journal of Ethnopharmacology, 109: 359-363, 2007.
14. Ayurvedic Pharmacopoeia of India, Ministry of Health and Family Welfare, Department of AYUSH, New Delhi, 17-20, 2001.
15. C. Orwa, A. Mutua, R. Kindt, R. Jamnadass, S. Anthony, Agroforestry Database 4.0, Ficus religiosa, 1-5, 2009.
16. K. Babu, S. G. Shankar, S. Rai, Comparative pharmacognostic studies on the barks of four Ficus species, Turk. J. Bot., 34: 215-224, 2010.
17. S. A. Jiwala, M. S. Bagul, M. Parabia, M. Rajani, Evaluation of free radical scavenging activity of an ayurvedic formulation, Indian J. Pharm. Sci., 70: 31-35, 2008.
18. K. D. Swami, N. P. S. Bisht, Constituents of Ficus religiosa and Ficus infectoria and their biological activity, J. Indian Chem. Soc., 73: 631, 1996.
19. K. D. Swami, G. S. Malik, N. P. S. Bisht, Chemical investigation of stem bark of Ficus religiosa and Prosopis spicigera, J. Indian Chem. Soc., 66: 288-289, 1989.
20. B. Joseph, S. R. Justin, Phytopharmacological and phytochemical properties of three Ficus species - an overview, International Journal of Pharma and Bio Sciences, 1(4), 2010.
21. B. C. G. Margareth, J. S. Miranda, Biological activity of lupeol, International Journal of Biomedical and Pharmaceutical Sciences, 46-66, 2009.
22. A. Husain, O. P. Virmani, S. P. Popli, L. N. Misra, M. M. Gupta, G. N. Srivastava, Z. Abraham, A. K. Singh, Dictionary of Indian Medicinal Plants, CIMAP, Lucknow, India, 546, 1992.
23. L. Grison, M. Hossaert, J. M. Greeff, J. M. Bessiere, Fig volatile compounds - basis for the specific Ficus-wasp interactions, Phytochemistry, 61: 61-71, 2002.
24. M. Ali, J. S. Qadry, Amino acid composition of fruits and seeds of medicinal plants, J. Indian Chem. Soc., 64: 230-231, 1987.
25. G. P. Choudhary, Evaluation of ethanolic extract of Ficus religiosa bark on incision and excision wounds in rats, Planta Indica, 2(3): 17-19, 2006.
26. Z. Iqbal, Q. K. Nadeem, M. N. Khan, M. S. Akhtar, F. N. Waraich, Int. J. Agr. Biol., 3: 454-457, 2001.
27. S. C. Dwivedi, Venugopalan, Evaluation of leaf extracts for their ovicidal action against Callosobruchus chinensis (L.), Asian J. Exp. Sci., 16: 29-34, 2001.
28. B. Uma, K. Prabhakar, S. Rajendran, In vitro antimicrobial activity and phytochemical analysis of Ficus religiosa L. and Ficus bengalensis L. against diarrhoeal enterotoxigenic E. coli, Ethnobotanical Leaflets, 13: 472-474, 2009.
29. R. M. Charde, H. J. Dhongade, M. S. Charde, A. V. Kasture, Evaluation of antioxidant, wound healing and anti-inflammatory activity of ethanolic extract of leaves of F. religiosa, Int. J. Pharm. Sci. Res., 1: 72-82, 2010.
30. K. Roy, H. Shivakumar, S. Sarkar, Wound healing potential of leaf extracts of F. religiosa on Wistar albino strain rats, Int. J. Pharm. Tech. Res., 1: 506-508, 2009.
31. D. C. Williams, Proteolytic activity in the genus Ficus, Plant Physiology, 43: 1083-1088, 1968.
32. H. Kaur, D. Singh, B. Singh, R. K. Goel, Anti-amnesic effect of Ficus religiosa in scopolamine-induced anterograde and retrograde amnesia, Pharmaceutical Biology, 48: 234-240, 2010.
33. R. Sreelekshmi, P. G. Latha, M. M. Arafat, S. Shyamal, V. J. Shine, G. I. Anuja, S. R. Suja, S. Rajasekharan, Anti-inflammatory, analgesic and anti-lipid peroxidation studies on stem bark of Ficus religiosa Linn., Natural Product Radiance, 6(5): 377-381, 2007.
34. R. Pandit, A. Phadke, A. Jagtap, Antidiabetic effect of Ficus religiosa extract in streptozotocin-induced diabetic rats, Journal of Ethnopharmacology, 128: 462-466, 2010.
35. J. N. Bliebtrau, The Parable of the Beast, Macmillan Company, New York, 74, 1968.
36. N. S. Vyawahare, A. R. Khandelwal, V. R. Batra, A. P. Nikam, Herbal anticonvulsants, Journal of Herbal Medicine and Toxicology, 1(1): 9-14, 2007.
37. M. S. Patil, C. R. Patil, S. W. Patil, R. B. Jadhav, Anticonvulsant activity of aqueous root extract of Ficus religiosa, J. Ethnopharmacol., 133: 92-96, 2011.
38. B. Ravishankar, V. Shukla, Indian systems of medicine: a brief profile, African Journal of Traditional, Complementary and Alternative Medicines, 4: 319-337, 2007.
39. M. S. A. Khan, S. A. Hussain, A. M. M. Jais, Z. A. Zakaria, M. Khan, Anti-ulcer activity of Ficus religiosa stem bark ethanolic extract in rats, J. Med. Plants Res., 5(3): 354-359, 2011.
40. S. Saha, G. Goswami, Study of anti-ulcer activity of Ficus religiosa L. on experimentally induced gastric ulcers in rats, Asian Pacific Journal of Tropical Medicine, 791-793, 2010.
41. R. M. Charde, H. J. Dhongade, M. S. Charde, A. V. Kasture, Evaluation of antioxidant, wound healing and anti-inflammatory activity of ethanolic extract of leaves of Ficus religiosa, International Journal of Pharma Sciences and Research, 1: 73-82, 2010.
42. N. Verma, S. Chaudhary, V. K. Garg, S. Tyagi, Anti-inflammatory and analgesic activity of methanolic extract of stem bark of Ficus religiosa, International Journal of Pharma Professional's Research, 1: 145-147, 2010.
43. S. Viswanathan, P. Thirugnanasambantham, M. K. Reddy, S. Narasimhan, G. A. Subramaniam, Anti-inflammatory and mast cell protective effect of Ficus religiosa, Ancient Sci. Life, 10: 122-125, 1990.





Remote Power Generating Systems Using Low-Frequency Transmission
Mohammad Ali Adelian, Narjes Nakhostin Maher, Farzaneh Soorani
ma_adelian@yahoo.com (00917507638844), maher_narges@yahoo.com, ferisoorani@gmail.com

Abstract: The goal of this research is to evaluate alternative transmission systems from remote wind farms to the main grid using low-frequency AC technology. Low frequency means a frequency lower than the nominal frequency (60/50 Hz). The low-frequency AC network can be connected to the power grid at major substations via cyclo-converters, which provide a low-cost interconnection and synchronization with the main grid. Cyclo-converter technology is utilized to minimize costs, resulting in systems of 20/16.66 Hz (for 60/50 Hz systems respectively). Low-frequency transmission has the potential to provide an attractive solution in terms of economics and technical merits. The optimal voltage level for transmission within the wind farm and up to the interconnection with the power grid is investigated. The proposed system is expected to have costs substantially lower than HVDC and conventional HVAC systems. The cost savings come from the fact that the cyclo-converters used are much lower in cost than HVDC converters; other savings can come from optimizing the topology of the wind farms. Another advantage of the proposed topologies is that existing transformers designed for 60 Hz can be reused (for example, a 345 kV/69 kV, 60 Hz transformer can be used for a 115 kV/23 kV, 20 Hz system). The results of this research indicate that the use of LFAC technology for transmission reduces the transmission power losses and the cost of the transmission system.

Keywords: Low frequency, cyclo-converter, wind farm connections, wind farm topology, wind system configuration, series and parallel wind farms, voltage level selection.
INTRODUCTION
Renewable sources of energy are widely available, and proper utilization of these resources decreases dependence on fossil fuels. Wind is one such renewable source available in nature and could supply at least a part of the electric power demand. In many remote locations the potential for wind energy is high; making use of the available wind resources greatly reduces the dependence on conventional fuels and lowers emission rates. A few problems associated with wind make wind energy more expensive than other forms of electric power generation. The two main issues are: (a) large wind farms are located in remote locations, which makes the transmission of wind power costly, and (b) the intermittent supply of power due to the unpredictability of the wind, which results in lower capacity credits for the operation of the integrated power system. These issues are addressed by designing alternative topologies and transmission systems operating at low frequency, with the purpose of decreasing the cost of transmission and making the wind farm a more reliable power source. The use of DC transmission within the wind farm enables the output of the wind generators to be rectified via a standard transformer/rectifier arrangement to DC of an appropriate kV level.

Research Objectives

- Literature study of previous research on low-frequency AC transmission and wind farm topologies.
- Design of alternative topologies.
- Calculation of optimal transmission voltage levels for the different topologies.
- Modeling of the system using the WinIGS-F software.

Technologies for Wind Farm Power Transmission
The possible solutions for transmitting power from wind farms are HVAC, line-commutated HVDC and voltage-source-converter-based HVDC (VSC-HVDC). Low-frequency AC transmission (LFAC) is particularly beneficial in terms of cost savings and reduction of line losses [4] in cases where the distance from the generating stations to the main power grid is large. The use of a fractional frequency transmission system (FFTS) for offshore wind power is discussed in [6]; the author proposes LFAC as an alternative to HVAC and HVDC technologies for short and intermediate transmission distances. HVAC is more economical for short transmission distances; for longer distances it has disadvantages such as increased cable cost, terminal cost and charging current. HVDC transmission systems and wind farm topologies are discussed in [12]. HVDC, being a mature technology, is used for longer distances. Compared with HVDC, the LFAC system avoids one electronic converter terminal, which reduces the investment cost. HVDC technology is used only for point-to-point transmission [11], whereas LFAC can be used for the same kinds of networks as AC transmission. Further, VSC-HVDC replaces the thyristors with IGBTs and is considered the most feasible solution for long-distance transmission; however, the converter stations on both sides of the transmission line increase the investment cost of the VSC-HVDC system [7] compared with LFAC. Hence, due to the limitations of HVAC and HVDC, the proposed LFAC is used in the design of the transmission systems, and its use can be extended to long transmission distances. Cyclo-converter technology is used for converting AC at nominal frequency to AC at one third of that frequency, i.e. 16.67 Hz/20 Hz for a 50 Hz/60 Hz transmission system. Several advantages of LFAC are identified: the transmission infrastructure used for a conventional AC system can be used for LFAC without any modification, and the LFAC system increases the transmission capacity.

Wind system configuration 1: AC wind farm, nominal frequency, network connection: Two different types of AC wind farms referred to in this paper are radial and network connections. Radial wind farms are suitable for small wind farms with a short transmission distance: in a small AC wind farm, the local wind farm grid is used both for connecting all wind turbines in a radial fashion and for transmitting the generated power to the wind farm grid interface. Network-connected wind farms are usually large AC wind farms where the local wind farm grid has a lower voltage level than the transmission system. Wind system configuration 1, shown in figure 3.2.1, has a network connection of wind turbines and an AC power collection system.
Wind system configuration 2: AC wind farm, AC/DC transmission, network connection: Wind system configuration 2, shown in figure 3.2.2, is similar to configuration 1 except for the transmission from the collector substation to the main power grid: AC transmission is replaced by DC transmission. Nominal-frequency transmission is adopted within the wind farm, and this wind farm is referred to as an AC/DC wind farm. This type of system does not exist today, but is frequently proposed when the distance to the main grid is long.


Figure 3.2.1: Wind system configuration 1; Figure 3.2.2: Wind system configuration 2

Wind system configuration 3: Series DC wind farm, nominal frequency, network connection: Wind system configuration 3 has a DC power collection system. Wind turbines are connected in series and each series-connected array is connected to the collection point. Using DC/AC converters, AC of a suitable voltage level and nominal frequency is generated; the voltage is stepped up and the power is transmitted to the interconnection point at the power grid by a high-voltage transmission line.
Wind system configuration 4: Parallel DC wind farm, nominal frequency, network connection: Wind system configuration 4 differs from configuration 3 in the local wind farm design. Here a number of wind turbine systems are connected in parallel and each set of parallel-connected wind turbines is connected to a collection point. Using DC/AC converters, AC of a suitable voltage level and nominal frequency is generated; at the collection point the voltage is stepped up by a transformer and the power is transmitted to the interconnection point at the power grid by a high-voltage transmission line. Two small wind farms are interconnected via a transmission line so that, in the event of a fault or a maintenance shutdown in either wind farm, the power generated by the other can still be delivered reliably to the main grid.



Wind system configuration 3: Series DC wind farm; Wind system configuration 4: Parallel DC wind farm

Wind system configuration 5: Series DC wind farm, low-frequency radial AC transmission: The wind system configuration shown in figure 3.2.5 has a DC wind farm. A number of wind turbine systems are connected in series and each series string is connected to a collection point. An inverter converts the DC to low-frequency AC, preferably at one third of the nominal power frequency, at the collection point. The voltage is raised to higher kV levels by a transformer (standard transformers are used, with appropriately reduced ratings for the low frequency). The power is transmitted to the main power grid via lines operating at low frequency; cyclo-converters convert the low frequency back to power frequency before connection to the main power grid.
Wind system configuration 6: Parallel DC wind farm, low frequency, radial transmission: Wind system configuration 6 is similar to configuration 5; the difference is that the wind turbines are connected in parallel with each other and with the collection point. Parallel connection places the same voltage across the terminals of all the wind turbine systems. The generated power is converted to low-frequency AC using an inverter and transmitted over long distances to the power grid. Cyclo-converter technology converts the low frequency to nominal frequency before the system is connected to the main power grid.

Wind system configuration 5: Series DC wind farm; Wind system configuration 6: Parallel DC wind farm

Wind system configuration 7: Series DC wind farm; Wind system configuration 8: Parallel DC wind farm
Wind system configuration 7: Series DC wind farm, low-frequency AC transmission network: Here a number of wind turbine systems are connected in series and each series-connected array is connected to a collection point, where the DC is converted to low-frequency AC by inverters. The power is transmitted up to the main power grid over a network of transmission lines operated at low frequency, and the low-frequency AC system is connected to the power grid by cyclo-converters.
Wind system configuration 8: Parallel DC wind farm, low-frequency AC transmission network: Wind system configuration 8 has a number of wind turbine systems connected in parallel, with each set of parallel-connected wind turbine systems connected to a collection point. From the collection point to the power grid, the system is identical to wind system configuration 7.
VOLTAGE LEVEL SELECTION: This section provides the analysis and results that determine the optimal transmission voltage used in the alternative wind transmission systems up to the main DC bus. The optimal kV level for transmission within the wind farm is selected on the basis of the minimal total cost, consisting of operating costs (mainly transmission losses) and the annualized investment cost; the cost of auxiliary equipment is not considered.
Voltage calculation - Wind system configuration 5: Series DC wind farm, low-frequency radial AC transmission:
Wind system configuration 5 has a series DC wind farm, as shown in figure 4.1, where mi wind turbines are connected in series to obtain a suitable transmission voltage. The wind turbine systems are assumed to be identical, resulting in the same voltage across and current through each of them. A wind farm rated 30 MW, consisting of 10 wind turbines each rated 3 MW, is considered. The transmission voltage for calculation purposes is selected as 35 kV; thus the nominal high-side transformer voltage for each wind turbine is 3.5 kV. The optimal transmission voltage is obtained by plotting the calculated losses and the annualized investment cost of the cable and converter for different values of the transmission voltage. The resistance of the chosen cable is approximately 0.0153 ohm per 1000 ft.

Figure 4.1: Wind farm configuration 1: Series DC wind farm, radial connection

Calculation of transmission loss (up to the main DC bus) ($/yr): The transmission loss within the wind farm is computed from the line current and the cable resistance. This calculation assumes that the wind farm operates continuously at maximum power, which is unrealistic: the capacity factor of a wind turbine is approximately 30% [1], so the resulting loss in $/yr is multiplied by 0.3. Therefore, Loss = $30,110/yr.
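A minimal Python sketch of this calculation, assuming a simple I²R cable loss (the cable length and energy price below are illustrative assumptions, not the paper's values, so the output will differ from the $30,110/yr figure):

```python
# Minimal sketch of the annual transmission-loss calculation, assuming a
# simple I^2*R loss on the DC collection cable at rated farm output.
P_FARM_W = 30e6          # 10 turbines x 3 MW (from the paper)
V_DC = 35e3              # selected transmission voltage, V (from the paper)
R_PER_KFT = 0.0153       # cable resistance, ohm per 1000 ft (from the paper)
CABLE_KFT = 10.0         # ASSUMED total cable length, in 1000-ft units
PRICE_PER_KWH = 0.05     # ASSUMED energy price, $/kWh
CAPACITY_FACTOR = 0.3    # ~30% wind turbine capacity factor [1]

def annual_loss_dollars(p_w, v, r_ohm, price, cf, hours=8760):
    i = p_w / v                       # DC line current at rated output, A
    p_loss_w = i ** 2 * r_ohm         # ohmic loss at rated output, W
    kwh = p_loss_w / 1000.0 * hours   # energy lost per year at rated output
    return kwh * price * cf           # derated by the capacity factor

print(round(annual_loss_dollars(P_FARM_W, V_DC, R_PER_KFT * CABLE_KFT,
                                PRICE_PER_KWH, CAPACITY_FACTOR)))
```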
Calculation of the cost of cable and converter equipment in $/yr: The acquisition cost of the cable is $18.5/ft. To calculate the cost of cable required for the entire wind farm, the total cable length is computed; multiplying the per-foot acquisition cost by the total length gives the acquisition cost of the cable for the entire wind farm. This gives an acquisition cost of the cable of CCost = $188,700. The acquisition cost of the converters is calculated to be $238,907, so the combined acquisition cost of the cable and converters is $427,607. Assuming an interest rate of 6% and a lifetime of 20 years for both the cable and the converters, the annual amortization is calculated; the annual investment for cable and converter is $37,244/yr. To determine the optimal operating voltage, Vdc vs. loss ($/yr) and Vdc vs. annual investment for cable and converter ($/yr) are plotted. The optimal voltage level is given by the lowest point of the curve obtained by adding the loss ($/yr) and the annual investment cost ($/yr), as shown in figure 4.2.
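The amortization step can be reproduced with the standard capital-recovery factor; a short sketch (the 6% rate and 20-year life are the paper's stated assumptions):

```python
def annualized_cost(principal, rate=0.06, years=20):
    """Capital-recovery factor: A = P * r / (1 - (1 + r)**-years)."""
    return principal * rate / (1 - (1 + rate) ** -years)

print(round(annualized_cost(427_607)))  # ~37,280 $/yr, close to the paper's $37,244/yr
```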

Figure 4.2: Plot of voltage at the main DC bus vs. total cost for mi = 10, Pt = 3 MW

From the plot in figure 4.2 it can be seen that for a 30 MW wind farm with 10 wind turbines the optimal voltage is around 35 kV. As the voltage level increases further, the transmission power loss decreases but the cost of the cable and converter increases. The plots of annual investment cost vs. Vdc and loss vs. Vdc intersect at 32 kV, beyond which the annual investment cost keeps increasing. The optimal voltage is the x-coordinate of the lowest point on the curve obtained by adding the annual investment cost and the loss in $/yr, which is 35 kV in this case. For different wind turbine ratings, cable sizes and wind farm sizes, the optimal voltage level is calculated in the same fashion.
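The selection rule itself is simple to state in code; the sketch below (with toy placeholder cost functions standing in for the cable/converter data above) picks the Vdc minimizing total annual cost:

```python
# Minimal sketch of the voltage-selection rule: choose the Vdc that
# minimizes annual loss plus annualized investment. The two cost
# functions are placeholders for the curves plotted in figure 4.2.
def optimal_voltage(vdc_grid_kv, loss_per_yr, invest_per_yr):
    return min(vdc_grid_kv, key=lambda v: loss_per_yr(v) + invest_per_yr(v))

# Example with toy curves: losses fall with voltage, investment rises.
v_opt = optimal_voltage(range(20, 61), lambda v: 3.7e7 / v**2, lambda v: 900 * v)
print(v_opt)  # toy optimum, illustrative only
```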
STEADY STATE ANALYSIS
Wind Farm Modeling
Performance of a multiphase system under steady-state conditions is analyzed using the WinIGS-F program. Wind system configurations 1, 4 and 8 are modeled to analyze system performance.
Wind system configuration 1 - model 1: In the system shown in figure 5.1, the wind farm is connected to a transmission line 54 miles long. The wind farm consists of 3 radial feeders with 4 wind turbines on each radial feeder. All wind turbines are identical and rated 2.7 MW each. A three-phase two-winding transformer rated 3 MVA is connected to each wind turbine to raise the voltage to 25 kV. The power generated at each wind turbine is collected at the collector substation, where a transformer rated 36 MVA with a primary voltage of 25 kV and a secondary voltage of 115 kV is installed. At the end of the transmission line, a three-phase constant electric load and a slack generator are connected.

Figure 5.1 Wind system configuration 1 (60 Hz transmission)

Running the model gives a total transmission power loss of 1.1925 MW for this case.

Wind system configuration 4 - model 2: Wind system configuration 4 is modeled as shown in figure 5.3. It has two small wind farms located at a distance from each other. This model corresponds to a scenario with two wind farms in different geographical areas, whose power is collected at the collector substation and transferred over a long distance to the main grid. Under any disturbance to the generation of power in one of the wind farms, the other wind farm supplies the power.

Figure 5.3 Wind system configuration 4 (60 Hz transmission)
Each wind turbine is rated 2.7 MW with a 25 kV operating voltage within the wind farm. The operating voltage of the long-distance transmission line from the collector substation to the main grid considered here is 69 kV.
Wind system configuration 8 - model 3: Wind system configuration 8 is modeled as shown in figure 5.4. A 20 Hz transmission line, 54 miles long and operating at 69 kV, is modeled in this case.

Figure 5.4 Wind system configuration 8 (20Hz transmission)
The transmission line parameters used for 60 Hz transmission can also be used for 20 Hz transmission.

Table 5.3 Transmission power loss for wind system configuration 8 (20 Hz transmission)
ACKNOWLEDGMENT
I want to thank my family, especially my mother, for supporting me during my M.Tech studies, and all my friends who helped me during this work. I also thank my college, Bharati Vidyapeeth Deemed University College of Engineering, for supporting me during my M.Tech in Electrical Engineering.

CONCLUSIONS
Geographical locations that are suitable for wind farm development are often remote, far from the main transmission grid and major load centers. In these cases, the transmission of wind power to the main grid is a major expenditure. The potential benefit of the LFAC technology presented in this study is a reduction in the cost of the transmission system, which improves the economics of wind energy and increases the penetration of wind power into the system. LFAC technology is used for transmission from the collector substation to the main power grid. This paper presents alternative topologies suitable for various geographical locations and configurations of the wind farm. The optimal operating voltage of the transmission lines within the wind farm is calculated for all the cases; the optimal voltage is computed considering the cost of the cable, the converter equipment and the transmission power loss. The preliminary study results show that the higher the operating voltage, the lower the transmission losses, and that line losses increase with transmission distance. The results obtained by modeling the wind system configurations point towards higher transmission losses for 60 Hz transmission compared with 20 Hz transmission.

REFERENCES:
[1] X. Wang, H. Dai, and R. J. Thomas, "Reliability modeling of large wind farms and associated electric utility interface systems", IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 3, March 1984, pp. 569-575.
[2] R. J. Thomas, A. G. Phadke, and C. Pottle, "Operational characteristics of a large wind-farm utility system with a controllable AC/DC/AC interface", IEEE Transactions on Power Systems, Vol. 3, No. 1, February 1988.
[3] An-Jen Shi, J. Thorp, and R. Thomas, "An AC/DC/AC interface control strategy to improve wind energy economics", IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, No. 12, December 1985.
[4] T. Funaki, "Feasibility of the low frequency AC transmission", in Proc. IEEE PES Winter Meeting, Vol. 4, pp. 2693-2698, 2000.
[5] W. Xifan, C. Chengjun, and Z. Zhichao, "Experiment on fractional frequency transmission system", IEEE Trans. Power Syst., Vol. 21, No. 1, pp. 372-377, Feb. 2006.
[6] N. Qin, S. You, Z. Xu, and V. Akhmatov, "Offshore wind farm connection with low frequency AC transmission technology", in Proc. IEEE PES General Meeting, Calgary, Alberta, Canada, 2009.
[7] S. Lundberg, "Evaluation of wind farm layouts", in Nordic Workshop on Power and Industrial Electronics (NORPIE 2004), Trondheim, Norway, 14-16 June 2004.
[8] S. Lundberg, "Wind farm configuration and energy efficiency studies - series DC versus AC layouts", Thesis, Chalmers University of Technology, 2006.
[9] N. Kirby, L. Xu, M. Luckett, and W. Siepmann, "HVDC transmission for large offshore wind farms", Power Engineering Journal, Vol. 16, No. 3, pp. 135-141, June 2003.
[10] C. Skaug and C. Stranne, "HVDC wind park configuration study", Diploma thesis, Chalmers University of Technology, Department of Electric Power Engineering, Goteborg, Sweden, October 1999.
[11] Lazaros P. Lazaridis, "Economic comparison of HVAC and HVDC solutions for large offshore wind farms under special consideration of reliability", Thesis, KTH.
[12] F. Santjer, L.-H. Sobeck, and G. Gerdes, "Influence of the electrical design of offshore wind farms and of transmission lines on efficiency", in Second International Workshop on Transmission Networks for Offshore Wind Farms, Stockholm, Sweden, 30-31 March 2001.
[13] R. Barthelmie and S. Pryor, "A review of the economics of offshore wind farms", Wind Engineering, Vol. 25, No. 3, pp. 203-213, 2001.
[14] J. Svenson and F. Olsen, "Cost optimising of large-scale offshore wind farms in the Danish waters", in 1999 European Wind Energy Conference, Nice, France, 1-5 March 1999, pp. 294-299.
Impact of Network Size & Link Bandwidth in Wired TCP & UDP
Network Topologies


Mrs. Meenakshi.
Assistant Professor, Computer Science & Engineering Department,
Nitte Meenakshi Institute of Technology, Bangalore 560064
kmeenarao@gmail.com

Abstract: The transmission of information in a network relies on the performance of the traffic scenario (application traffic agent and data traffic) used in the network. The traffic scenario determines the reliability and capability of information transmission, which necessitates its performance analysis.
The objective of this paper is to calculate and compare the performance of TCP/FTP and UDP/CBR traffic in wired networks. The study has been done using NS-2 and AWK scripts. Exhaustive simulations have been run and the results evaluated for performance metrics such as link throughput and packet delivery ratio. The effect of variations in link bandwidth and number of nodes on network performance is analyzed over a wide range of values. Results are shown as graphs and tables.

Keywords: protocol stack, TCP, UDP, NS-2, agent, performance metrics, throughput, packet delivery ratio, bandwidth.

I. INTRODUCTION
This section gives a brief overview of the TCP/IP protocol stack and of the features, applications, advantages and disadvantages of the TCP and UDP protocols.

1.1 TCP/IP Protocol Stack
The stack is based on two primary protocols, TCP and IP, and is used in the current Internet [1]. These protocols have proven very powerful, and as a result have seen widespread use and implementation in existing computer networks. Figure 1 shows the TCP/IP protocol stack.
Figure 1. TCP/IP protocol stack (Application = OSI layers 5-7; TCP/UDP = OSI layer 4; IP = OSI layer 3; hardware interface = OSI layers 1-2)

Figure 2. TCP and UDP headers

1.2. Transmission Control Protocol (TCP)
TCP is a connection-oriented protocol [2]: a connection is established and maintained as messages make their way across the Internet from one computer to another. TCP is suited to applications that require high reliability and for which transmission time is relatively less critical. TCP is used by other protocols such as HTTP, HTTPS, FTP, SMTP and Telnet. TCP rearranges data packets into the order specified. TCP is slower than UDP.

TCP is reliable in the sense that there is an absolute guarantee that the data transferred remains intact and arrives in the same order in which it was sent. The TCP header size is 20 bytes, as shown in Figure 2. Data is read as a byte stream; no distinguishing indications are transmitted to signal message (segment) boundaries. TCP is heavyweight: it requires three packets to set up a socket connection before any user data can be sent. TCP handles reliability and congestion control, and performs error checking. SYN, SYN-ACK and ACK are the three handshake messages.

1.3. User Datagram Protocol (UDP)
UDP is a connectionless protocol, also used for message transport. It is not connection based, which means that one program can send a load of packets to another and that would be the end of the relationship. UDP is suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. The DNS, DHCP, TFTP, SNMP, RIP and VoIP protocols use UDP. UDP has no inherent ordering, as all packets are independent of each other; if ordering is required, it has to be managed by the application layer. UDP is faster because there is no error recovery for packets, but there is no guarantee that the messages or packets sent will arrive at all.

The UDP header size is 8 bytes, as shown in Figure 2 [3]. Source port, destination port and checksum are common to both TCP and UDP. Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries which are honored upon receipt, meaning a read operation at the receiver socket yields an entire message as it was originally sent.

UDP is lightweight: there is no ordering of messages and no tracking of connections. It is a small transport layer designed on top of IP. UDP has no option for flow control. UDP does error checking via its checksum, but offers no recovery options. There is no acknowledgment and no handshake, since it is a connectionless protocol.
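To make the connection-oriented/connectionless contrast concrete, the short Python sketch below shows both socket APIs side by side (the loopback addresses and ports are hypothetical, and the TCP connect assumes a listener is running there):

```python
import socket

# TCP: connection-oriented; the three-way handshake (SYN, SYN-ACK, ACK)
# happens inside connect() before any user data can be sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9000))          # hypothetical listener
tcp.sendall(b"reliable, ordered byte stream")
tcp.close()

# UDP: connectionless; no handshake, each datagram is independent and
# delivery is not guaranteed.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"independent datagram", ("127.0.0.1", 9001))
udp.close()
```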
1.4. TCP/IP Application Protocols
FTP (File Transfer Protocol), HTTP (Hypertext Transfer Protocol), NNTP (Network News Transfer Protocol), remote login (rlogin), Telnet and the X Window System depend on TCP to guarantee the correct and orderly delivery of data across the network.
SNMP sends traffic through UDP because of its relative simplicity and low overhead. When NFS (Network File System) runs over UDP, the RPC implementation must provide its own guarantees of correctness; when NFS runs over TCP, the RPC layer can depend on TCP to provide this kind of correctness.
DNS uses both UDP and TCP: it uses UDP to carry simple queries and responses, but depends on TCP to guarantee the correct and orderly delivery of large amounts of bulk data (e.g. transfers of entire zone configurations) across the network.

II. MATERIAL AND METHODOLOGY
Network performance can be measured with many metrics. The following sections give a brief description of a few of these metrics and of the simulation setup used in this paper.

2.1 Performance metrics
The performance of any system needs to be evaluated against certain criteria, which then form the basis of the performance assessment; such parameters are known as performance metrics [4], [5], [6]. The different metrics used to evaluate network performance are described below:

2.1.1 Throughput
Throughput is the measure of how fast we can actually send data through the network: the number of packets transmitted through the network per unit of time. It is desirable to have a network with high throughput.

Throughput = P_R / (t_sp - t_st)

where P_R is the received packet size, t_st is the start time and t_sp is the stop time. Unit: kbps (kilobits per second).

2.1.2 Link Throughput
In computer technology, throughput is the amount of work that a computer can do in a given time period. In communication networks, such as Ethernet or packet radio, network throughput is the average rate of successful message delivery over a communication channel.

Transmission time = file size / bandwidth (sec)
Throughput = file size / transmission time (bps)

The link throughput, say from node S to D, is given by:

Throughput = N_b / t

where N_b is the number of bits transmitted from node S to D and t is the observation duration.

2.1.3 Packet Delivery Ratio (PDR)
It is the ratio of the number of packets received at the destination to the number of packets generated at the source. A network should work to attain a high PDR in order to have better performance; PDR shows the amount of reliability offered by the network. The greater the packet delivery ratio, the better the performance of the protocol.

PDR = (N_R / N_G) * 100

where N_R is the number of received packets and N_G is the number of generated packets. Unit: percentage ratio (%).

2.1.4 Average End to End Delay (AED)
This is the average time delay consumed by data packets to propagate from source to destination. This delay includes the total time of
transmission i.e. propagation time, queuing time, route establishment time etc. A network with minimum AED offers better speed of
communication.
AED = t_PR - t_PS

t_PR: packet receive time; t_PS: packet send time. Unit: milliseconds (ms).
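These three metrics can be computed directly from per-packet send and receive records. Below is a minimal Python sketch applying the formulas of Sections 2.1.1, 2.1.3 and 2.1.4; the record format and function names are illustrative, not taken from the paper.

```python
# Illustrative sketch: throughput, PDR and average end-to-end delay
# from per-packet records. The record layout is an assumption.

def metrics(sent, received):
    """sent: list of (packet_id, send_time);
    received: list of (packet_id, recv_time, size_bits)."""
    send_time = dict(sent)
    total_bits = sum(size for _, _, size in received)

    t_start = min(t for _, t in sent)            # t_st
    t_stop = max(t for _, t, _ in received)      # t_sp
    throughput_kbps = total_bits / (t_stop - t_start) / 1e3   # Section 2.1.1

    pdr = len(received) / len(sent) * 100                     # Section 2.1.3

    delays = [t_rx - send_time[pid] for pid, t_rx, _ in received]
    aed_ms = 1e3 * sum(delays) / len(delays)                  # Section 2.1.4
    return throughput_kbps, pdr, aed_ms

# Two packets of 8000 bits each -> 100 kbps, 100 % PDR, 55 ms AED
print(metrics([(1, 0.0), (2, 0.1)], [(1, 0.05, 8000), (2, 0.16, 8000)]))
```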
2.2 Simulation
Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2
[7], [8], [9]. Network Simulator, Version-2, widely known as NS2, is simply an event-driven simulation tool that has proved useful in
studying the dynamic nature of communication networks. Figures 3 and 4 show the simple network topologies used for the experiments carried out in this paper.


Figure3. A sample network topology: TCP

Figure4. A sample network topology: UDP


In Figures 3 and 4, N_1, N_2, ..., N_N are nodes. FTP and CBR are applications running over TCP and UDP respectively. N_x and N_y are the nodes of the bottleneck link, and N_y is the final destination of the packets generated from all sources. A corresponding sender agent has to be attached to each sending node; a TCPSink agent is attached to the TCP destination, and a Null agent to the UDP receiver.



III. RESULTS AND CHARTS
A systematic study and analysis of wired networks is carried out by executing ns2 and AWK scripts. Comparison is made for link throughput and packet delivery ratio. The following tables and graphs were obtained by executing AWK scripts [10] on the ns2 trace files. In the first scenario, the number of nodes was varied and the corresponding changes in throughput were observed. In the second scenario the number of nodes was again varied and the corresponding PDR was noted down. In the last scenario the bandwidth was the varying factor and the resulting throughputs were tabulated.
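The AWK post-processing can be illustrated with a short Python equivalent. The classic ns2 wired trace layout is assumed here, and the file name and node ids in the usage line are hypothetical.

```python
# Minimal sketch of what the AWK scripts do, rewritten in Python.
# Assumed classic ns2 wired trace format (an assumption about these files):
#   event time from_node to_node pkt_type size flags fid src dst seq pkt_id

def link_throughput(trace_file, from_node, to_node):
    bits, t_first, t_last = 0, None, None
    with open(trace_file) as f:
        for line in f:
            p = line.split()
            if len(p) < 12:
                continue
            event, t, size = p[0], float(p[1]), int(p[5])
            if event == 'r' and p[2] == from_node and p[3] == to_node:
                bits += 8 * size                 # count received bits N_b
                t_first = t if t_first is None else t_first
                t_last = t
    return bits / (t_last - t_first) / 1e3       # kbps, N_b / t

# Example (hypothetical trace file and bottleneck node ids):
# print(link_throughput('out.tr', '2', '3'))
```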

Table1. Node Vs Bottleneck-link throughput

Nodes    tcp/tp (kbps)    udp/tp (kbps)
5        4640.95          2188.05
25       4613.26          4882.98
50       4733.14          4882.98
100      4745.9           4882.98
200      4750.14          4882.98
300      4744.03          4882.98







Table2. Nodes Vs packet delivery ratio

Nodes    tcp/PDR (%)    udp/PDR (%)
5        99.81          100
10       99.64          100
20       99.21          77.92
30       98.66          68.61
40       98.06          63.96
50       97.46          61.17
100      95.62          55.58



Table3. Bandwidth Vs Bottleneck-link throughput

Bandwidth (Mb)    tcp/tp (kbps)    udp/tp (kbps)
0.5               487.809          488.314
1                 954.95           976.596
1.5               1440.2           1464.88
2                 1926.27          1953.16
2.5               2411.64          2187.55
3                 2897.37          2187.57
3.5               3396.37          2187.58
4                 3880.92          2187.59



Figure5. Node Vs Bottleneck link-throughput

[Figure 5 axes: Link Throughput (kbps) vs Number of Nodes; series tcp/tp and udp/tp.]

[Figure 6 axes: Packet Delivery Ratio (%) vs Number of Nodes; series pdr/tcp and pdr/udp.]

Figure6. Node Vs Packet Delivery Ratio

Figure7. Bandwidth Vs Link Throughput
[Figure 7 axes: Link Throughput (kbps) vs Bandwidth (Mb); series tcp/tp and udp/tp.]

IV. CONCLUSION
Bottleneck-link throughput and packet delivery ratio have been calculated using ns2 and AWK scripts, with the number of nodes and the link bandwidth as the respective varying factors. The packet delivery ratio is much better for TCP than for UDP, and when the link bandwidth is varied TCP also shows better link throughput than UDP.
Depending on application requirements, one has to decide on the suitable protocol. This study can be extended to other traffic generators, namely exponential On/Off, Pareto On/Off and Traffic Trace. Moreover, the experiment can be carried out for wireless networks as future work.
ACKNOWLEDGEMENT
I would like to express my gratitude and appreciation to the International Journal of Engineering Research and General Science team, who gave me the opportunity to publish this report. Special thanks to the Nitte Meenakshi Institute of Technology, Bangalore, its management, the Computer Science HOD and all staff, whose stimulating suggestions and encouragement helped me to write this report. I would also like to acknowledge with much appreciation my family members, especially my husband Mr. Ajith and my sons Aadithya and Abhirama, for their cooperation and support.

REFERENCES:
[1] Soni Samprati, "Next Generation of Internet Protocol for TCP/IP Protocol Suite," International Journal of Scientific and Research Publications, Volume 2, Issue 6, June 2012.
[2] Santosh Kumar and Sonam Rai, "Survey on Transport Layer Protocols: TCP & UDP," International Journal of Computer Applications 46(7):20-25, May 2012.
[3] Fahim A. Ahmed Ghanem and Vilas M. Thakare, "Optimization of IPv4 Packet Headers," IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 1, No 2, January 2013.
[4] Yogesh Golhar, R. K. Krishna and Mahendra A. Gaikwad, "Implementation & Throughput Analysis of Perfect Difference Network (PDN) in Wired Environment," IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 1, No 1, January 2012.
[5] Performance Measurements and Metrics, http://webstaff.itn.liu.se/~davgu/tnk087/Fo_8.pdf.
[6] Andrew S. Tanenbaum, Computer Networks (textbook).
[7] Teerawat Issariyakul and Ekram Hossain, Introduction to Network Simulator NS2, Second Edition (textbook).
[8] Kevin Fall and Kannan Varadhan, The ns Manual.
[9] Ns simulator, Wikipedia.org.
[10] AWK scripts, http://wing.nitk.ac.in/resources/Awk.pdf












Design of Substrate Integrated Waveguide Antennas for Millimeter Wave
Applications
Y. Bharadwaja¹
¹Assistant Professor, Sree Vidyanikethan Engineering College, E-mail: bharadwaja502@gmail.com
Abstract: The paper presents a new concept in antenna design, whereby a photo-imageable thick-film process is used to integrate a waveguide antenna within a multilayer structure. This has yielded a very compact, high-performance antenna working at high millimeter-wave (mm-wave) frequencies, with a high degree of repeatability and reliability in antenna construction. Theoretical and experimental results for 70 GHz mm-wave integrated antennas, fabricated using the new technique, are presented. The antennas were formed from miniature slotted waveguide arrays using up to 18 layers of photo-imageable material. To enhance the electrical performance, a novel folded waveguide array was also investigated. The fabrication process is analyzed in detail and the critical issues involved in the fabrication cycle are discussed. The losses in the substrate integrated waveguide have been calculated. The performance of the new integrated antenna is compared to conventional metallic, air-filled waveguide antennas, and also to conventional microstrip antenna arrays operating at the same frequencies.
Index Terms: Millimeter-wave antenna arrays, substrate integrated waveguides (SIW), photo-imageable fabrication, slotted waveguide antenna arrays.
I. INTRODUCTION
Substrate integrated circuits (SICs) are a new concept for high-frequency electronics, which yields high performance from very compact planar circuits [1]. The basic idea behind the technique is that of integrating nonplanar 3-D structures within a multilayer circuit. However, existing integration techniques using precision machining cannot economically achieve the required precision for millimeter-wave (mm-wave) components, particularly for mass production. In the last few years a number of papers based on substrate integrated circuits and waveguides (SICs, SIWs) on planar microstrip substrates have appeared in the literature, but only for frequencies up to X-band. Most of the integrated waveguides that have been reported used VIA-fenced sidewalls, realized using relatively elementary fabrication techniques. With these techniques the diameter and spacing of the individual VIAs will affect the loss and bandwidth of the waveguide [2], [3]. Such integrated structures cannot be regarded as homogeneous waveguide, but will be similar in performance to an artificial periodic waveguide.
However, there have been a number of successful attempts to form substrate integrated waveguides using micro-machining techniques. McGrath et al. [4] formed an air-filled waveguide channel in silicon, and reported measured losses of around 0.02 dB/mm at 100 GHz. In [6], Digby et al. used a different micro-machining process to form a substrate integrated 100 GHz air-filled waveguide. Their measured loss, around 0.05 dB/mm at 100 GHz, was slightly higher than that of McGrath, but the authors suggested that the high attenuation might have been due to some of the waveguide walls being only one skin depth thick. A further variation of the air-filled SIW structure was reported by Collins et al. [5], who used a micro-machining approach to form the waveguide trough on one substrate; this was combined with a second substrate using a snap-together technique to form the final enclosed waveguide. This was a somewhat simpler fabrication approach than that used by McGrath and by Digby, and this was reflected in the higher measured attenuation of around 0.2 dB/mm at 100 GHz. The key differences between the present work and that of authors using micro-machining are that a very low cost technique was used to form dielectric-filled waveguides, leading to structures that were inherently robust and cheap.
The primary objective of the present paper is to provide an in-depth analytical investigation of the fabrication techniques that could be employed to integrate novel 3-D waveguide structures efficiently within ceramic circuit modules. However, the necessary inclusion of dielectric within the waveguide restricts the use of these circuits above 100 GHz, this frequency limit being mainly decided by the loss tangent of the integrated substrate material.
This paper describes the techniques for integrating mm-wave antennas within ceramic modules using a relatively new process, namely photo-imageable thick-film [7], [8]. Since this type of process enables the circuit structure to be built up layer-by-layer, it is ideal for forming 3-D structures. The work described in the paper demonstrates the viability and potential of photo-imageable fabrication technology through the measured, practical performance of novel mm-wave integrated antenna arrays working around 70 GHz.

II. FABRICATION METHODOLOGY
Photo-imageable thick-film conductors and dielectrics contain a photo vehicle within the pastes. This enables layers of conductor or dielectric to be printed and then directly imaged using UV radiation. The system enables fine lines and gaps to be fabricated with dimensions down to 10 μm. Moreover, because structures can be built up layer-by-layer, it is easy to provide interconnections between planar and non-planar circuits within a single ceramic circuit. This scheme can be used to design low-cost, high-performance passive circuits such as resonators, filters, power dividers, etc. [9], [10]. A further advantage is that the technology is compatible with many fabrication processes such as thin film, HTCC, and LTCC. A particular advantage of photo-imageable materials for the work being reported here is that the sidewalls of the integrated waveguides can be made from continuous metal, rather than using a VIA fence. The process of making such a sidewall is simply to develop channels in the dielectric layer, and then to subsequently fill them with metal.
a) Photo-Imageable Fabrication
The process consists of four main steps, as shown in Fig. 1.
Step 1) The thick-film paste is screen printed on an alumina substrate, leveled at room temperature and dried at 80 °C for 45 min.
Step 2) The printed paste is exposed to UV through photo-patterned chrome masks; in the exposed region the paste polymerizes and hardens.
Step 3) The unexposed material is removed by spraying the circuit with developer, and finally dried with an air spray.
Step 4) The circuit is fired at 850 °C for 60 min to burn off the binders in the paste and leave the final pattern of conductor or dielectric.

Unlike a conventional metal etching process, photo-imageable fabrication does not require the intermediate photoresist spinning and developing steps, as the photo vehicle required for UV exposure and hardening is contained in the material itself. The advantage of this fabrication route is its ability to achieve the fine geometries demanded by mm-wave circuits.
b) Waveguide Integration
The 3-D waveguide structures were built up, layer-by-layer, using the photo-imageable thick-film process. The layers were printed onto an alumina base to give rigidity to the final structure. A layer of silver conductor paste (Fodel 6778) was first printed onto the alumina to form the bottom broad wall of the waveguide [Fig. 2(a)]. Next, a layer of dielectric (Fodel QM44F) is screen printed, photo-imaged and fired to form vertical trenches. Conductor paste is then screen printed and photo-imaged to fill the trenches, so forming the sidewalls of the waveguide [Fig. 2(b)]. These last two steps were repeated a number of times to build up the required height of the waveguide. Finally, the top layer of conductor is printed, and radiating slots are photo-imaged and fired to form the top wall of the waveguide [Fig. 2(c)]. A schematic view of the cross-section of the integrated waveguide is shown in Fig. 2(d). It was found necessary to hold the registration of intermediate layers to within 1 μm, which required a sophisticated mask aligner for exposing each layer. The uniformity of the sidewalls is a critical factor in the integration process, as nonuniform sidewalls will lead to significant loss in the structure.
c) Fabrication Analysis
1) Fabrication Quality: Clearly, with antennas operating at very high frequencies, and consequently very small wavelengths, the quality and accuracy of the fabrication process is a key issue. In our case it was important that the radiating slots
Fig. 1. Steps in a photo-imageable process: (a) printing, (b) exposure, (c) developing, (d) firing.

Fig. 2. Steps in a waveguide integration process: printing of (a) bottom wall, (b) side walls, (c) top wall and radiating slots; (d) cross-sectional view of the integrated waveguide on an alumina substrate.


were formed with precise dimensions and high quality edges. To demonstrate the quality of the fabrication process, an enlarged version of one of the radiating slots is shown in Fig. 3(a). To further indicate the quality achievable with the photo-imageable process, 50-Ω GSG coplanar probe pads with 30 μm spacing between the signal line and ground pads are shown in Fig. 3(b), and a fabricated miniature branch-line coupler in Fig. 3(c).
2) Fabrication Issues:
a) Shrinkage: The main problem encountered with the photo-imageable fabrication process was shrinkage of the conductors and dielectrics during firing. In particular, the amount of shrinkage was different for the conductors and the dielectrics, and the degree of shrinkage was found to vary with the area of conductor or dielectric being fired. The rates of shrinkage for conductors, dielectrics, and circuits of different geometries and areas are given in Table I. The significance of these data is that shrinkage is not uniform throughout the fabrication cycle and therefore cannot simply be taken into account at the design stage. Shrinkage was a serious issue when trying to fill VIAs and trenches. An SEM picture of a trench filled with conductor at an intermediate stage in the waveguide fabrication is shown in Fig. 3(d): after firing, the inner conductor shrinks, creating spaces on either side of the wall.

Fig. 3. (a)-(c) Photographs showing the quality and capability of the fabrication process under careful control of the processing parameters. (d) SEM picture showing the shrunk conductor strip inside the trench after firing.

TABLE I
RATE OF SHRINKAGE ON CONDUCTORS, DIELECTRICS AND CIRCUITS OF DIFFERENT GEOMETRIES


It was found that the only way to overcome fabrication issues related to shrinkage is to carefully control the process. The fabrication parameters (development and exposure times) need to be refined for different layers, and for different circuit geometries. In the integration process described in this paper the shrinkage in VIAs and trenches was compensated in the Z-direction by printing extra conductor layers. Correct compensation in the X-Y plane was achieved by increasing the exposure time and decreasing the development time. To illustrate the effectiveness of this technique, Fig. 4(a) shows the trenches filled before compensation for shrinkage, and Fig. 4(b) shows the conductor-filled trenches after compensation.
In order to achieve the required degree of interlayer resolution, a Quintel Q7000 mask aligner was used. To achieve optimum resolution it was found that some care was needed in the choice of alignment marks, to ensure they were compatible with the mask aligner being used.


Fig. 4. Photographs of the conductor surface showing (a) shrinkage after firing, and (b) the result after optimizing the fabrication process.

Fig. 5. A dielectric layer of thickness 60 μm printed and dried without intermediate firing steps, showing cracks at the corners after firing: (a) track corner; (b) VIA corners.
b) Processing Time: In this study, 250-mesh stainless steel screens were used to print the dielectric, giving a post-firing thickness of around 15 μm. The conductor thickness, using 325-mesh screens for printing, was around 8 μm after firing. The total inner height of the integrated waveguide shown in Fig. 2 was 60 μm; this was formed from four layers of dielectric. In all, eight layers of conductor were needed, including trench filling and compensating for shrinkage. Using this technique, the time required to finish a layer was one day, the most time-consuming aspect being the firing and cooling in a single-chamber furnace, so integrating a waveguide section of 60 μm occupied around one and a half weeks. Hence, it was attractive to try to save processing time by printing and drying a number of layers and then co-firing in one step. Our experience was that such circuits would develop cracks at the corners after firing. The results of an unsuccessful attempt to build a 60 μm-thick dielectric prior to firing are shown in Fig. 5, where significant cracking is evident.

III. INTEGRATED ANTENNA DESIGNS
This section discusses the design, simulation and theoretical analysis of two different antenna topologies operating around 75 GHz and integrated into a single ceramic structure. Two structures were considered:
1) a simple substrate integrated waveguide antenna consisting of a 2x4 array of slots;
2) a novel folded waveguide antenna array.

Fig. 6. Schematic showing the antenna structure.
A. Simple Integrated Waveguide Antenna Arrays
1) Antenna Structure: Fig. 6 shows the structure of a simple integrated waveguide antenna, consisting of a 2x4 array of radiating slots. The feed consisted of a 50-Ω microstrip line with a tapered transition to provide impedance matching between the microstrip and the integrated waveguide section [11]. The input power is split equally into two linear arrays of four slots each, using a conventional side-fed H-plane divider [12], in which the separation between the two inductive walls can be adjusted for maximum coupling into both sections. This feeding technique introduces a phase difference of 180° between the two linear arrays. Hence, the slots on either side of the dividing wall were positioned on opposite sides of their respective waveguides to give a further 180° phase difference. This ensured that all eight slots of the antenna radiated in phase.

The end slots were positioned a distance of λg/4 from the shorted ends of the waveguide, as shown in Fig. 6, with the remaining slots separated by λg/2, so that all the slots would be excited by maxima in the standing-wave pattern. The slot positions thus ensured maximum radiation from the antenna.
The slot lengths were λ/2 to ensure good radiation without causing end-to-end mutual coupling between adjacent slots. The physical length l of the slots can be calculated from

l = λ/2 = λ0 / (2·√((εr + 1)/2))        (1)

where λ0 is the free-space wavelength and εr is the relative permittivity of the dielectric.
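As a quick numerical check of the formula above (a sketch only: the permittivity value is illustrative, since the paper does not quote εr at this point):

```python
# Sketch: physical slot length from the formula above.
# eps_r = 7.0 is an assumed, typical thick-film value, not the paper's.
import math

def slot_length_mm(f_ghz, eps_r):
    lam0_mm = 300.0 / f_ghz          # free-space wavelength in mm
    eps_eff = (eps_r + 1) / 2        # slot radiates into half air, half dielectric
    return lam0_mm / (2 * math.sqrt(eps_eff))

print(round(slot_length_mm(76.0, 7.0), 3))   # -> 0.987, i.e. ~1 mm at 76 GHz
```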
2) Antenna Dimensions: The dielectric waveguide antenna array with radiating slots was designed using conventional dielectric waveguide theory [13], [14]. The design was then simulated and optimized using the 3-D electromagnetic simulation software HFSS to obtain maximum radiation. The optimized dimensions for a 2x3 slot array are shown in Fig. 7, where all dimensions are in millimeters. Simulation results are shown in Fig. 8 for a representative SIW antenna; it should be noted that the simulation was performed at 76 GHz using Hybridas HD1000 thick-film dielectric, whereas our SIW antennas were integrated using a similar but slightly different dielectric, namely Dupont QM44F, owing to the unavailability of the earlier paste in the market.
3) Experimental Results: The return loss and radiation pattern for the integrated waveguide 2x3 array are plotted in Fig. 9; the return loss shows a good match at the design frequency, and there is a well-defined radiation pattern, with the cross-polar level more than 20 dB down on the copolar level.

Fig. 7. Schematic showing the integrated slotted waveguide antenna dimensions; all dimensions given here are in millimeters.

Fig. 8. (a) HFSS model, (b) field pattern, (c) return loss, (d) radiation pattern in the E-plane and H-plane, for the SIW antenna optimized for 76 GHz, obtained from simulation.

B. Folded Waveguide Antenna Arrays
The concept of an antenna array using a folded waveguide was proposed to extend the substrate integration strategy to lower frequencies [15]. The TE10 mode in a folded waveguide resembles that of a conventional rectangular waveguide. As a result of the folding, as shown in Fig. 10, the width (a) of the guide is reduced by 50% while the height (b) is doubled. However, the height has little effect on the propagation characteristics and can be made as small as required, so the overall effect is to reduce the substrate area occupied by the antenna.

Fig. 9. Experimental results for a 2x3 antenna array: (a) return loss, (b) radiation pattern at 73.5 GHz.

Fig. 10. Folded waveguide antennas: basic concept.

Fig. 11. Design dimensions of a four-slot folded waveguide antenna; all dimensions are in millimeters.


Fig. 12. Integrated folded waveguide antenna: (a) top conductor layer, (b) intermediate conductor layer.
1) Antenna Structure and Dimensions: The dimensions of a 74-GHz, 4-slot folded waveguide antenna were optimized using HFSS and the results are shown in Fig. 11.
The antenna was fabricated using the photo-imageable process described previously. The photographs in Fig. 12(a) and (b) show the top and the intermediate layers during the fabrication of a folded waveguide antenna. As well as showing the structure of the antenna, these photographs are a further indication of the quality of the photo-imageable thick-film process. It can be seen from Fig. 13 that the measured return loss of the back-to-back transition is very good, greater than 20 dB, in the vicinity of the working frequency, showing that the transition was behaving as expected.


2) Experimental Results: The return loss and radiation pattern for a 4-slot folded waveguide antenna are shown in Fig. 14. The antenna shows a good cross-polar level and a good match close to the resonant frequency.


Fig. 13. Measured return loss of a back-to-back folded waveguide transition.

Fig. 14. Folded waveguide antenna: (a) return loss, (b) radiation pattern.
IV. INTEGRATED WAVEGUIDE LOSS ANALYSIS
Since the antennas were fabricated using integrated waveguides, it was important to gain some insight into the practical losses of the waveguide. To achieve this, waveguide lines of different lengths, but with the same cross-section, were fabricated and the line loss measured using a vector network analyzer (HP 8510 XF), which had previously been calibrated using an on-wafer calibration kit. The ends of the waveguide sections were tapered to connect with the coplanar probing pads; each tapered section had an axial length of 2 mm. The return loss and insertion loss of integrated waveguides of length 1.9 mm and width 1.266 mm are plotted in Fig. 15.
The results show that the integrated waveguide structure, including the tapered sections, has relatively low insertion loss up to 100 GHz, with a value of ~2 dB at the antenna design frequency (74 GHz). The losses tend to increase with frequency due to increasing dielectric loss and conductor surface losses. The losses in the tapered feeds, and also the probe-circuit mismatch losses, were de-embedded by computing the difference in the insertion losses of two waveguide structures of different lengths. After de-embedding, the magnitude of the loss in the SIW was calculated; the loss is plotted as a function of frequency in Fig. 16, and Fig. 17 shows the wave number and guided wavelength deduced from the measured phase data. The loss was calculated to be ~1 dB/λg at 74 GHz. It can be seen that the losses are relatively small, which indicates that the integrated waveguide structure is a usable interconnection technology up to high millimeter-wave frequencies.
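The two-length de-embedding step just described can be sketched in a few lines; the insertion-loss numbers in the example are invented for illustration only.

```python
# Sketch of two-length de-embedding: the two lines share identical taper and
# probe-pad sections, so subtracting their insertion losses isolates the
# per-length loss of the waveguide itself. Values below are made up.

def loss_per_mm(il1_db, len1_mm, il2_db, len2_mm):
    return (il2_db - il1_db) / (len2_mm - len1_mm)

print(loss_per_mm(2.0, 1.9, 3.2, 3.9))   # -> 0.6 dB/mm (illustrative)
```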
Similar loss measurements were carried out for folded waveguides, and it was found that the losses increased by around 20%. This relatively small increase in loss, compared with the simple unfolded structure, indicates that the folded waveguide concept is viable in practical situations where substrate area is at a premium. Fig. 18 shows images of the folded waveguide structures used for the insertion loss measurements.


Fig. 15. S-parameters plotted for a substrate integrated waveguide and a simple microstrip line.

Fig. 16. Loss plotted in dB/mm and dB/λg for a substrate integrated waveguide of width 1.26 mm.

Fig. 17. The wave number and guided wavelength of a substrate integrated waveguide plotted against frequency.
V. REPEATABILITY AND TOLERANCE ANALYSIS OF THE THICK-FILM PROCESS
This section details the repeatability and tolerances involved in the thick-film fabrication process. The frequency response of the substrate integrated waveguide antenna fabricated on three different supporting ceramic substrates is shown in Fig. 19. The plot shows nearly identical results for the same structure, even though each antenna went through separate printing and firing processes, illustrating the repeatability of the thick-film process in constructing substrate integrated waveguide structures. Table II gives the 3-D tolerances measured on the critical SIW dimensions. The percentage values shown in the table were calculated by measuring the dimensions of the fabricated geometry after the process modifications to account for shrinkage. The results indicate that the geometrical dimensions can be achieved to within 5% under a well-controlled process.


Fig. 18. Folded waveguide sections of different lengths for insertion loss measurement (dimensions in millimeters).

Fig. 19. The measured frequency response of three identical SIW antennas, illustrating the repeatability of thick-film processing.
TABLE II
THREE-DIMENSIONAL TOLERANCES MEASURED ON THE CRITICAL DIMENSIONS OF SIW AND SIW ANTENNAS AFTER PROCESS
MODIFICATION

TABLE III
PERFORMANCE COMPARISON TABLE FOR SIW AND A CONVENTIONAL METALLIC ANTENNA ARRAY AT 74 GHZ

VI. ANALYSIS OF INTEGRATED WAVEGUIDE PERFORMANCE
The primary aim of the current study was to establish the potential of photo-imageable thick-film technology for fabricating miniature mm-wave components. An antenna, using novel techniques, was chosen for the investigation because it was relatively demanding in terms of the required quality of fabrication and also because of the small dimensions that were needed. A further benefit of choosing an antenna was that performance data were available in the literature [16] for antennas fabricated using other technologies, against which the performance of the integrated substrate approach could be compared.
Obviously, it was important to obtain some indication of the efficiency of the SIW antenna in comparison with the more conventional microstrip patch array at mm-wave frequencies. For this efficiency analysis, the total loss (dielectric and conductor) for a section of waveguide is compared to that of an equivalent microstrip line. Direct comparisons are difficult, because microstrip interconnections normally have an impedance of 50 Ω, whereas waveguide has a somewhat higher impedance. However, if we compare microstrip having the same overall dimensions as the integrated waveguide, i.e., occupying the same substrate area, then we find that the microstrip has a loss around 50% higher than that of the integrated waveguide. Moreover, for an array giving similar radiation performance, the total area of the substrate integrated waveguide antenna will be ~1/10 of that

occupied by a microstrip [16]. Therefore, the substrate integrated waveguide structure offers an advantage in terms of reduced surface area and efficiency that will be significant for highly integrated millimeter-wave circuits, where substrate area is at a premium.
The performance of the three-slot substrate integrated waveguide antenna has been compared with that of a conventional metallic air-filled waveguide antenna, as shown by the data in Table III. In this table, the gain for a conventional metallic waveguide antenna was calculated from [17] and the total loss from [18]. The minimum physical area was calculated for both antennas; for the SIW the physical area was reduced by ~85% compared to the air-filled metallic waveguide antenna.
VII. CONCLUSION
The results have demonstrated that photo-imageable thick-film technology is a viable approach for the fabrication of circuits working at high millimeter-wave frequencies, offering both low-loss interconnections and the potential to realize fine circuit geometries. The techniques for using the technology to fabricate 3-D integrated waveguides within a planar circuit proved successful, and led to the development of a high-performance, miniature antenna working at 74 GHz. The technique could be extended to LTCC, which would permit parallel processing of the layers and avoid the need for time-consuming sequential processing of each layer.
REFERENCES:
[1] W. Menzel and J. Kassner, "Millimeter-wave 3-D integration techniques using LTCC and related multi-layer circuits," in Proc. 30th Eur. Microwave Conf., Paris, France, 2000, pp. 33-53.
[2] D. Deslandes and K. Wu, "Design consideration and performance analysis of substrate integrated waveguide components," in Eur. Microw. Conf., Milan, Italy, Sep. 2002, pp. 881-884.
[3] Y. Cassivi, L. Perregrini, P. Arcioni, M. Bressan, K. Wu, and G. Conciauro, "Dispersion characteristics of substrate integrated rectangular waveguide," IEEE Microw. Wireless Compon. Lett., vol. 12, no. 9, pp. 333-335, Sep. 2002.
[4] W. R. McGrath, C. Walker, M. Yap, and Y.-C. Tai, "Silicon micromachined waveguides for millimeter-wave and submillimeter-wave frequencies," IEEE Microw. Guided Wave Lett., vol. 3, no. 3, pp. 61-63, Mar. 1993.
[5] C. E. Collins et al., "A new micro-machined millimeter-wave and terahertz snap-together rectangular waveguide technology," IEEE Microw. Guided Wave Lett., vol. 9, no. 2, pp. 63-65, Feb. 1999.
[6] J. W. Digby et al., "Fabrication and characterization of micromachined rectangular waveguide components for use at millimeter-wave and terahertz frequencies," IEEE Trans. Microwave Theory Tech., vol. 48, no. 8, pp. 1293-1302, Aug. 2000.
[7] M. Henry, C. E. Free, B. S. Izquerido, J. Batchelor, and P. Young, "Photo-imageable thick-film circuits up to 100 GHz," in Proc. 39th Int. Symp. Microelectron. IMAPS, San Diego, CA, Nov. 2006, pp. 230-236.
[8] D. Stephens, P. R. Young, and I. D. Robertson, "Millimeter-wave substrate integrated waveguides and filters in photoimageable thick-film technology," IEEE Trans. Microwave Theory Tech., vol. 53, no. 12, pp. 3822-3838, Dec. 2005.
[9] C. Y. Chang and W. C. Hsu, "Photonic bandgap dielectric waveguide filter," IEEE Microw. Wireless Compon. Lett., vol. 12, no. 4, pp. 137-139, Apr. 2002.
[10] Y. Cassivi, D. Deslandes, and K. Wu, "Substrate integrated waveguide directional couplers," presented at the Asia-Pacific Conf., Kyoto, Japan, Nov. 2002.
[11] D. Deslandes et al., "Integrated microstrip and rectangular waveguide in planar form," IEEE Microw. Wireless Compon. Lett., vol. 11, no. 2, pp. 68-70, Feb. 2001.
[12] K. Song, Y. Fan, and Y. Zhang, "Design of low-profile millimeter-wave substrate integrated waveguide power divider/combiner," Int. J. Infrared Millimeter Waves, vol. 28, no. 6, pp. 473-478, 2007.
[13] R. M. Knox, "Dielectric waveguide microwave integrated circuits: an overview," IEEE Trans. Microwave Theory Tech., vol. 24, no. 11, pp. 806-814, Nov. 1976.
[14] H. Jacobs, G. Novick, G. M. Locascio, and M. M. Chrepta, "Measurement of guide wavelength in rectangular dielectric waveguides," IEEE Trans. Microwave Theory Tech., vol. 24, no. 11, pp. 815-820, Nov. 1976.
[15] N. Grigoropoulos and P. R. Young, "Compact folded waveguide," in Proc. 34th Eur. Microwave Conf., Amsterdam, The Netherlands, 2004, pp. 973-976.
[16] F. Kolak and C. Eswarappa, "A low profile 77 GHz three beam antenna for automotive radar," IEEE MTT-S Dig., vol. 2, pp. 1107-1110, 2001.
[17] C. A. Balanis, Antenna Theory: Analysis and Design, 2nd ed. New York: Wiley.
[18] E. V. D. Glazier and H. R. L. Lamont, The Services Textbook of Radio, vol. 5, Transmission and Propagation. London, U.K.: Her Majesty's Stationery Office (HMSO), 1958.



Risk Factor Analysis to Patient Based on Fuzzy Logic Control System
M. Mayilvaganan¹, K. Rajeswari²
¹Associate Professor, Department of Computer Science, PSG College of Arts and Science, Coimbatore, Tamil Nadu, India
²Assistant Professor, Department of Computer Science, Tiruppur Kumaran College for Women, Tiruppur, Tamil Nadu, India
E-mail- vkpani55@gmail.com

Abstract: In this paper a medical fuzzy-data approach is introduced in order to help users provide accurate information where there is imprecision. Imprecision in data represents vague values (like the words used in human conversation) or uncertainty in the available information required for decision making; fuzzy logic handles this uncertainty for critical risks to human health. The paper diagnoses health risk related to blood pressure, pulse rate and kidney function. The confusing nature of the symptoms makes it difficult for physicians using psychometric assessment tools alone to determine the risk of the disease. This paper describes research results in the development of a fuzzy-driven system to determine patients' health risk levels. The system is implemented and simulated using the MATLAB fuzzy toolbox.
Keywords: Fuzzy logic control system, Risk analysis, Sugeno-type Fuzzy Inference System, MATLAB Tool, ANFIS, Defuzzification
INTRODUCTION
In the field of medicine, the use of computers in diagnosis, treatment of illnesses and patient follow-up has increased greatly. Although these fields have very high complexity and uncertainty, intelligent systems such as fuzzy logic, artificial neural networks and genetic algorithms have been developed for them. In other words, there exists no strict boundary between what is healthy and what is diseased, so the distinction is uncertain and vague [2]. Having so many factors to analyze when diagnosing a patient's disease makes the physician's job difficult, so experts require an accurate tool that considers these risk factors and shows a definite result from uncertain terms. Motivated by the need for such a tool, in this study we designed an expert system, based on fuzzy logic, to diagnose heart disease. The fuzzy control system that performs the diagnosis has been implemented in the MATLAB tool. This paper introduces a fuzzy control system with a fuzzy rule base to analyse the risk factor of patient health, with the rules viewed through a surface view.
FUZZY INFERENCE SYSTEM
In this study we present a fuzzy control system for diagnosing risk factors, in which the blood pressure value, pulse rate and kidney function are the parameters used to determine risk through fuzzy rules. A typical FLC architecture comprises four principal components: a fuzzifier, a fuzzy rule base, an inference engine, and a defuzzifier. In the fuzzy inference process, the blood pressure value, pulse rate and kidney function value are the inputs transmitted for decision making on the basis of the patterns discerned. The process also involves all the pieces described under Membership Functions and If-Then Rules.
METHODOLOGY BACKGROUND
INPUT DATA
Medical diagnosis is a complicated task that requires operating accurately and efficiently. Complicated databases that support uncertain information are called fuzzy databases [7], [8]. Neuro-adaptive learning techniques make it possible to learn information about a data set in order to model the operation in question. Using a given input/output data set, the toolbox function adaptive neuro-fuzzy inference system (ANFIS) constructs a fuzzy inference system (FIS) whose membership function parameters are adjusted using either a backpropagation algorithm alone or in combination with a least-squares method. The input linguistic variables are put into the measurement for the Sugeno membership function method, and the rule base (see Tables I, II and III) is entered into the tool as If...Then rules to analyse the patient's risk factor.
Kidney function was measured by several Glomerular Filtration Rate (GFR) classes: Normal, Problem-started GFR, Below GFR, Moderate GFR, Below Moderate GFR, Damage GFR and Kidney Failure. Blood pressure (BP) values are

also classified by different ranges such as Low normal, Low BP, Very Low BP, Extreme Low BP, Danger Low BP, Very
Danger Low BP, Danger too Low BP, Normal BP, High Normal BP, Border line BP, High BP, Very High BP, Extreme very
High BP, Very danger High BP. Pulse values are derived from systolic and diastolic Blood pressure value. Such Blood
pressure values to be analyzing to the kidney function for determine the risk factor.
TABLE I. Analysis of the variables in the rule base

| Kidney function [GFR] \ Blood pressure   | 60-40 (Very Danger Low BP) | 50-30 (Danger Too Low BP) | 120-80 (Normal BP) | 130-85 (High Normal BP) | 140-90 (Border Line BP) |
| Normal (>90)                             | Very Danger Low BP ++      | Low BP ++                 | Normal BP          | High Normal BP          | Border Line BP          |
| Below GFR (80-89) / Moderate GFR (45-59) | Very Danger Low BP +       | Low BP +                  |                    | High Normal BP +        |                         |
| Below Moderate GFR (30-44) / Damage GFR (15-29) | Very Danger Low BP  | Low BP                    |                    | High Normal BP ++       |                         |
| Kidney Failure (GFR<15)                  |                            |                           |                    |                         |                         |
TABLE II. Analysis of the variables in the rule base (contd.)

| Kidney function [GFR] \ Blood pressure   | 115-75 (Low Normal) | 100-65 (Low BP) | 90-60 (Very Low BP) | 80-55 (Extreme Low BP) | 70-45 (Danger Low BP) |
| Normal (>90)                             | Low Normal ++       | Low BP ++       | Very Low BP         | Extreme Low BP ++      | Danger Low BP         |
| Below GFR (80-89) / Moderate GFR (45-59) | Low Normal +        | Low BP +        |                     | Extreme Low BP +       |                       |
| Below Moderate GFR (30-44) / Damage GFR (15-29) | Low Normal   | Low BP          |                     | Extreme Low BP         |                       |
| Kidney Failure (GFR<15)                  |                     |                 |                     |                        |                       |













SUGENO FIS METHOD
Adaptive neuro-fuzzy inference systems (ANFIS) represent Sugeno and Tsukamoto fuzzy models. A typical rule in a Sugeno fuzzy model has the form: If Input 1 = x and Input 2 = y, then Output z = ax + by + c.
For a zero-order Sugeno model, the output level z is a constant (a = b = 0). The output level z_i of each rule is weighted by the firing strength w_i of the rule.
A typical (generalized bell) membership function is given by

μ_A(x) = 1 / (1 + |(x - c_i) / a_i|^(2b_i))        (1)
where the parameters a_i, b_i and c_i are referred to as premise parameters. Every node in this layer is a fixed node whose output is produced from all the incoming signals for the given parameters. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths; these outputs are called normalized firing strengths. The overall output is computed as the weighted average of all incoming signals:

f = Σ_i (w_i · f_i) / Σ_i w_i        (2)
Through this, the Sugeno method gives a crisp output f generated from the fuzzy inputs. In the fuzzification step, a proper choice of process state variables and control variables is essential to characterize the operation of a fuzzy logic control system. In the decision-making logic, the If...Then rule base is followed to measure the membership values obtained. Finally, defuzzification combines the fuzzy outputs of all the rules into one crisp value [2].
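A minimal Python sketch of equations (1) and (2) follows, assuming a first-order Sugeno model with min as the AND operator; the rule parameters and risk values are invented for illustration and are not the paper's MATLAB rule base.

```python
# Sketch of a first-order Sugeno evaluation: generalized-bell membership
# grades (eq. 1) give firing strengths, and the crisp output is their
# weighted average (eq. 2). All parameter values are illustrative.

def gbell(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))   # equation (1)

def sugeno(bp, pulse, rules):
    # each rule: (bp_mf_params, pulse_mf_params, (p, q, r)), z = p*bp + q*pulse + r
    ws, zs = [], []
    for (a1, b1, c1), (a2, b2, c2), (p, q, r) in rules:
        w = min(gbell(bp, a1, b1, c1), gbell(pulse, a2, b2, c2))  # AND = min
        ws.append(w)
        zs.append(p * bp + q * pulse + r)
    return sum(w * z for w, z in zip(ws, zs)) / sum(ws)           # equation (2)

rules = [((20, 2, 120), (15, 2, 70), (0.0, 0.0, 1.0)),   # "normal" -> low risk
         ((20, 2, 60),  (15, 2, 40), (0.0, 0.0, 9.0))]   # "low BP" -> high risk
print(round(sugeno(120, 70, rules), 2))                  # -> 1.1 (low risk)
```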
TABLE III. Analysis of the variables in the rule base (contd.)

| Kidney function [GFR] \ Blood pressure | 160-100 (High BP) | 180-110 (Very High BP) | 210-125 (Extreme Very High BP) | 240-140 (Very Danger High BP) |
| Normal (>90)                           | High BP ++        | Very High BP ++        | Extreme Very High BP           | Very Danger High BP           |
| Below GFR (80-89)                      |                   |                        | Extreme Very High BP +         | Very Danger High BP +         |
| Moderate GFR (45-59)                   | High BP +         | Very High BP +         | Extreme Very High BP ++        | Very Danger High BP ++        |
| Below Moderate GFR (30-44)             |                   |                        | Extreme Very High BP +++       | Very Danger High BP +++       |
| Damage GFR (15-29)                     | High BP           | Very High BP           | Extreme Very High BP ++++      | Very Danger High BP ++++      |
| Kidney Failure (GFR<15)                |                   |                        | Extreme Very High BP +++++     | Very Danger High BP +++++     |


Fig.1. Membership function of blood pressure

Fig.2. Final plot of the membership function for blood pressure
Figs. 1 and 2 show the membership functions of blood pressure, constructed to find the risk factor based on the rule-base inputs [4], [5].
INFERENCE ENGINE
The domain knowledge is represented by a set of facts about the current state of a patient. The inference engine compares each rule stored in the knowledge base with the facts contained in the database. When the IF (condition) part of a rule matches a fact, the rule is fired and its THEN (action) part is executed. The conditions check blood pressure (mf1), pulse value (mf2) and kidney function (mf3). The inference engine uses the system of rules to make decisions through the fuzzy logical operators and generates a single truth value that determines the outcome of the rules. In this way the rules emulate the human cognitive process and decision-making ability, and represent knowledge in a structured, homogeneous and modular way.
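A toy sketch of this match-and-fire cycle is shown below; the facts and the two rules are invented for illustration, whereas the actual system uses a 250-rule base and fuzzy truth values rather than crisp labels.

```python
# Toy version of the inference engine's match-and-fire cycle.
# The facts, rules and linguistic labels here are invented.

facts = {"blood_pressure": "normal_bp",      # mf1
         "pulse": "normal",                  # mf2
         "kidney_function": "moderate_gfr"}  # mf3

rules = [({"blood_pressure": "normal_bp", "kidney_function": "normal_gfr"},
          "low risk"),
         ({"blood_pressure": "normal_bp", "kidney_function": "moderate_gfr"},
          "medium risk")]

for condition, action in rules:
    if all(facts.get(k) == v for k, v in condition.items()):  # IF part matches
        print("fired:", action)                               # THEN part executes
```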
















Fig.3. Logic gates for finding the risk rate
As Fig. 3 describes, X and Y are the pressure values, whose combination is represented as S; the pulse values derived from the given pressure values are represented as S1; C and C1 are the carry outputs of the XOR-based calculation, which carry values forward to obtain the result; and Z is the kidney-function value, represented as X2. The pulse rate is analysed from the given systolic and diastolic values. Finally, the risk factor is analysed from the blood pressure, the pulse rate and the GFR rate of kidney function.
DEFUZZIFICATION
Defuzzification is the process of converting the final output of a fuzzy system to a crisp value. For decision-making purposes, the output fuzzy sets must be defuzzified to crisp values in the real-life domain; the process of defuzzification converts the conclusion of the inference mechanism into actual inputs for the process. The health-risk output determines the level of severity of risk given the input variables, so the fuzzy system provides an objective process for obtaining the risk level. After determining the fuzzy membership functions, a standard rule base was developed for the purposes of the study to generate rules for the inference engine; a total of 250 rules were generated, representing the three linguistically designed fuzzy inputs. The simulation of the fuzzy system was carried out with the MATLAB tool. The severity level is obtained as the output response for the input values (blood pressure = 120/80, pulse value = 40, kidney function = below moderate); new input values generate new risk output responses. The inputs can also be set explicitly using the edit field, which again produces a corresponding output consistent with the fuzzy rule base. Finally, the health risk is observed through the relationship between these attributes in the determination of risk levels, as shown in Fig. 4.


Fig.4 Plot of the surface view of health risk

The fuzzy system is used to obtain the severity level, which is the only output variable of the system. The risk output determines the level of severity of risk given the inputs.

RESULT AND DISCUSSION

The patient health risk was found from the given input linguistic variables of blood pressure, pulse rate and kidney function. The Sugeno FIS method was used to construct the membership functions assigned to the linguistic variables for the fuzzification process. If...Then rules and inference strategies were chosen for processing the rule base to determine the risk factor among blood pressure, kidney function and pulse rate by logical decision-making analysis. Through defuzzification, the fuzzy system provides an objective measure of the risk factor, and the risk determination can be inspected in the surface view produced by the simulation.
CONCLUSION

It can be concluded that the fuzzy system accurately predicts risk severity levels based on expert knowledge embedded as fuzzy rules and on the patient's stage retrieved from the given parameters. This approach contributes to medical decision making and to the development of computer-assisted diagnosis in the medical domain, identifying the patient's major risks earlier.

REFERENCES:
[1] Abraham, A., "Rule-based Expert Systems," Handbook of Measuring System Design, John Wiley & Sons, 909-919, 2005.
[2] Mahfouf, M., Abbod, M. F. and Linkens, D. A., "A survey of fuzzy logic monitoring and control utilization in medicine," Artificial Intelligence in Medicine 21, pp. 27-42, 2001.
[3] Agbonifo, Oluwatoyin C. and Ajayi, Adedoyin O., "Design of a Fuzzy Expert Based System for Diagnosis of Cattle Diseases," International Journal of Computer Applications & Information Technology.
[4] Ahmed M. Ibraham, Introduction to Applied Fuzzy Electronics, 1997.
[5] http://en.wikipedia.org/wiki/MATLAB.
[6] Adlassing, K. P., "Fuzzy set theory in medical diagnostics," IEEE Trans. on Systems, Man, and Cybernetics, Vol. SMC-16 (1986), 260-264.
[7] Seising, R., "A History of Medical Diagnosis Using Fuzzy Relations," Fuzziness in Finland '04, 1-5, 2004.
[8] Tomar, P. P. and Saxena, P. K., "Architecture for Medical Diagnosis Using Rule-Based Technique," First Int. Conf. on Interdisciplinary Research & Development, Thailand, 25.1-25.5, 2011.

A Smart Distribution Automation using Supervisory Control and Data
Acquisition with Advanced Metering Infrastructure and GPRS Technology
A. Merlin Sharmila¹, S. Savitha Raj¹
¹Assistant Professor, Department of ECE, Mahendra College of Engineering, Salem-636106, Tamil Nadu, India
E-mail- ece.coolrocks@gmail.com
Abstract: To realize some of the smart power grid goals for the distribution system of rural areas, a real-time, wireless, multi-object remote monitoring system for electrical equipment based on the GPRS network, with feeder automation based on Advanced Metering Infrastructure (AMI), is proposed. The grid uses Supervisory Control and Data Acquisition (SCADA) to monitor and control switches and protective devices. This will improve routine asset management, quality of service, operational efficiency, reliability, and security. The three parts of the system are integrated with advanced communication and measurement technology. As an added advantage over the existing system, the proposed methodology can monitor the operating situation and easily detect and locate feeder faults and breaker status. The information from the system helps in realizing advanced distribution operation, which includes improvements in power quality, loss detection and state estimation.

Keywords: ADVANCED METERING INFRASTRUCTURE (AMI), SUPERVISORY CONTROL AND DATA ACQUISITION (SCADA), SMART DISTRIBUTION GRID (SDG), DISTRIBUTION AUTOMATION (DA), GEOGRAPHY INFORMATION SYSTEM (GIS), GENERAL PACKET RADIO SERVICE (GPRS), ACCESS POINT NAME (APN), VIRTUAL PRIVATE NETWORK (VPN).
INTRODUCTION
Distribution systems face the customers directly; as the key to guaranteeing power supply quality and enhancing operating efficiency [1], they are the largest and most complex part of the entire electrical system. The productivity of the power system is rather low nowadays (about 55%, according to statistics from the USA), so massive fixed-asset investment is wasted [2], and more than 95% of the power outages suffered by consumers are due to the electrical power distribution system (excluding generation insufficiency) [3]. Therefore, advanced distribution automation should be developed to realize flexible load-demand characteristics and optimal asset management and utilization, through communication between the utility and terminal customers.
Substation automation is an integrated system that realizes real-time remote monitoring, coordination and control. Substation remote monitoring has become one of the key parts of substation automation because it takes advantage of wireless communication technology, which has several overwhelming advantages such as convenience, speed and low transmission cost. Furthermore, GPRS networking already covers the whole country and has become an important sustainable resource (for utilization and development).
Recently, the smart grid has become the focus topic of the power industry, and its network model can change the future of the power system. The smart grid includes the smart transmission grid and the smart distribution grid (SDG). The emerging smart grid system necessitates high-speed sensing of data from all the sensors on the system within a few power cycles. Advanced Metering Infrastructure is a simple illustration of a structure where all the meters on a grid must be able to provide the necessary information to the controlling (master) head end within a very short duration [3]. With AMI, the distribution wide-area measurement and control system constitutes the information exchange and integrated infrastructure [4]-[6].
The distribution system plays an important role in power systems. After many years of construction, most of the distribution system is equipped with SCADA; however, some long-distance distribution lines in rural areas cannot be monitored and controlled at all. Optical fiber communication also suits power systems because of its insensitivity to electromagnetic interference, but its high cost limits its usage across whole power systems [2]. Therefore, for these rural and long-distance distribution lines, a solution with another kind of communication should be applied.
In this study, a measurement and control system for the distribution system, using wireless communication technology based on the AMI system, is proposed to monitor and control long-distance feeders in rural areas and realize the management of the whole distribution system. Furthermore, it will shorten fault times, enhance the utilization rate of power system assets, and match the requirements of the smart distribution grid.

SDG AND AMI

The SDG gives us an integrated grid of all the new technologies emerging in the distribution network, forming a well-functioning distribution system. In the operation of the SDG, one-way communication is replaced by two-way communication: customers can know the real-time price and plan to use their own distributed generation to support themselves, or supply their spare

electrical power to the power system and charge for it during high-price periods, or decide to turn on electrical appliances during low-price periods.
The SDG requires high-speed communication to the customers on the system, so a two-way communication system is used to realize some of the SDG functions [3]. Since SCADA infrastructure is typically limited by cost, AMI is implemented instead.
AMI is the deployment of a metering solution with two-way communications that enables time-stamping of meter data, outage reporting, communication into the customer premises, service connect/disconnect, on-request reads, and other functions.
AMI systems provide a unique opportunity to gather more information on voltage, current, power, and outages across the distribution grid, enabling new and exciting applications.
In the proposed system, the data gathered from the AMI is used to monitor the operating status of the distribution feeder; if a fault occurs, the software detects its location and sends commands to switch off the relevant switches, and after the fault disappears, those switches can be switched on remotely. That is a task required to realize a smart distribution grid.

PROPOSED ARCHITECTURE OF AMI SYSTEM

The measurement and control system consists of three parts: the measurement and controlling device (M&C device), the communication network, and the data processing center. Fig. 1 shows the block diagram of the proposed system. There are two 10 kV feeders, each with two section switches (or reclosers), and a loop switch connects the two feeders together. The four section switches (S1-S4) and the loop switch (LS) are equipped with GPRS-communication FTUs, called GFTUs.
Fig.1 The architecture of the AMI system


MEASUREMENT AND CONTROL DEVICE
The measurement and controlling device consists of a GPRS communication module and an FTU; the GFTU is connected to switches, reclosers or breakers, and gathers data from meters or switches. The configuration of the GFTU is shown in Fig. 2.


Fig.2 The diagram of the GFTU
The microcontroller collects and packages the data of the switch, and then sends them to the control center over the GPRS network. The data collected include voltage, current, power factor and so on. In the other direction, the GFTU receives commands from the control center and switches the switch on or off.
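As an illustration, the kind of record a GFTU might package before handing it to the GPRS module could look like the sketch below; the field names and the JSON encoding are assumptions, since the paper does not define a message format.

```python
# Sketch of a GFTU measurement record packaged for the control center.
# Field names and JSON encoding are assumptions, not the paper's format.
import json, time

def package_switch_data(switch_id, voltage_v, current_a, power_factor, closed):
    record = {"switch": switch_id,
              "t": time.time(),          # timestamp for time-stamped meter data
              "voltage": voltage_v,
              "current": current_a,
              "pf": power_factor,
              "closed": closed}
    return json.dumps(record).encode()   # payload handed to the GPRS module

print(package_switch_data("S1", 10200.0, 85.3, 0.92, True))
```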

COMMUNICATION NETWORK

In the proposed system with AMI, there are two levels of communication. The first level is from switches or meters to the
GFTUs over RS-232 or RS-485. The second level is the communication between the GFTUs and the center. The GPRS communication
network includes the GPRS modules embedded in the GFTUs, the GPRS network itself, and the server.
In this system, an APN (Access Point Name) private tunnel together with VPN (virtual private network) encryption and
authentication is adopted at the control center. Each GFTU has a static IP address; it registers and sends data to the APN
(assigned by the mobile operator), and the data are then forwarded to the server. Because tunnel-exchange technology is used,
each user can be identified, and user data are transmitted through the public network with high security and speed. The scheme
of the GPRS network is shown in Fig.3.
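To make the second communication level concrete, the sketch below (an illustration only, not the deployed software; the host
address, port and frame layout are assumptions of ours) shows how a GFTU-side uploader might package one measurement record and
push it over the TCP link available once the GPRS module has attached to the private APN:

import java.io.DataOutputStream;
import java.net.Socket;

// Hypothetical GFTU-side uploader: one TCP frame per measurement record.
public class GftuUploader {
    private final String centerHost; // static address of the data processing center (assumed)
    private final int centerPort;    // listening port of the server (assumed)

    public GftuUploader(String centerHost, int centerPort) {
        this.centerHost = centerHost;
        this.centerPort = centerPort;
    }

    // Packages one switch measurement and sends it to the control center.
    public void send(int gftuId, double voltageKv, double currentA, boolean switchClosed)
            throws Exception {
        try (Socket socket = new Socket(centerHost, centerPort);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeInt(gftuId);           // which GFTU is reporting
            out.writeDouble(voltageKv);     // measured voltage
            out.writeDouble(currentA);      // measured current
            out.writeBoolean(switchClosed); // current switch status
            out.flush();
        }
    }
}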


Fig.3 Schemes of the GPRS network in the AMI system
PROCESSING SOFTWARE

The processing software includes a database and a GIS (Geographic Information System). The software gathers, processes and
transmits data to realize the following functions:
- Display the current operating situation of the feeder on screen;
- Fault detection, location, isolation and restoration in the distribution system.
In the system, data transmission between the switches and the GFTU, and between the GFTU and the center, is bidirectional. The
operator in the center monitors the operating situation of the feeder and also controls the switches in the feeder when a fault
occurs.

FAULT DETECTION AND LOCATION

The distribution feeder data enhance the outage management system and enable the fault-diagnosis capabilities of the
software, which not only improves outage restoration times but also supports more effective restoration procedures.
The GFTUs collect the operating information of the switches, such as voltage, current and switch status. If a fault occurs, the
CB switches off immediately. Receiving the current operating information from all GFTUs, the center can locate the fault
with the help of the data from every GFTU on the feeders, according to where the fault current is observed.
If it is a transient fault, the CB recloses successfully after several seconds; if not, it opens again. The actions of the
switches are recorded in order to decide which switches to open to isolate the fault, and to reconfigure the distribution
feeder so that the loss is kept low.
For example, as shown in Fig. 4, the two feeders form a hand-in-hand loop between substations A and B, with the loop switch in
the off state. When a permanent fault occurs between S2 and S3, the CB on line A trips, and at the same time the GFTUs on line
A send the voltage and current parameters to the control center; the fault current will be found in the data from S1 and S2,
but not in the data from S3 and S4.
According to the topological structure, the fault is therefore located between S2 and S3: these two switches should be switched
off, and the CB should be switched on remotely to restore the power supply of the feeder circuit.
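The location rule just described can be sketched in a few lines (an illustrative fragment of our own, not the center's actual
software): walking outward from the CB, the fault lies immediately downstream of the last switch whose GFTU reported fault
current.

import java.util.List;

// Hypothetical fault locator for one radial feeder: switches are ordered
// from the circuit breaker outward (S1, S2, S3, S4 in Fig. 4).
public class FaultLocator {
    // Returns the index i such that the fault lies between switch i and
    // switch i+1, or -1 if no GFTU reported fault current.
    public static int locate(List<Boolean> sawFaultCurrent) {
        int last = -1;
        for (int i = 0; i < sawFaultCurrent.size(); i++) {
            if (sawFaultCurrent.get(i)) {
                last = i; // fault current flowed at least this far downstream
            }
        }
        return last;
    }
}

For the example above, the flags for S1-S4 are {true, true, false, false}, so locate(...) returns the index of S2: the faulted
section is between S2 and S3.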

AID TO ADO
Working together with other systems, the monitoring and control system helps run the advanced distribution operation (ADO)
of the whole distribution system; the connection of the proposed system with other software is shown in Fig 6.

MAKE THE INVISIBLE VISIBLE

The geographic information system (GIS) receives the data and displays them on the map, so the operating situation of the
feeders can be monitored clearly. When a fault occurs, it is displayed on the map along with its location, which helps workers
find it easily.

Fig. 5 Fault location and restoration (red: ON, green: OFF)

IMPROVE POWER QUALITY

The GFTUs collect the voltage values at different points along the feeder, so power-quality parameters such as harmonics and
reactive power can be analysed quickly. This helps the utility adopt the relevant technology, improve power quality, and avoid
unnecessary investment.

LOSS DETECTION

It is very difficult to know the actual losses on the distribution network; generally, rules of thumb are used to estimate the
power losses. With the proposed system it is possible to calculate the system losses by relating the information from nodes
along the distribution feeder and at the distribution transformers. This enables better tracking and efficiency on the
distribution network.
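As a sketch of the computation implied here (our notation, not the paper's), the technical loss over a metering interval is the
substation in-feed minus the sum of the energies metered at the downstream distribution transformers:

\[ E_{\mathrm{loss}} = E_{\mathrm{substation}} - \sum_{i=1}^{N} E_{\mathrm{transformer},\,i} \]

where N is the number of transformer metering points on the feeder; the AMI time stamps make it possible to align the readings
over the same interval.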

STATE ESTIMATION

Conventionally, measurements are only available at the distribution substations. The power flows on the distribution grid are
unknown and are typically allocated using generic models or transformer kVA ratings.
By utilizing the information from the beginning and the end of the distribution feeders, accurate load models can be computed,
allowing accurate load estimation on the distribution grid.
These data are critical for understanding the impact and benefits of connecting renewable energy sources to the distribution
grid.




APPLICATION CASES

The proposed monitoring and control system has run successfully for more than one year at the Qingdao utility in Shandong
Province, China. The two distribution feeders, line Y and line Q, are located in the northern urban area of Qingdao, about
30 km away from the central office. Before the installation, the lines were patrolled manually every day or week, the switches
were operated manually, and it was difficult to find fault locations. Seven GFTUs were installed on the two feeders.
The application of the proposed system helped gather the data almost immediately and shortened the time needed to locate
faults. When the AMI-based measurement and controlling system came into use, a comprehensive test was carried out, as shown in
Table 1.



Table 1. The test items and results of the system

Item                                     Result
Switch on/off the loop switch 3 times    Correct; success rate: 100%; response time: 1.2 seconds
Remote reading every 5 minutes           GPRS code loss rate: less than 1%

The proposed system is cost-effective. According to information from the company, the power supplied by the two lines is
12.3026 MkWh and 6.70 MkWh respectively; if two faults occur on each line, the benefit of the measurement and control system
for the distribution feeders is as listed in Table 2.
Table 2. The benefits of the system

Item                             Benefit
Saved time to locate the fault   More than 10 hours
Saved transportation fee         500,000 Yuan RMB
Avoided electricity loss         44,000 kWh
Saved device loss                2,530,000 Yuan RMB
Total benefit                    3,610,000 Yuan RMB

CONCLUSION
The smart grid is a collection of concepts that includes new power delivery components, control and monitoring throughout the
power grid, and more informed customer choices, all aimed at the next-generation power system. The smart distribution grid is
an important part of it. To realize the smart distribution grid, an AMI-based measurement and control system for the
distribution system is proposed in this paper.
It enables utilities to run advanced distribution operation in a cost-effective manner. The adoption of advanced communication
media, such as GPRS or 3G, enables the AMI system to collect meter data quickly, accurately and automatically.
The proposed system can work with existing feeder automation systems over other communication types and can be integrated with
the distribution automation system, reducing labour cost and providing accurate loss detection and load forecasting. It will be
a very important step toward the realization of the smart distribution grid.

REFERENCES:
[1] XIAO Shijie, "Consideration of technology for constructing Chinese smart grid", Automation of Electric Power Systems, vol. 33, pp. 124, 2009.
[2] McGranaghan M., Goodman F., "Technical and system requirements for advanced distribution automation", Proceedings of the 18th International Conference on Electricity Distribution, June 6-9, 2005, Turin, Italy: 5 p.
[3] XU Bingyin, LI Tianyou, XUE Yongduan, "Smart distribution grid and distribution automation", Automation of Electric Power Systems, vol. 33, pp. 38-41, 2009.
[4] YU Yixing, LUAN Wenpeng, "Smart grid", Power System Technology, vol. 25, pp. 7-11, 2009.
[5] YU Yi-xin, "Technical composition of smart grid and its implementation sequence", Southern Power System Technology, vol. 3, pp. 1-5, 2009.
[6] Collier S. E., "Ten steps to a smarter grid", Rural Electric Power Conference (REPC '09), IEEE, 2009.
[7] Sumanth S., Simha V., Bapat J., et al., "Data communication over the smart grid", International Symposium on Power Line Communications and Its Applications, pp. 273-279, 2009.
[8] Hart D. G., "Using AMI to realize the smart grid", Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, Pittsburgh, Pennsylvania, USA, 2008.
[9] SUI Hui-bin, WANG Hong-hong, WANG Hong-wei, et al., "Remote meter reading system based on GPRS used in substations", Telecommunications for Electric Power System, vol. 28, pp. 57-59, 65, Jan. 2007.
[10] ZHAO Bing, ZOU He-ping, LV Ying-jie, "Access security technology in monitoring system of distribution transformer based on GPRS", Advances of Power System & Hydroelectric Engineering, vol. 26(3), pp. 16-19, 2010.
[11] Hu Hongsheng, Qian Suxiang, Wang Juan, Shi Zhengjun, "Application of information fusion technology in the remote state on-line monitoring and fault diagnosing system for power transformer", 8th International Conference on Electronic Measurement and Instruments (ICEMI '07), pp. 3-550 to 3-555, Oct. 2007.
[12] Nie Huaiyun, "Research and design of μC/OS-II/GPRS-based remote ship supervision system", Nanjing University of Science and Technology: Academic, 2006, pp. 6-11, 14-23, 37-44. (in Chinese)
[13] B. Berry, "A Fast Introduction to SCADA Fundamentals and Implementation", DPS Telecom, retrieved on July 28, 2009, from http://www.dpstelecom.com/w_p.
[14] Electricity Company of Ghana (ECG), "Automation of ECG's Power Delivery Process (SCADA)", retrieved on July 28, 2009, http://www.ecgonline.info/Projects/CurrentProjects/Engineering Projects/SCADA, 2008.





















Performance Comparison of AODV, DSDV and ZRP Routing Protocols

Ajay Singh1, Anil Yadav2, Dr. Mukesh Sharma2
1Research Scholar (M.Tech), Department of Computer Science, T.I.T&S, Bhiwani
2Faculty, Department of Computer Science, T.I.T&S, Bhiwani
E-mail: ajays.cs@gmail.com
Abstract: A Mobile Ad Hoc Network (MANET) is a group of independent mobile devices that are connected over various wireless
links, typically operating on constrained bandwidth. The network topology is dynamic and may vary from time to time, and each
device must act as a router to forward traffic for the others. Such a network can operate by itself or be incorporated into a
larger network. In this paper, we have analyzed various random-based mobility models (the Random Waypoint, Random Walk, Random
Direction and Probabilistic Random Walk models) using the AODV, DSDV and ZRP protocols in the Network Simulator (NS 2.35). The
performance of the MANET mobility models has been compared by varying the number of nodes, the type of traffic (CBR, TCP) and
the maximum speed of the nodes. The comparative conclusions are drawn on the basis of various performance metrics: routing
overhead (packets), packet delivery fraction (%), normalized routing load, average end-to-end delay (milliseconds) and packet
loss (%).

Keywords: Mobile ad hoc, AODV, DSDV, ZRP, TCP, CBR, routing overhead, packet delivery fraction, end-to-end delay,
normalized routing load.
1 Introduction:
Wireless technology has existed since the 1970s and is advancing every day. Because of the enormous use of the Internet at
present, wireless technology has reached new heights. Today we see two kinds of wireless networks. The first is a wireless
network built on top of a wired network, creating a reliable infrastructure wireless network; the wireless nodes are connected
to base stations, which in turn connect to the wired network. An example is the cellular phone network, where a phone connects
to the base station with the best signal quality.
The second type of wireless network is one where no infrastructure exists at all except the participating mobile nodes [1].
This is called an infrastructure-less wireless network or an ad hoc network. The term 'ad hoc' means something that is not
fixed or organized, i.e., dynamic. Recent advancements such as Bluetooth introduced a fresh type of wireless system frequently
known as the mobile ad hoc network.
A MANET is an autonomous group of mobile users that communicate over reasonably slow wireless links. The network topology may
vary rapidly and unpredictably over time because the nodes are mobile. The network is decentralized: all network activity,
including discovering the topology and delivering messages, must be executed by the nodes themselves, so routing functionality
has to be incorporated into the mobile nodes. A mobile ad hoc network is a collection of independent mobile nodes that can
communicate with each other via radio waves. Mobile nodes that are in radio range of each other can communicate directly,
whereas other nodes need the help of intermediate nodes to route their packets. These networks are fully distributed and can
work at any place without the aid of any infrastructure. This property makes them highly robust.

In the late 1980s, a Mobile Ad hoc Networking (MANET) Working Group was formed within the Internet Engineering Task Force
(IETF) [1] to standardize the protocols and functional specifications and to develop a routing framework for IP-based
protocols in ad hoc networks. A number of protocols have been developed since then, basically classified as
proactive/table-driven and reactive/on-demand routing protocols, each with its respective advantages and disadvantages, but
currently there is no standard for an ad hoc network routing protocol and the work is still in progress. Routing is therefore
one of the most important issues for ad hoc networks. The area of ad hoc networking has been receiving increasing attention
among researchers in recent years, and the work presented in this paper is expected to provide useful input to the routing
mechanism in ad hoc networks.
2 Protocol Descriptions
2.1 Ad hoc On Demand Distance Vector (AODV)
The AODV routing algorithm is a source-initiated, on-demand routing protocol. Since routing is on demand, a route is only
traced when a source node wants to establish communication with a specific destination, and the route remains established as
long as it is needed for further communication. Another feature of AODV is its use of a destination sequence number for every
route entry. This number is included in the RREQ (Route Request) of any node that desires to send data, and it is used to
ensure the freshness of routing information: a requesting node always chooses the route with the greatest sequence number to
communicate with its destination node. Once a fresh path is found, a RREP (Route Reply) is sent back to the requesting node.
AODV also has the necessary mechanism to inform network nodes of any link break that might have occurred in the network.
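The freshness rule can be made concrete with a small fragment (a sketch of the comparison logic only, with names of our
choosing, not AODV reference code):

// Illustrative AODV route-selection rule: a route advertisement replaces the
// stored entry if it carries a higher destination sequence number, or the
// same number with a smaller hop count.
public class AodvRouteEntry {
    int destSeqNum; // destination sequence number of the stored route
    int hopCount;   // length of the stored route in hops

    boolean shouldReplaceWith(int advSeqNum, int advHopCount) {
        if (advSeqNum > destSeqNum) {
            return true; // fresher routing information always wins
        }
        return advSeqNum == destSeqNum && advHopCount < hopCount;
    }
}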

2.2 Destination Sequenced Distance Vector (DSDV)
The Destination Sequenced Distance Vector routing protocol is a proactive routing protocol that is a modification of the
conventional Bellman-Ford routing algorithm. This protocol adds a new attribute, the sequence number, to each route table entry
at each node. A routing table is maintained at each node, and with this table the node transmits packets to the other nodes in
the network. The protocol was motivated by the need for data exchange along changing and arbitrary paths of interconnection
which may not be close to any base station.
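The role of the sequence number can be pictured with a similar fragment (again an illustrative sketch, not DSDV reference
code): a received advertisement overwrites the stored table entry only if it is newer, or equally new but cheaper.

// Illustrative DSDV table-entry update rule.
public class DsdvEntry {
    int seqNum; // destination-generated sequence number of the stored route
    int metric; // hop count of the stored route

    boolean acceptAdvertisement(int advSeqNum, int advMetric) {
        if (advSeqNum > seqNum) {
            return true; // newer information about the destination
        }
        return advSeqNum == seqNum && advMetric < metric; // same age, shorter path
    }
}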

2.3 Zone Routing Protocol (ZRP)
ZRP is designed to address the problems associated with proactive and reactive routing: excess bandwidth consumption due to the
flooding of update packets, and the long delay of route discovery requests, respectively. ZRP introduced the concept of zones.
Within a limited zone, route maintenance is easier, and because of the zones the number of routing updates is decreased. Nodes
outside the zone can communicate via reactive routing; for this purpose the route request is not flooded to the entire
network, as only the border nodes are responsible for performing this task. ZRP thus combines the features of both proactive
and reactive routing algorithms. The architecture of ZRP consists of four elements: MAC-level functions, the Intra-zone Routing
Protocol (IARP), the Inter-zone Routing Protocol (IERP) and the Bordercast Resolution Protocol (BRP). Proactive routing is used
within the limited, specified zones, and beyond the zones reactive routing is used. The MAC level performs neighbour discovery
and maintenance functions: when a node comes into range, a new-neighbour notification is sent to IARP; similarly, when a node
loses connectivity, a lost-connectivity notification is sent to IARP. Within a specified zone, the IARP protocol routes
packets, keeping information about all nodes in the zone in its routing table. On the other hand, if a node wants to send a
packet to a node outside the zone, the IERP protocol is used to find the best path. That means IERP is responsible for
maintaining correct routes outside the zone. If IERP does not have a route in its routing table, it sends a route query to
BRP. BRP is responsible for contacting nodes across the ad hoc network and passing route queries. The important feature of the
bordercasting mechanism of BRP is that it avoids flooding the network with packets: BRP always passes route query requests to
the border nodes only, so only the border nodes transmit and receive these queries.
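The IARP/IERP split can be pictured as follows (a hypothetical sketch under the usual hop-count zone definition, not code from
the ZRP specification):

import java.util.Map;

// Illustrative ZRP forwarding decision: in-zone destinations are served
// from the proactive IARP table; out-of-zone destinations trigger a
// reactive IERP query, bordercast via BRP to peripheral nodes only.
public class ZrpRouter {
    private final int zoneRadius;                // zone radius in hops
    private final Map<String, Integer> iarpHops; // IARP table: node id -> hop count

    public ZrpRouter(int zoneRadius, Map<String, Integer> iarpHops) {
        this.zoneRadius = zoneRadius;
        this.iarpHops = iarpHops;
    }

    public String routeFor(String destination) {
        Integer hops = iarpHops.get(destination);
        if (hops != null && hops <= zoneRadius) {
            return "IARP"; // inside the zone: route directly from the table
        }
        return "IERP";     // outside: bordercast a route query through BRP
    }
}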

3 Simulation
The three routing protocols were simulated in the same environment using the Network Simulator (ns-2). AODV, DSDV and ZRP were
tested with TCP traffic. The algorithms were tested using 50 nodes. The simulation area is 1000 m by 1000 m, and the node
locations change randomly. The number of connections used at a time is 30, and the speed of the nodes varies from 1 m/s to
10 m/s. Using TCP traffic, we evaluate the performance of these protocols for the following random-based mobility models:
(i) Random Waypoint (RWP)
(ii) Random Walk (RW)
(iii) Random Direction (RD)
(iv) Probabilistic Random Walk (PRW)

4 Simulation results
The results of our simulations are presented in this section. First we discuss the results of the AODV, DSDV and ZRP protocols
for the different metrics, and after that we compare the protocols.
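For reference, the metrics reported below are computed from the simulation traces in the usual way (the notation is ours):

\[ \mathrm{PDF} = \frac{\text{data packets received}}{\text{data packets sent}} \times 100\%, \qquad
\mathrm{NRL} = \frac{\text{routing packets transmitted}}{\text{data packets received}}, \qquad
\bar{D} = \frac{1}{N}\sum_{k=1}^{N}\left( t_k^{\mathrm{recv}} - t_k^{\mathrm{send}} \right) \]

where N is the number of data packets successfully delivered.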

4.1 Pause Time Model Results
This test studied the effects of increasing pause time on the performance of the three routing protocols. As pause time
increases, mobility in terms of changes of direction (movement) decreases. When a pause occurs, a node stops for a while and
then selects another direction of travel. If the speed is defined as constant, then for every occurrence of a pause the speed
of the node remains constant. In this model the pause time changes from 0 s to 400 s while the other parameters (nodes = 50,
speed = 10 m/s, data sending rate = 16 kbps and number of CBR flows = 10) are held constant.


Fig.3(a): Varying pause time vs packets delivery fraction (%)



Fig. 3(b) Varying pause time vs average network end-to-end delay (in seconds)


Fig. 3(c): Varying pause time vs routing cost (in packets)

Figures 3(a), 3(b) and 3(c) show the packet delivery fraction, average network delay and routing cost as the pause time varies
from 0 s to 400 s. Figure 3(a) shows the difference in the packet delivery fractions of the protocols. The performance of AODV
is almost 100%: we recorded an average of 99% packet delivery for AODV during the whole simulation. DSDV was close behind AODV
and showed the second-best performance. With smaller pause times (higher node movement), DSDV delivered 90% of data packets
successfully. As the pause time increased (node movement decreased), the DSDV packet delivery ratio also increased, and at
pause times of 300 s and 400 s DSDV gave similar performance to AODV. The same happened with ZRP: at pause time 0 s, an 80%
packet delivery fraction is recorded. We observed a slightly low packet delivery fraction for ZRP at pause time 100 s,
although the value at this point should have been higher than the previous one. We checked the NAM file but did not find
anything going wrong; one possible reason could be the far placement of sources and destinations before the pause time of
100 s occurred.
Figure 3(b) shows the average end-to-end network delay. Under high node movement, the delay of ZRP is recorded as 0.1 s. As
node movement slowed down toward pause time 400 s, the delay offered by ZRP also moved down and approached that of AODV, as
shown in fig. 3(b). DSDV and AODV showed nearly similar performance in terms of delay, but DSDV is a bit smoother and offered
lower delay compared to AODV. An average of 0.011 s is recorded for DSDV; AODV held the second-best position with an average
delay of 0.014 s, while ZRP offered an average delay of 0.4 s.




4.2 Speed Model Simulation Results


Fig. 4(a) Varying speed vs packets delivery fraction (%)


Fig. 4(b) Varying speed vs average network end-to-end delay (in seconds)
Figure 4(b) shows the average end-to-end network delay. We did not see much difference between the delay values of AODV and
DSDV, but DSDV performed slightly better than AODV and showed a constant performance with an average delay of 0.01 s. Although
AODV showed similar behaviour to DSDV, at the maximum speed of 50 m/s its delay increased from 0.017 s to 0.05 s.
Comparatively, ZRP showed high delay values: at a speed of 20 m/s the delay went down slightly and then increased again as the
node speed increased. ZRP maintained an average delay of 0.1 s.



Fig. 4(c): Varying speed vs routing cost (in packets)


Figure 4(c) illustrates the routing cost introduced in the network. DSDV maintained an average of 12 control packets per data
packet throughout the simulation. As speed increased, the routing overhead of AODV also increased, reaching up to 54 control
packets per data packet. ZRP showed a high routing overhead: the maximum recorded routing load at high mobility was 2280
control packets.



4.3 Network Model Simulation Results



Fig.5(a) Varying nodes vs packets delivery fraction (%)




Fig. 5(b) Varying nodes vs average network end-to-end delay



Fig. 5(c): Varying nodes vs routing cost (in packets)
Figures 5(a), 5(b) and 5(c) show the protocols' performance in the network model. We recorded consistent packet delivery
fraction values for AODV across the different network sizes. In contrast, ZRP achieved consistent packet delivery only up to a
network size of 30 nodes, with an average delivery ratio of 96%. At a network size of 40 nodes, the ZRP packet delivery
fraction fell from 95% to 91%, while at a network size of 50 nodes its lowest value (69%) is recorded. DSDV showed the
third-best performance in the network model in terms of packet delivery fraction: as the size of the network increased, the
packet delivery fraction of DSDV also increased, reaching up to 91%. The packet delivery fraction comparison of the protocols
can be seen in figure 5(a). In terms of delay, figure 5(b), DSDV showed slightly more consistent performance, with an average
delay of 0.01 s, whereas the delay of AODV varied between 0.012 s and 0.026 s during the whole simulation. ZRP, on the other
hand, gave the lowest delay compared to AODV and DSDV up to a network size of 30 nodes. From a network size of 30 nodes to 40
nodes we saw a slight increase in the delay of ZRP, and from 40 to 50 nodes there was a drastic increase; the maximum delay we
calculated for ZRP at this point is 0.095 s.
Figure 5(c) demonstrates the routing cost offered by the protocols. From the figure, it is quite visible that the routing load
of ZRP is much higher than that of AODV and DSDV. As the network became fully dense, the routing load of ZRP reached up to
1915 control packets per data packet. AODV and DSDV also showed the same trend; however, DSDV comparatively gave a low routing
load, with an increase of only 3 to 4 control packets as the network size increased. AODV seemed to approach DSDV when the
network size was 20 nodes, but just after this point its load rose and reached up to 22 control packets. After the network
size of 40 nodes we saw a consistent performance from AODV.

4.4 Load Model Simulation Results
In this testing model, the varying parameter is the data sending rate. With 10 CBR sources we offered different workloads: the
load increased from 4 to 20 data packets/second while the pause time is zero, the node speed is 10 m/s and the number of nodes
is 50.



Figures 6(a), 6(b) and 6(c) highlight the relative performance of the three protocols in the load model. As seen in figure
6(a), the packet delivery fraction of all protocols is affected as the data sending rate increases. DSDV stayed close to AODV:
both maintained a consistent delivery ratio up to a rate of 8 data packets/s, and as the sending rate increased beyond that
point, both protocols started dropping data packets. At a sending rate of 20 packets/s, AODV and DSDV gave their lowest packet
delivery fractions, i.e., 63% and 66% respectively. ZRP suffered badly when the load increased and gave the worst packet
delivery fraction at sending rates of 8, 12, 16 and 20 packets/s.







ZRP delivered only 18% of data packets at a sending rate of 20 packets/s. The network delay can be found in figure 6(b). As
the figure highlights, ZRP maintained an average delay of 0.3 s against increasing load. AODV and DSDV initially showed small
delay values under a low sending rate; as the offered load increased from 8 packets/s onward, both AODV and DSDV reported high
delay values. AODV, however, showed a rapid increase in delay and reported the highest delay value of 1.064 s when the
transmission rate was 16 packets/s. The routing cost of the protocols in the load model is presented in figure 6(c). As shown
in the figure, the routing cost of DSDV is lower than that of AODV: as the load in the network increases, DSDV generates fewer
routing packets. AODV gave slightly higher overhead than DSDV from an offered load of 4 packets/s to 8 packets/s, and finally,
at the maximum applied load, AODV generated 10 control packets. ZRP in this model again generated a high number of control
packets, but this time, compared with figures 5(c) and 4(c), ZRP showed variation in the routing load: from sending rates of 8
to 16 packets/s, ZRP generated an average of 1540 control packets, and at the highest sending rate of 20 packets/s it
generated 1756 control packets.
4.5 Flow Model Simulation Results
In this testing model each CBR flow generated 16 kbps of traffic, and the number of flows (connections) varied from 5 to 25.
This model evaluates the strength of the protocols under various numbers of source connections.
Figures 7(a), 7(b) and 7(c) show the results drawn from the simulation. As shown in figure 7(a), the packet delivery fraction
of ZRP is lower than that of the other two protocols. As the number of flows increased from 5 to 25 sources, the packet
delivery fraction of ZRP suffered and moved down quickly. For 5 sources, both ZRP and DSDV delivered almost the same number of
packets to the destination, but as the number of CBR sources increased, DSDV maintained its packet delivery (an average of
90%) continuously till the end of the simulation while ZRP started dropping packets. Finally, for 25 CBR sources, ZRP
delivered only 38% of data packets to the destination. AODV outperformed the others here and delivered 99% of data packets
against the increasing number of CBR sources. The average network delay is shown in figure 7(b). AODV and DSDV both showed
small and almost identical delay values up to 20 CBR sources; only a slight increase in delay (near 0.1 s) occurred for both
protocols at 25 CBR sources. From start to end, the delay of ZRP continuously moved up as the number of CBR sources increased,
reaching a highest value of 0.543 s. ZRP offered high delay compared to AODV and DSDV.












The routing cost of all the protocols reduced as the number of CBR sources increased, as shown in figure 7(c). Looking at AODV
and DSDV, initially, for 5 sources, AODV generated 18 control packets while DSDV generated 23 control packets. As the number
of CBR sources changed from 5 to 25, both protocols generated a small number of control packets.
Fig. 7(c): Varying number of flows (connections) vs routing cost (in packets)

The performance of DSDV is more satisfactory, as it generated an average of 9 control packets while AODV generated an average
of 15 control packets. For ZRP the routing cost is very high (figure 7c): for 5 CBR sources ZRP generated the maximum number
of routing packets, 2646. Although the routing overhead decreased as the number of sources increased, reaching its lowest
value of 1364 routing packets for 25 CBR sources, the routing load of ZRP is still very much higher than that of DSDV and
AODV.
6 Future works:
In this paper four Random mobility models have been compared using AODV, DSDV and ZRP protocols. This work
can be extended on the following aspects:
- Investigation of other MANET mobility models using different protocols under different types of traffic like CBR.
- Different numbers of nodes and different node speeds.

REFERENCES:
[1] E. M. Royer & C. E. Perkins, "An Implementation Study of the AODV Routing Protocol", Proceedings of the IEEE Wireless Communications and Networking Conference, Chicago, IL, September 2000.
[2] B. C. Lesiuk, "Routing in Ad Hoc Networks of Mobile Hosts", available online: http://phantom.me.uvic.ca/clesiuk/thesis/reports/adhoc/adhoc.html#E16E2
[3] Andrea Goldsmith, Wireless Communications, Cambridge University Press, 2005.
[4] Bing Lin and I. Chlamtac, Wireless and Mobile Network Architectures, Wiley, 2000.
[5] S. K. Sarkar, T. G. Basawaraju and C. Puttamadappa, Ad hoc Mobile Wireless Networks: Principles, Protocols and Applications, Auerbach Publications, p. 1, 2008.
[6] C. E. Perkins, E. M. Royer & S. Das, "Ad Hoc On Demand Distance Vector (AODV) Routing", IETF Internet draft, draft-ietf-manet-aodv-08.txt, March 2001.
[7] C. E. Perkins & E. M. Royer, "Ad-hoc On-Demand Distance Vector Routing", Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA, February 1999, pp. 90-100.
[8] E. M. Royer & C. K. Toh, "A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks", IEEE Personal Communications Magazine, April 1999, pp. 46-55.
[9] D. Comer, Internetworking with TCP/IP, Volume 1, Prentice Hall, 2000.




Analysis of Thick Beam Bending Problem by Using a New Hyperbolic Shear
Deformation Theory
Vaibhav B. Chavan1, Dr. Ajay G. Dahake2
1Research Scholar (PG), Department of Civil Engineering, Shreeyash College of Engineering and Technology, Aurangabad (MS), India
2Associate Professor, Department of Civil Engineering, Shreeyash College of Engineering and Technology, Aurangabad (MS), India
E-mail: vaibhav.chavan25@yahoo.com
Abstract: A new hyperbolic shear deformation theory for the bending of deep beams, in which the number of variables is the same
as in the hyperbolic shear deformation theory, is developed. The noteworthy feature of the theory is that the transverse shear
stresses can be obtained directly from the constitutive relations, satisfying the shear-stress-free condition on the top and
bottom surfaces of the beam; hence the theory obviates the need for a shear correction factor. A fixed-fixed isotropic beam
subjected to varying load is examined using the present theory. The governing differential equation and boundary conditions
are obtained by using the principle of virtual work. The results obtained are discussed critically against those of other
theories.

Keywords: thick beam, new hyperbolic shear deformation, principle of virtual work, equilibrium equations, displacement.

I. INTRODUCTION
1.1 Introduction
It is well known that the elementary theory of beam bending based on the Euler-Bernoulli hypothesis disregards the effects of
shear deformation and stress concentration. The theory is suitable for slender beams but not for thick or deep beams, since it
is based on the assumption that sections normal to the neutral axis before bending remain so during and after bending,
implying that the transverse shear strain is zero. Because it neglects transverse shear deformation, the theory underestimates
deflections in thick beams, where shear deformation effects are significant. Thick beams and plates, either isotropic or
anisotropic, basically form two- and three-dimensional problems of elasticity theory. Reduction of these problems to the
corresponding one- and two-dimensional approximate problems for their analysis has always been the main objective of research
workers. As a result, numerous refined theories of beams and plates have been formulated in the last three decades which
approximate the three-dimensional solutions with reasonable accuracy.
1.2 Literature survey
Rayleigh [9] and Timoshenko [10] were the pioneer investigators to include refined effects such as rotatory inertia and shear
deformation in the beam theory. Timoshenko showed that the effect of transverse shear is much greater than that of rotatory
inertia on the response of transverse vibration of prismatic bars. This theory is now widely referred to as the Timoshenko
beam theory, or first-order shear deformation theory (FSDT), in the literature [11]. In this theory the transverse shear strain
distribution is assumed to be constant through the beam thickness, and a shear correction factor is therefore required to
appropriately represent the strain energy of deformation. Cowper [3] has given refined expressions for the shear correction
factor for different cross-sections of the beam.
Heyliger and Reddy [6] presented higher-order shear deformation theories for static and free vibration analysis. Theories based
on trigonometric and hyperbolic functions to represent the shear deformation effects through the thickness form another class
of refined theories; however, with these theories the shear-stress-free boundary conditions are not satisfied at the top and
bottom surfaces of the beam. This discrepancy was removed by Ghugal and Shimpi [4], who developed a variationally consistent
refined trigonometric shear deformation theory for flexure and free vibration of thick isotropic beams. Ghugal and Sharma [5]
developed the variationally consistent hyperbolic shear deformation theory for flexure analysis of thick beams and obtained
the displacements, stresses and fundamental frequencies of the flexure mode and thickness-shear modes from free vibration of
simply supported beams.
In this paper, a variationally consistent hyperbolic shear deformation theory previously developed by Ghugal and Sharma [5]
for thick beams is used to obtain general bending solutions for thick isotropic beams. The theory is applied to uniform
isotropic solid beams of rectangular cross-section for static flexure with various boundary and loading conditions. A refined
theory containing the trigonometric sine and cosine functions of the thickness coordinate in the displacement field is termed
here the trigonometric shear deformation theory (TSDT). The trigonometric functions involving the thickness coordinate are
associated with the
transverse shear deformation effects and the shear stress distribution through the thickness of the beam. This is another class of
refined theories in which the number of displacement variables in the simplest form can be the same as in FSDT. The results
are compared with those of the elementary and refined beam theories to verify the credibility of the present shear deformation
theory.
In this paper, the development of the theory and its application to a thick fixed beam are presented.

II. DEVELOPMENT OF THEORY
The beam under consideration, as shown in Figure 1, occupies the following region in the 0-x-y-z Cartesian coordinate system:

\[ 0 \le x \le L; \qquad 0 \le y \le b; \qquad -\frac{h}{2} \le z \le \frac{h}{2} \tag{1} \]

where x, y, z are Cartesian coordinates, L and b are the length and width of beam in the x and y directions respectively, and h is the
thickness of the beam in the z-direction. The beam is made up of homogeneous, linearly elastic isotropic material.





Fig. 1 Beam under bending in x-z plane
2.1 The displacement field
The displacement field of the present beam theory is of the form:

\[ u(x,z) = -z\,\frac{dw}{dx} + \left[ z \cosh\left(\frac{1}{2}\right) - h \sinh\left(\frac{z}{h}\right) \right] \phi(x), \qquad w(x,z) = w(x) \tag{2} \]
where u is the axial displacement in the x direction and w is the transverse displacement in the z direction of the beam. The
hyperbolic function is assigned according to the shear stress distribution through the thickness of the beam. The function φ
represents the rotation of the beam at the neutral axis, which is an unknown function to be determined. The normal and shear
strains obtained within the framework of the linear theory of elasticity using the displacement field given by Eqn. (2) are as
follows.
Normal and shear strains:

\[ \varepsilon_x = \frac{\partial u}{\partial x}, \qquad \gamma_{zx} = \frac{\partial u}{\partial z} + \frac{dw}{dx} = \cos\left(\frac{\pi z}{h}\right)\phi \tag{3} \]

The stress-strain relationships used are:

\[ \sigma_x = E\,\varepsilon_x = -Ez\,\frac{d^2 w}{dx^2} + \frac{Eh}{\pi}\sin\left(\frac{\pi z}{h}\right)\frac{d\phi}{dx}, \qquad \tau_{zx} = G\,\gamma_{zx} = G\cos\left(\frac{\pi z}{h}\right)\phi \tag{4} \]
2.2 Governing equations and boundary conditions
Using the expressions for strains and stresses, Eqns. (2) through (4), and applying the principle of virtual work,
variationally consistent governing differential equations and boundary conditions for the beam under consideration can be
obtained. The principle of virtual work, when applied to the beam, leads to:
\[ b \int_{x=0}^{L} \int_{z=-h/2}^{h/2} \left( \sigma_x\,\delta\varepsilon_x + \tau_{zx}\,\delta\gamma_{zx} \right) dz\,dx - \int_{x=0}^{L} q(x)\,\delta w\,dx = 0 \tag{5} \]
where the symbol δ denotes the variational operator. Employing Green's theorem in Eqn. (5) successively, we obtain the coupled
Euler-Lagrange equations, which are the governing differential equations and the associated boundary conditions of the beam.
The governing differential equations obtained are as follows:


\[ EI\,\frac{d^4 w}{dx^4} - \frac{24}{\pi^3}\,EI\,\frac{d^3\phi}{dx^3} = q(x) \tag{6} \]

\[ \frac{24}{\pi^3}\,EI\,\frac{d^3 w}{dx^3} - \frac{6}{\pi^2}\,EI\,\frac{d^2\phi}{dx^2} + \frac{GA}{2}\,\phi = 0 \tag{7} \]

The associated consistent natural boundary conditions obtained are of the following form, at the ends x = 0 and x = L:


\[ V_x = EI\,\frac{d^3 w}{dx^3} - \frac{24}{\pi^3}\,EI\,\frac{d^2\phi}{dx^2} = 0 \quad \text{or} \quad w \text{ is prescribed} \tag{8} \]

\[ M_x = EI\,\frac{d^2 w}{dx^2} - \frac{24}{\pi^3}\,EI\,\frac{d\phi}{dx} = 0 \quad \text{or} \quad \frac{dw}{dx} \text{ is prescribed} \tag{9} \]

\[ M_a = \frac{24}{\pi^3}\,EI\,\frac{d^2 w}{dx^2} - \frac{6}{\pi^2}\,EI\,\frac{d\phi}{dx} = 0 \quad \text{or} \quad \phi \text{ is prescribed} \tag{10} \]
2.3 The general solution of the governing equilibrium equations of the beam
The general solution for the transverse displacement w(x) and the warping function φ(x) is obtained from Eqns. (6) and (7),
using the method of solution of linear differential equations with constant coefficients. Integrating and rearranging the
first governing Eqn. (6), we obtain

\[ \frac{d^3 w}{dx^3} - \frac{24}{\pi^3}\,\frac{d^2\phi}{dx^2} = \frac{Q(x)}{EI} \tag{11} \]

where Q(x) is the generalized shear force for the beam, given by \( Q(x) = \int_0^x q\,dx + C_1 \).
The second governing Eqn. (7) is rearranged in the following form:

\[ \frac{d^3 w}{dx^3} = \frac{\pi}{4}\,\frac{d^2\phi}{dx^2} - \beta\,\phi \tag{12} \]

A single equation in terms of φ is now obtained using Eqns. (11) and (12):

\[ \frac{d^2\phi}{dx^2} - \lambda^2\phi = \frac{Q(x)}{\alpha\,EI} \tag{13} \]

where the constants α, β and λ in Eqns. (11) through (13) are

\[ \alpha = \frac{\pi}{4} - \frac{24}{\pi^3}, \qquad \beta = \frac{\pi^3}{48}\,\frac{GA}{EI}, \qquad \lambda = \sqrt{\frac{\beta}{\alpha}} \]

The general solution of Eqn. (13) is

\[ \phi(x) = C_2 \cosh\lambda x + C_3 \sinh\lambda x - \frac{Q(x)}{\beta\,EI} \tag{14} \]

The equation for the transverse displacement w(x) is obtained by substituting the expression for φ(x) into Eqn. (12) and then
integrating three times with respect to x. The general solution for w(x) is

\[ EI\,w(x) = \iiiint q\,dx\,dx\,dx\,dx + \frac{C_1 x^3}{6} + \frac{24}{\pi^3}\,\frac{EI}{\lambda}\left( C_2 \sinh\lambda x + C_3 \cosh\lambda x \right) + \frac{C_4 x^2}{2} + C_5 x + C_6 \tag{15} \]

where C_1, C_2, C_3, C_4, C_5 and C_6 are arbitrary constants, obtained by imposing the boundary conditions of the beam.
III. ILLUSTRATIVE EXAMPLE
In order to prove the efficacy of the present theory, the following numerical example is considered. The material properties of
the beam used are: E = 210 GPa, μ = 0.3 and ρ = 7800 kg/m³, where E is the Young's modulus, ρ is the density, and μ is the


Poisson's ratio of the beam material. The kinematic and static boundary conditions associated with the various beam bending
problems depend upon the type of supports.
Fixed end: w = dw/dx = φ = 0 at x = 0, L
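Since the general solution (15) carries the six constants C₁ through C₆, these six fixed-end conditions determine them
completely:

\[ w(0) = w(L) = 0, \qquad \left.\frac{dw}{dx}\right|_{x=0} = \left.\frac{dw}{dx}\right|_{x=L} = 0, \qquad \phi(0) = \phi(L) = 0 \]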







Fig. 2: A fixed beam subjected to the varying load q(x) = q₀x/L
General expressions are obtained for the transverse displacement w(x), the axial displacement u(x, z), and the transverse
shear stress evaluated both from the constitutive relation (τzx^CR) and from the equilibrium equations (τzx^EE). Each is a
lengthy closed-form function of x/L, z/h, the aspect ratio L/h, the load intensity q₀ and the material constants E and G, with
the constants C₁ through C₆ evaluated from the fixed-end conditions stated above.

IV. RESULTS AND DISCUSSION
The results for the maximum transverse displacement and the maximum transverse shear stresses are presented in the following
non-dimensional form:

\[ \bar{u} = \frac{Ebu}{qh}, \qquad \bar{w} = \frac{10\,Ebh^3 w}{qL^4}, \qquad \bar{\sigma}_x = \frac{b\sigma_x}{q}, \qquad \bar{\tau}_{zx} = \frac{b\tau_{zx}}{q} \]

TABLE I: Non-dimensional axial displacement (ū) at (x = 0.75L, z = h/2), transverse deflection (w̄) at (x = 0.75L, z = 0),
axial stress (σ̄x) at (x = 0, z = h/2), and maximum transverse shear stresses τ̄zx^CR and τ̄zx^EE at (x = 0.01L, z = 0), for
the fixed beam subjected to varying load, aspect ratio 4


0
( )
x
q x q
L
=
x, u
z, w
L
q
0


Source                    Model     ū         σ̄x       τ̄zx^CR    τ̄zx^EE
Present                   NHPSDT    -2.3243   4.5932   -0.7303    3.2118
Ghugal and Sharma [5]     HPSDT     -2.2480   6.5984   -1.1052    0.5229
Dahake and Ghugal [12]    TSDT      -2.2688   5.1300   -0.7546    0.4426
Timoshenko [11]           FSDT      -1.5375   3.2000    0.9000    0.0962
Bernoulli-Euler           ETB       -1.5375   3.2000    0.9000    -


Fig. 4(a): Variation of maximum axial displacement (ū) through the thickness (z/h)
Fig. 4(b): Variation of maximum axial stress (σ̄x) through the thickness (z/h)




Fig. 4(c): Variation of transverse shear stress (τ̄zx) through the thickness (z/h)
Fig. 4(d): Variation of maximum transverse displacement (w̄) of the fixed beam at (x = 0.75L, z = 0) subjected to varying
load, against aspect ratio AR
V. DISCUSSION OF RESULTS


The results obtained by the present new hyperbolic shear deformation theory are compared with those of the elementary theory
of beam bending (ETB), the FSDT of Timoshenko, the HPSDT of Ghugal and Sharma, and the TSDT of Dahake and Ghugal. It is to be
noted that exact results from the theory of elasticity are not available for the problems analyzed in this paper. The
comparison of the maximum non-dimensional transverse displacement and shear stresses for aspect ratio 4 is presented in
Table I for the beam subjected to varying load. The values obtained by the present theory are in excellent agreement with
those of the other refined theories for aspect ratio 4, the exceptions being the classical beam theory (ETB) and the FSDT of
Timoshenko.

VI. CONCLUSIONS
The variationally consistent theoretical formulation of the theory, with a general solution technique for the governing
differential equations, is presented. The general solutions for a beam with varying load are obtained for the case of a thick
fixed beam. The displacements and shear stresses obtained by the present theory are in excellent agreement with those of the
other equivalent refined and higher-order theories. The present theory yields a realistic variation of the transverse
displacement and the shear stresses through the thickness of the beam. Thus the validity of the present theory is established.

ACKNOWLEDGEMENT
I am forever indebted to my guide, Dr. A. G. Dahake, Associate Professor, Shreeyash College of Engineering and Technology,
Aurangabad, for his continuous encouragement, support, ideas, most constructive suggestions, valuable advice and confidence in
me. I sincerely thank Prof. M. K. Sawant, Shreeyash Polytechnic, Aurangabad, for his encouragement, kind support and
stimulating advice.

REFERENCES:
[1] Baluch, M. H., Azad, A. K. and Khidir, M. A., "Technical theory of beams with normal strain", ASCE J. of Engineering Mechanics, 1984, 110(8), p. 1233-37.
[2] Bhimaraddi, A. and Chandrashekhara, K., "Observations on higher order beam theory", ASCE J. of Aerospace Engineering, 1993, 6(4), p. 408-413.
[3] Cowper, G. R., "On the accuracy of Timoshenko beam theory", ASCE J. Engineering Mechanics Division, 1968, 94(EM6), p. 1447-53.
[4] Ghugal, Y. M. and Shimpi, R. P., "A review of refined shear deformation theories for isotropic and anisotropic laminated beams", J. Reinforced Plastics and Composites, 2001, 20(3), p. 255-72.
[5] Ghugal, Y. M. and Sharma, R., "A hyperbolic shear deformation theory for flexure and vibration of thick isotropic beams", International J. of Computational Methods, 2009, 6(4), p. 585-604.
[6] Heyliger, P. R. and Reddy, J. N., "A higher order beam finite element for bending and vibration problems", J. Sound and Vibration, 1988, 126(2), p. 309-326.
[7] Krishna Murthy, A. V., "Towards a consistent beam theory", AIAA Journal, 1984, 22(6), p. 811-16.
[8] Levinson, M., "A new rectangular beam theory", J. Sound and Vibration, 1981, 74(1), p. 81-87.
[9] Lord Rayleigh, J. W. S., The Theory of Sound, Macmillan Publishers, London, 1877.
[10] Timoshenko, S. P., "On the correction for shear of the differential equation for transverse vibrations of prismatic bars", Philosophical Magazine, 1921, 41(6), p. 742-46.
[11] Timoshenko, S. P. and Goodier, J. N., Theory of Elasticity, McGraw-Hill, 3rd Edition, Singapore, 1970.
[12] Dahake, A. G. and Ghugal, Y. M., "A trigonometric shear deformation theory for thick beam", Journal of Computing Technologies, 2012, 1(7), pp. 31-37.








Mobile Tracing Software for Android Phone
Anuradha Sharma1, Jyoti Sharma4, Dipesh Monga2, Ratul Aggarwal3
1Information Technology, College of Technology and Engineering, Udaipur
2Electronic and Communication, College of Technology and Engineering, Udaipur
3Electronic and Communication, Vellore Institute of Technology, Vellore, Tamil Nadu
4Information Technology, Vellore Institute of Technology, Vellore, Tamil Nadu
E-mail: anuradha9462@gmail.com

ABSTRACT: The goal of this document is to give a description of how to use the Mobile Security System (MSS) (release 1.0). It
gives complete information about the functional and non-functional requirements of the system. The Mobile Security System is
security software to recover missing mobile phones or tablet PCs. The main purposes behind this project are to reduce some of
the vulnerabilities in existing security systems, to provide user-friendly authentication mechanisms (for organizations'
resources such as domains, networks and so on), and to provide location identification. This will be useful for business
organizations as well as individuals to keep track of their mobiles, since items such as strategies, stability and
configurations can be considered confidential information. It can also provide some management capabilities. The project is
carried out in two phases.
Phase 1: Client Application, which will be installed on any mobile device.
Phase 2: Admin Application, which will be installed on any server or mobile.
(2) Introduction
I hereby declare that the seminar titled 'Mobile Tracing Software for Android Phone' has been presented by me and is not
reproduced as-is from any other source. The objective of this project is to develop an Android application which provides
location-tracking functionality for Android devices using SMS. This project supports the Android OS only and communicates
with the phone through SMS messages only. The architecture, security and the accuracy of the tracking unit itself are within
the scope of this project.
(3) Abbreviation


(4) Existing System
1. Ringer
A silent phone can be extremely tricky to find. If you are in the habit of losing a silent cell phone, you may wish to invest
in a phone sensor, also known as a phone detector. These are tools that, when placed near a cell phone, will actually pick up
the call signal and make sounds to indicate that the phone is somewhere within proximity. If the phone is lost, all you need
to do is have someone call you as you walk around with the sensor until the device begins to indicate that a call signal is
nearby. When you hear the signal, you have a basic idea of where to start looking for your cell phone.
2. Phone Tracking Using the IMEI Number: Every phone comes with a unique International Mobile Equipment Identity (IMEI)
number, which can come in useful to track it in case of loss or theft. This number can be accessed by dialing *#06#, and it is
advisable to make a note of it as soon as you purchase your handset. In case the phone gets stolen, file an FIR with the
police and give them its identity number. Pass a copy of the FIR and the IMEI number to your service provider, who will then
be able to track your handset. With its IMEI number, a device can be traced even if it is being used with another SIM. Once
the handset is located, request your service provider to block it from being used till you are able to get your hands on it
again.
3. Proposed System
Using simple SMS commands, you can ring your Android device even if it is in silent mode, and thus locate your device locally.


(5) Software Requirement Specification
Introduction:
The Software Requirement Specification document states in precise and explicit language the functions and capabilities a
software system (i.e., a software application, an e-commerce web site, etc.) must provide, as well as any required constraints
by which the system must abide. The SRS contains the functional and non-functional requirements.
(6) Functional Requirements
a. Be able to recognize the attention word received through SMS.
b. Be able to handle the phone state to ring automatically.
c. Be able to detect the current location of Android device.
d. Be able to retrieve the device, sim card & location details.
e. Be able to send retrieved details through SMS
(7) Non-functional Requirements
a. Functionality
b. Performance
c. Environment
d. Usability
International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August September 2014
ISSN 2091-2730


218
www.ijergs.org

e. Other Constraints
(8) Software & Hardware Requirements
Hardware Requirements:
a. Processor: Pentium IV or above.
b. RAM: 2 GB or more.
c. Hard disk space: minimum of 40 GB.
d. GPS-enabled Android 4.0 device.
Software Requirements:
e. Microsoft Windows (XP or later)
f. The Android SDK starter package
g. Java Development Kit (JDK) 5 or 6
h. Eclipse (Indigo)
(9) State Diagram

(10) Use Case Diagram

International Journal of Engineering Research and General ScienceVolume 2, Issue 5, August September 2014
ISSN 2091-2730


219
www.ijergs.org

10.1 Use case related to installation
Use case 1: Installation
Primary Actor: Admin App / Client App
Pre-condition: Android device, Internet connection
Main scenario:
1. User initiates the installation.
2. System asks the Admin for the home directory in which all the working files will be created.
3. Admin specifies the home directory and username/password.
4. Client App asks for the admin code for authentication.

10.2 Use case related to system authorization
Use case 2. Login
Primary Actor: Super Admin, Admin, User
Pre-condition: User needs to be pre-registered with an Admin
Main scenario: 1. Start the application. User is prompted for login and password.
2. User gives the login and password.
3. System does authentication.
4. Main screen is displayed.
Alternate scenario: 1. Prompt the user that the entered password and username are wrong.
2. Allow the user to re-enter the password and user name. Give 3 chances.
10.3 Use case related to change password
Use case 3. Change password
Primary Actor: User
Pre-condition: User logged in
Main scenario: 1. User initiates the password change command.
2. User is prompted for the old password, the new password and confirmation of the new password.
3. User gives the old password, the new password and the confirmation of the new password.
4. System does authentication.
5. New password is registered with the system.
Alternate scenario: 1. Prompt the user that the entered password is wrong.
2. Allow the user to re-enter the password and user name. Give 3 chances.

10.4 Use case related to Admin
Use case 4. Manage Devices
Primary Actor: Admin
Pre-condition: Online Connection or SIM or GPS Connection
Main scenario: 1. Admin initiates the manage user device option.
2. Admin can add, edit or delete any client mobile device.

10.5 Use case related to Admin

Use case 5. Search for a lost client device
Primary Actor: Admin
Pre-condition: Internet Connection or GPS or SIM
Main scenario 1. Admin initiates the Search function.
2. System asks the Admin to select its registered device.
3. System displays the report of the found device with its location and other information.
10.6 Use case related to Super Admin
Use case 6. Super Admin
Primary Actor: Super Admin
Pre-condition: Internet Connection
Main scenario: 1. Super Admin logs in and initiates the list-of-users function.
2. It can search for any client device in range.
3. It can manage each device.

(11) Implementation
a. Implementation is the stage in the project where the theoretical design is turned into a working system. The implementation phase constructs, installs and operates the new system. The most crucial requirement for a successful new system is that it works effectively and efficiently.
b. Implementation is the process of assuring that the information system is operational and then allowing users to take over its operation for use and evaluation.
(12) IMPLEMENTATION OF MODULES
1. Broadcast receiver that alerts the application when each new SMS arrives (a sketch follows the steps below).
a. Step 1: START
b. Step 2: SMS received.
c. Step 3: Check the attention word.
d. Step 4: If the attention word matches the Add Client word added by the admin, then start the Tracing activity and abort broadcasting.
e. Step 5: If the attention word matches getlocation, then start the ringing activity and abort broadcasting.
f. Step 6: If the attention word is not matched, then allow broadcasting.
g. Step 7: End
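A minimal sketch of such a receiver is given below, assuming the ordered SMS_RECEIVED broadcast available on the Android 4.0 targets listed earlier (abortBroadcast() hides the command SMS from the inbox only on pre-4.4 devices); the attention words are taken from the steps above, while the class name and the two helper methods are illustrative:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.os.Bundle;
    import android.telephony.SmsMessage;

    public class AttentionWordReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            Bundle bundle = intent.getExtras();            // Step 2: SMS received
            if (bundle == null) return;
            for (Object pdu : (Object[]) bundle.get("pdus")) {
                SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
                String body = sms.getMessageBody().trim(); // Step 3: check attention word
                String sender = sms.getOriginatingAddress();
                if (body.startsWith("Add Client")) {       // Step 4: tracing activity
                    startTracing(context, sender);
                    abortBroadcast();                      // hide the command SMS
                } else if (body.equalsIgnoreCase("getlocation")) { // Step 5: ringing activity
                    startRinging(context, sender);
                    abortBroadcast();
                }                                          // Step 6: otherwise pass the SMS on
            }
        }
        private void startTracing(Context c, String sender) { /* illustrative hook */ }
        private void startRinging(Context c, String sender) { /* illustrative hook */ }
    }

Such a receiver would be registered in the manifest with the RECEIVE_SMS permission and a high-priority intent filter for android.provider.Telephony.SMS_RECEIVED.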
2. Enable device ringing and acknowledge the user (a sketch follows the steps below).
a. Step 1: START
b. Step 2: Check whether the device is in silent or vibrate mode.
c. Step 3: If it is in silent or vibrate mode, then set the device to ringing mode.
d. Step 4: Enable device ringing.
e. Step 5: Acknowledge the user that the device is ringing by sending the device status information to the user.
f. Step 6: End
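A sketch of steps 2-5, using the stock AudioManager, RingtoneManager and SmsManager APIs of the platform; the sender number is assumed to come from the command SMS:

    import android.content.Context;
    import android.media.AudioManager;
    import android.media.Ringtone;
    import android.media.RingtoneManager;
    import android.telephony.SmsManager;

    public class RingHelper {
        static void ringAndAcknowledge(Context context, String sender) {
            AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            if (am.getRingerMode() != AudioManager.RINGER_MODE_NORMAL) {
                am.setRingerMode(AudioManager.RINGER_MODE_NORMAL);  // Step 3: leave silent/vibrate
            }
            am.setStreamVolume(AudioManager.STREAM_RING,
                    am.getStreamMaxVolume(AudioManager.STREAM_RING), 0);
            Ringtone tone = RingtoneManager.getRingtone(context,    // Step 4: enable ringing
                    RingtoneManager.getDefaultUri(RingtoneManager.TYPE_RINGTONE));
            if (tone != null) tone.play();
            SmsManager.getDefault().sendTextMessage(sender, null,   // Step 5: acknowledge by SMS
                    "Device is ringing (was in silent/vibrate mode)", null, null);
        }
    }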

3. Get location and acknowledge the user (a sketch follows the steps below).
Step 1: START
Step 2: Check that the internet is available.
Step 3: If the internet is available, then get the location details from the Network Provider.
Step 4: If the internet is not available, then check whether GPS is turned on.
Step 5: If GPS is available, then get the location details.
Step 6: Send the location information to the user.
Step 7: End
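A sketch of the network-first, GPS-fallback lookup with the platform LocationManager; the ACCESS_FINE_LOCATION and SEND_SMS permissions are assumed, and getLastKnownLocation() is used only as the simplest illustration of the decision order in steps 2-5:

    import android.content.Context;
    import android.location.Location;
    import android.location.LocationManager;
    import android.telephony.SmsManager;

    public class LocationReporter {
        static void sendLocation(Context context, String sender) {
            LocationManager lm =
                    (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
            Location loc = null;
            if (lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER)) {   // Steps 2-3
                loc = lm.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
            }
            if (loc == null && lm.isProviderEnabled(LocationManager.GPS_PROVIDER)) { // Steps 4-5
                loc = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER);
            }
            String report = (loc == null)
                    ? "Location unavailable: enable network or GPS"
                    : "Lat=" + loc.getLatitude() + ", Lng=" + loc.getLongitude();
            SmsManager.getDefault().sendTextMessage(sender, null, report, null, null); // Step 6
        }
    }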
(13)DATA FLOW DIAGRAM
The data flow diagram is a graphical representation that depicts information flow and the transforms that are applied as
data moves from input to output. The DFD may be used to represent a system or software at any level of abstraction.
In fact DFD may be partitioned into levels that represent increasing information flow and functional detail.
Level 0

Level 1

Level 2
Entity Relationship Modeling
P. P. Chen introduced the E-R model. Entity-Relationship modeling is a detailed logical representation of the entities, associations and data elements for an organization or business area.

Entities
An entity is a person, place, thing or event of interest to the organization and about which data are captured, stored or processed.

Attributes
Various types of data items that describe an entity are known as attributes.

Relationship
An association of several entities in an Entity-Relationship model is called a relationship.

Entity Relationship Diagram
The overall logical structure of a database can be expressed graphically by an entity relationship diagram.
ER DIAGRAM
It is an abstract and conceptual representation of the data. Entity Relationship modeling is a database modeling method, used to produce a type of conceptual schema. Entities:


(14) Testing
1. Unit Testing
a. Try to detect whether all application functions work correctly individually.
2. Integration Testing
a. Try to detect whether all these functions are accessible in our application and are properly integrated.

3. Integration Testing (end-to-end scenario)
a. Application starts on SMS receive.
b. Contents of the SMS are read and matched with the attention word.
c. Acknowledges the phone status to the requesting phone through SMS.
d. If it is the GPS attention word, then retrieves the current location details and sends them back to the requesting phone without the knowledge of the device user.
e. Application stops.

(15) Snapshots


(16) DEPLOYMENT
Software deployment is all of the activities that make a software system available for use.
An Android application can be deployed in multiple ways:
a. If you are using Eclipse, first create an Android Virtual Device in the AVD Manager, then right-click on your project and run it as an Android application.
b. You can export your package to your Android device and then browse to it to install.

(17) Future Enhancement
a. SMS/call filtering.
b. Allowing the user to specify his own attention words (database connectivity).
c. Lock the device and wipe its memory to keep your private data safe.
d. Control your Android remotely via a web-based interface through DroidLocator.
(18) Conclusion
Lost Android Mobile Phone Tracker is a unique and efficient application which is used to track a lost or misplaced Android phone.
All the features work on an SMS basis; therefore, the incoming SMS format plays a vital role. Our Android application running on the phone monitors all the incoming messages. If an SMS is meant for the application, it reads the message and performs the expected task.
We have created features which will enhance the existing cell-tracking system. The application stands apart from the existing system in that it makes use not only of the GPS value but also of GSM/text messaging services, which makes the application a simple and unique one.

VLSI Based Fluid Flow Measurement Using Constant Temperature Hot Wire Anemometer
Anuruddh Singh, Pramod Kumar Jain
Research Scholars (M.Tech), SGSITS
E-mail- anuruddh.singh@yahoo.co.in
Abstract The performance of a hot-wire anemometer configuration is affected by variation in the fluid temperature. The
classical temperature compensation techniques in such anemometers employ two sensors. The performance of a temperature-
compensated hot-wire anemometer configuration using a single sensor alternating between two operating temperatures and
proposed for constant fluid velocity is investigated under conditions of time-varying fluid velocity. The measurement error
introduced is quantified and can be practically eliminated using a low-pass digital filter.
Keywords: Electrical equivalence, fluid temperature compensation, hot-wire anemometer, thermoresistive sensor, measurement error, op-amp, CMRR.
INTRODUCTION
Constant-temperature hot-wire anemometer (CTA) circuits based on a feedback self-balanced Wheatstone bridge containing a thermo-resistive sensor are known to exhibit a relatively wide bandwidth [1]. The compensation of the effect of the fluid temperature Tf is usually done by employing an independent fluid-temperature sensor [1]-[5] or two similar feedback bridge circuits with two identical sensors operating at two different constant temperatures [6]. The finite nonzero amplifier input offset voltage does not permit the sensor temperature to remain constant with varying fluid velocity [7]. This offset voltage also affects the dynamic response of the feedback circuit; the circuit's temporal response is slower for a higher offset voltage. Further, it has been shown that when the amplifier input offset voltage is zero, or below a critical value, the circuit becomes oscillatory.
Thermal anemometry is the most common method used to measure instantaneous fluid velocity. The technique depends on the
convective heat loss to the surrounding fluid from an electrically heated sensing element or probe. If only the fluid velocity varies,
then the heat loss can be interpreted as a measure of that variable.
Thermal anemometry enjoys its popularity because the technique involves the use of very small probes that offer very high spatial
resolution and excellent frequency response characteristics. The basic principles of the technique are relatively straightforward and
the probes are difficult to damage if reasonable care is taken. Most sensors are operated in the constant temperature mode.
PRINCIPLE OF OPERATION

The operation is based on convective heat transfer from a heated sensing element possessing a temperature coefficient of resistance \alpha.
Hot-wire anemometers have been used for many years in the study of laminar, transitional and turbulent boundary layer flows and
much of our current understanding of the physics of boundary layer transition has come solely from hot-wire measurements.
Thermal anemometers are also ideally suited to the measurement of unsteady flows such as those that arise behind rotating blade
rows when the flow is viewed in the stationary frame of reference. By a transformation of co-ordinates, the time-history of the
flow behind a rotor can be converted into a pitch-wise variation in the relative frame so that it is possible to determine the structure
of the rotor relative exit flow. Until the advent of laser anemometry or rotating frame instrumentation, this was the only available
technique for the acquisition of rotating frame data.


Fig.1 - Block Diagram of the Fluid Flow Hot Wire Sensor (flow rate varies -> convective heat transfer coefficient h varies -> heat transfer from filament varies)
HOT WIRE EQUATION
To examine the behaviour of the hot wire, the general hot-wire equation must first be derived. This equation will be used to examine both the steady-state response of the hot wire, discussed here, and its frequency response, discussed later. By considering a small circular element of the hot wire, Figure 2, an energy balance can be performed, assuming a uniform temperature over its cross-section: the electrical heating of the element is balanced by forced convection, axial conduction, thermal storage and radiation,

\frac{I^{2}\chi_{w}}{A_{w}}\,dx = \pi d\,h\,(T_{w}-T_{a})\,dx - k_{w}A_{w}\,\frac{\partial^{2}T_{w}}{\partial x^{2}}\,dx + \rho_{w}c_{w}A_{w}\,\frac{\partial T_{w}}{\partial t}\,dx + \pi d\,\sigma\varepsilon\,(T_{w}^{4}-T_{a}^{4})\,dx   (1)

This can be simplified (Hojstrup et al., 1976), if radiation is neglected, to give the general hot-wire equation (2); its constants K1, K2 and K3, defined in (3)-(6), collect the wire material properties and the wire geometry.

Fig.2 - Heat Balance for an Incremental Element

A heat balance can then be performed over the whole wire, assuming that the flow conditions are uniform over the wire (7). The two heat-transfer components, given by (8) and (9), can be found from the flow conditions and the wire temperature distribution:
These combine to give the steady-state heat transfer equation

\dot{Q} = \pi d\,l\,h_{c}\,(T_{m}-T_{a})   (10)

where h_c is the convective heat-transfer coefficient, d and l are the wire diameter and length, T_m is the mean wire temperature and T_a is the ambient fluid temperature.

HOT WIRE ANEMOMETER DESIGN

The fluid-flow measurement circuit using a constant-temperature hot-wire anemometer is shown in Fig. 5. The input stage consists of M1 and M2; the biasing current is provided by M3 and M4, and the dc bias current is 1 nA. The output port Vout is connected to the M10 and M12 transistors.

Fig.3 - Schematic of CTA

Table 1 shows the dimensions of each transistor in the circuit. The input transistors M1 and M2 are drawn with identical sizes, with width-to-length ratio (W/L)1; similarly, transistors M8 and M9 are of the same size, (W/L)8. The PMOS current transistors M3, M4, M5 and M8 are of the same size, (W/L)3.

Table 1: W/L of CTA

Transistors         | W/L (um/um)
M1, M2              | 50/0.18
M3, M4, M5, M8, M9  | 4/0.18
M6, M7              | 1/0.18
M10                 | 30/0.18
M11                 | 50/0.18
M12                 | 45/0.18



Fig.4 - Common-mode supply    Fig.5 - Fluid Flow Measurement Using Constant Temperature Hot Wire Anemometer




SIMULATION AND RESULT



Fig.6 - Output of common-mode input supply



Fig.7 - Output of differential-mode input in dB



Fig.8- Output result
CONCLUSION

In this paper a fluid-flow measurement circuit using a constant-temperature hot-wire anemometer in 0.18 um technology is proposed. The input can vary in the range of microvolts. The simulation results give a gain of 64.2 dB, a CMRR of 70 dB, and a 400 Hz bandwidth. These results demonstrate that the proposed circuit can be used to develop an integrated circuit. The output obtained is in millivolts and is then amplified.
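For orientation, the reported decibel figures convert to linear ratios as

A_{v} = 10^{64.2/20} \approx 1.62 \times 10^{3}, \qquad \mathrm{CMRR} = 10^{70/20} \approx 3.16 \times 10^{3},

so, for example, a 10 uV differential input appears at the output as roughly 16 mV, consistent with the millivolt-level output described above.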


REFERENCES:
[1] Anderson, C. S., Semercigil, S. E. and Turan, Ö. F., 2003, Local Structural Modifications for Hot-Wire Probe Design,
Experimental Thermal and Fluid Science, Vol.27, pp. 193-198.

[2] Brunn, H. H., 1971, Linearization and hot wire anemometry, Journal of Physics & Scientific Instruments, Vol.4, pp. 815-
820.

[3] Bruun, H. H., 1995, Hot-Wire Anemometry-Principles and Signal Analysis, Oxford Science Publications, New York.

[4] Bruun, H. H., Khan, M. A., Al-Kayiem, H. H. and Fardad, A. A., 1988, Velocity Calibration Relationships for Hot-Wire
Anemometry, Journal of Physics & Scientific Instruments, Vol.21, pp. 225-232.

[5] Citriniti, J. H., Taulbee, K. D. and Woodward, S. H., 1994, Design of Multiple Channel Hot Wire Anemometers, Fluid
Measurement and Instrumentation, Vol.183, USA, pp. 67-73.

[6] Eguti, C. S. A.; Woiski, E. R. and Vieira, E. D R., 2002, A Laboratory Class for Introducing Hot-wire Anemometry in a
Mechanical Engineering Course, Proceedings (in CD ROM) of the ENCIT 2002 VIII Brazilian Congress of Thermal Science and
Engineering , Paper code: CIT02-0411, October, 15 18 Caxambu, MG.

[7] Goldstein, R. J., 1983, Fluid Mechanics Measurements, Hemisphere Publishing Corp., 630 p.; Gonçalves, H. C., 2001, Determinação Experimental da Frequência de Emissão de Vórtices de Corpos Rombudos, Master of Science dissertation, Unesp Ilha Solteira, 200p.

[8] Lekakis I., 1996, Calibration and Signal Interpretation for Single and Multiple Hot-wire/hot-filme Probes, Measurement.
Science and Technology, Vol. 7, pp.1313-1333.

[9] Lomas, C. G., 1986, Fundamentals of the Hot-wire Anemometry, Cambridge University Press.

[10] Menut, P. M., 1998, Anemometria de Fio-quente, Proceedings of the First Spring Schools of Transition and Turbulence,
(A. P. S. Freire ed.), Rio de Janeiro, pp.235-272.

[11] Möller, S. V., 2000, Experimentação em turbulência, Proceedings of the Second Spring School of Transition and Turbulence, (A. Silveira Neto ed.), Uberlândia, MG, pp. 63-97.

[12] Perry, A. E., 1982, Hot-Wire Anemometry, Oxford University Press, New York, 185 p.

[13] Persen, L. N. and Saetran, L. R., 1984, Hot-film Measurements in a Water Tunnel, Journal of Hydraulic Research, vol.21,
no. 4, pp. 379-387.

[14] Sasayama, T., Hirayama, T., Amano, M., Sakamoto, S., Miki, M., Nishimura, Y. and Ueno, S., 1983, A New Electronic
Engine Control Using a Hot-wire Airflow Sensor SAE Paper 820323, 1983.

[15] Vieira, E. D. R., 2000, A Laboratory Class for Introducing Hot-Wire Anemometry, Proceedings of the ICECE 2000 International Conference on Engineering and Computer Education, August 27-30, 2000, São Paulo, SP.

[16] Weidman, P. D. & Browand, F. K., 1975, Analysis of a Simple Circuit for Constant Temperature Anemometry, Journal of
Physics & Scientific Instruments, Vol.8, pp. 553-560





Canonical Cosine Transform: Novel Tools in Signal Processing
S. B. Chavhan
Yeshwant Mahavidyalaya, Nanded-431602, India
E-mail- chavhan_satish49@yahoo.in

Abstract: In this paper a theory of the distributional two-dimensional (2-D) canonical cosine transform is developed using the Gelfand-Shilov technique; some operators are defined on these spaces, and the topological structure of some of the S-type spaces of the distributional two-dimensional canonical cosine transform is studied.
Keywords: 2-D canonical transforms, generalized function, testing function space, S-type spaces, canonical cosine transform.

1. INTRODUCTION:
The linear canonical transform is a useful tool for optical analysis and signal processing. Fourier analysis is undoubtedly one of the most valuable and powerful tools in signal processing, image processing and many other branches of engineering. The fractional Fourier transform, a special case of the linear canonical transform, has been studied from different angles. Almeida [1], [2] introduced it and proved many of its properties. Namias [5] opened the way to defining the fractional transform through the eigenvalues, as in the case of the fractional Fourier transform. The conventional canonical cosine transform is defined as
\{CCT\, f(t)\}(s) = \sqrt{\frac{1}{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}} \int_{-\infty}^{\infty} \cos\Big(\frac{s}{b}\,t\Big)\, e^{\frac{i}{2}\frac{a}{b}t^{2}}\, f(t)\, dt.

It is easily seen that for each s \in R^{n} the function K_{c}(t,s) belongs to E(R^{n}) as a function of t, where

K_{c}(t,s) = \sqrt{\frac{1}{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}}\, e^{\frac{i}{2}\frac{a}{b}t^{2}} \cos\Big(\frac{s}{b}\,t\Big).

Hence the canonical cosine transform of f \in E'(R^{n}) can be defined by

\{CCT\, f(t)\}(s) = \langle f(t),\, K_{c}(t,s) \rangle,

where the right-hand side has a meaning as the application of f \in E' to K_{c}(t,s) \in E.
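For orientation, consider the special parameter choice (a, b, c, d) = (0, 1, -1, 0): both chirp factors become unity, and the canonical cosine transform reduces, up to the constant factor, to the ordinary Fourier cosine transform,

\{CCT\, f(t)\}(s) = \sqrt{\frac{1}{2\pi i}} \int_{-\infty}^{\infty} \cos(st)\, f(t)\, dt.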

As compared to the one-dimensional case, the canonical cosine transform has a considerably richer structure in two dimensions. The definition of the distributional two-dimensional canonical cosine transform is given in section 2. S-type spaces using the Gelfand-Shilov technique are developed in section 3. Section 4 is devoted to the operators on the above spaces. Section 5 discusses the results on the topological structures of some of the spaces. The notation and terminology are as per Zemanian [6], [7] and Gelfand-Shilov [3], [4].
2. DEFINITION OF THE TWO-DIMENSIONAL (2-D) CANONICAL COSINE TRANSFORM:
Let E'(R x R) denote the dual of E(R x R). The generalized canonical cosine-cosine transform of f(t,x) \in E'(R x R) is defined as

\{2D\text{-}CCCT\, f(t,x)\}(s,w) = \big\langle f(t,x),\; K_{C_{1}}(t,s)\,K_{C_{2}}(x,w) \big\rangle

i.e.

\{2D\text{-}CCCT\, f(t,x)\}(s,w) = \frac{1}{\sqrt{2\pi i b}}\,\frac{1}{\sqrt{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}}\, e^{\frac{i}{2}\frac{d}{b}w^{2}} \iint \cos\Big(\frac{s}{b}\,t\Big)\cos\Big(\frac{w}{b}\,x\Big)\, e^{\frac{i}{2}\frac{a}{b}t^{2}}\, e^{\frac{i}{2}\frac{a}{b}x^{2}}\, f(t,x)\; dx\, dt

where

K_{C_{1}}(t,s) = \frac{1}{\sqrt{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}s^{2}}\, e^{\frac{i}{2}\frac{a}{b}t^{2}} \cos\Big(\frac{s}{b}\,t\Big)  when  b \neq 0,
K_{C_{1}}(t,s) = \sqrt{d}\; e^{\frac{i}{2}cds^{2}}\,\delta(t - ds)  when  b = 0,

and

K_{C_{2}}(x,w) = \frac{1}{\sqrt{2\pi i b}}\; e^{\frac{i}{2}\frac{d}{b}w^{2}}\, e^{\frac{i}{2}\frac{a}{b}x^{2}} \cos\Big(\frac{w}{b}\,x\Big)  when  b \neq 0,
K_{C_{2}}(x,w) = \sqrt{d}\; e^{\frac{i}{2}cdw^{2}}\,\delta(x - dw)  when  b = 0,

with

\gamma_{E,k,l}\big(K_{C_{1}}K_{C_{2}}\big) = \sup_{|t|<\infty,\;|x|<\infty} \big| D_{t}^{k} D_{x}^{l}\, K_{C_{1}}(t,s)\,K_{C_{2}}(x,w) \big| < \infty.
3. VARIOUS TESTING FUNCTION SPACES:
In this section several spaces consisting of infinitely differentiable functions are defined on the first and second quadrants of the coordinate plane.

3.1 The space CC_{\alpha}^{a,b}: It is given by

CC_{\alpha}^{a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C_{k,q}\, A^{l}\, l^{l\alpha} \Big\}.   (3.1)

The constants C_{k,q} and A depend on \phi.

3.2 The space CC^{\beta,a,b}:

CC^{\beta,a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C_{l,q}\, B^{k}\, k^{k\beta} \Big\}.   (3.2)

The constants C_{l,q} and B depend on \phi.

3.3 The space CC_{\alpha}^{\beta,a,b}: This space is formed by combining the conditions (3.1) and (3.2):

CC_{\alpha}^{\beta,a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C\, A^{l}\, B^{k}\, l^{l\alpha}\, k^{k\beta} \Big\},   (3.3)

l, k, q = 0, 1, 2, \ldots, where A, B and C depend on \phi.
Next we introduce subspaces of each of the above spaces; they are used in defining the inductive limits of these spaces.

3.4 The space CC_{\alpha,m}^{a,b}: It is defined as

CC_{\alpha,m}^{a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C_{k,q}\, (m+\rho)^{l}\, l^{l\alpha} \Big\}   (3.4)

for any \rho > 0, where m is a constant depending on the function \phi.

3.5 The space CC^{\beta,n,a,b}: This space is given by

CC^{\beta,n,a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C_{l,q}\, (n+\sigma)^{k}\, k^{k\beta} \Big\}   (3.5)

for any \sigma > 0, where the constant n depends on the function \phi.

3.6 The space CC_{\alpha,m}^{\beta,n,a,b}: This space is defined by combining the conditions in (3.4) and (3.5):

CC_{\alpha,m}^{\beta,n,a,b} = \Big\{ \phi \in E :\; \gamma_{l,k,q}(\phi) = \sup_{t,x \in I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C\, (m+\rho)^{l} (n+\sigma)^{k}\, l^{l\alpha}\, k^{k\beta} \Big\}   (3.6)

for any \rho > 0, \sigma > 0 and for given m > 0, n > 0. Unless specified otherwise, the spaces introduced in (3.1) through (3.6) will henceforth be considered equipped with their natural, Hausdorff, locally convex topologies, denoted respectively by

T_{\alpha}^{a,b},\; T^{\beta,a,b},\; T_{\alpha}^{\beta,a,b},\; T_{\alpha,m}^{a,b},\; T^{\beta,n,a,b},\; T_{\alpha,m}^{\beta,n,a,b}.

These topologies are respectively generated by the corresponding total families of seminorms \{\gamma_{a,b,l,k,q}\}.

4 SOME BOUNDED OPERATORS IN S-TYPE SPACES:
This section is devoted to the study of different types of linear operators, namely the shifting operator, the differentiation operator and the scaling operator, on the CC_{\alpha}^{\beta,a,b} space. These operators are found to be bounded (and continuous) on CC_{\alpha}^{\beta,a,b}.

Proposition 4.1: If \phi(t,x) \in CC_{\alpha}^{\beta,a,b} and \tau is a fixed real number, then \phi(t+\tau, x) \in CC_{\alpha}^{\beta,a,b} for t+\tau > 0.
Proof: Consider

\gamma_{l,k,q}(\phi(t+\tau,x)) = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t+\tau, x) \big| = \sup_{I} \big| (t')^{l} D_{t'}^{k} D_{x}^{q}\, \phi(t', x) \big|,  where t' = t + \tau,
\le C\, A^{l}\, B^{k}\, l^{l\alpha}\, k^{k\beta}.

Thus \phi(t+\tau, x) \in CC_{\alpha}^{\beta,a,b} for t+\tau > 0.

Proposition 4.2: The translation (shifting) operator T : \phi(t,x) \to \phi(t+\tau, x) is a topological automorphism on CC_{\alpha}^{\beta,a,b} for t+\tau > 0.
Proposition 4.3: If \phi(t,x) \in CC_{\alpha}^{\beta,a,b} and \lambda > 0 is a strictly positive number, then \phi(\lambda t, x) \in CC_{\alpha}^{\beta,a,b}.
Proof: Consider

\gamma_{l,k,q}(\phi(\lambda t, x)) = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(\lambda t, x) \big| = \sup_{I} \Big| \Big(\frac{T}{\lambda}\Big)^{l} D_{T}^{k} D_{x}^{q}\, \phi(T, x) \Big|,  where T = \lambda t,
= C_{l}\, \sup_{I} \big| T^{l} D_{T}^{k} D_{x}^{q}\, \phi(T, x) \big|,  where C_{l} is a constant depending on \lambda,
\le C_{1} C_{2}\, A^{l}\, l^{l\alpha}\, B^{k}\, k^{k\beta} = C\, A^{l}\, l^{l\alpha}\, B^{k}\, k^{k\beta},  where C = C_{1} C_{2}.

Thus \phi(\lambda t, x) \in CC_{\alpha}^{\beta,a,b} for \lambda > 0.

Proposition 4.4: If \lambda > 0 and \phi(t,x) \in CC_{\alpha}^{\beta,a,b}, then the scaling operator R_{\lambda} : CC_{\alpha}^{\beta,a,b} \to CC_{\alpha}^{\beta,a,b} defined by R_{\lambda}\phi = \psi, where \psi(t,x) = \phi(\lambda t, x), is a topological automorphism.
Proposition 4.5: The operator \phi(t,x) \to D_{t}\,\phi(t,x) is defined on the space CC_{\alpha}^{\beta,a,b} and transforms this space into itself.
Proof: Let \phi(t,x) \in CC_{\alpha}^{\beta,a,b}. If \psi(t,x) = D_{t}\,\phi(t,x), we have

\gamma_{l,k,q}(\psi) = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \psi(t,x) \big| = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q} D_{t}\, \phi(t,x) \big| = \sup_{I} \big| t^{l} D_{t}^{k+1} D_{x}^{q}\, \phi(t,x) \big|
\le C\, A^{l}\, l^{l\alpha}\, B^{k+1}\, (k+1)^{(k+1)\beta}.

Hence \psi(t,x) = D_{t}\,\phi(t,x) \in CC_{\alpha}^{\beta,a,b}.

5 TOPOLOGICAL PROPERTIES OF THE CC_{\alpha}^{a,b} SPACE:
This section is devoted to results on the topological structures of some of the spaces and to results exhibiting their relationships. Attention is also paid to strict inductive limits of some of these spaces.

Theorem 5.1: (CC_{\alpha}^{a,b}, T_{\alpha}^{a,b}) is a Fréchet space.
Proof: As the family \Lambda_{\alpha}^{a,b} of seminorms \{\gamma_{a,b,l,k,q}\}_{l,k,q=0}^{\infty} generating T_{\alpha}^{a,b} is countable, it suffices to prove the completeness of the space (CC_{\alpha}^{a,b}, T_{\alpha}^{a,b}).
Let us consider a Cauchy sequence \{\phi_{n}\} in CC_{\alpha}^{a,b}. Hence for a given \epsilon > 0 there exists an N = N_{l,k,q} such that for m, n > N,

\gamma_{a,b,l,k,q}(\phi_{m} - \phi_{n}) = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q} (\phi_{m} - \phi_{n})(t,x) \big| < \epsilon.   (5.1)

In particular, for l = k = q = 0 and m, n > N,

\sup_{I} \big| \phi_{m}(t,x) - \phi_{n}(t,x) \big| < \epsilon.   (5.2)

Consequently, for fixed (t,x) in I, \{\phi_{n}(t,x)\} is a numerical Cauchy sequence. Let \phi(t,x) be the pointwise limit of \{\phi_{m}(t,x)\}. Using (5.2) we can easily deduce that \{\phi_{m}\} converges to \phi uniformly on I. Thus \phi is continuous; moreover, repeated use of (5.1) for different values of l, k, q yields that \phi is smooth, i.e. \phi \in E_{+}. Further, from (5.1) we get

\gamma_{a,b,l,k,q}(\phi_{m}) \le \gamma_{a,b,l,k,q}(\phi_{n}) + \epsilon \le C_{k,q}\, A^{l}\, l^{l\alpha} + \epsilon   for m, n > N.

Taking m \to \infty and noting that \epsilon is arbitrary, we get

\gamma_{a,b,l,k,q}(\phi) = \sup_{I} \big| t^{l} D_{t}^{k} D_{x}^{q}\, \phi(t,x) \big| \le C_{k,q}\, A^{l}\, l^{l\alpha}.

Hence \phi \in CC_{\alpha}^{a,b} and it is the T_{\alpha}^{a,b}-limit of \{\phi_{m}\} by (5.1). This proves the completeness of CC_{\alpha}^{a,b}, and (CC_{\alpha}^{a,b}, T_{\alpha}^{a,b}) is a Fréchet space.
Proposition 5.2: If m_{1} < m_{2} then CC_{\alpha,m_{1}}^{a,b} \subset CC_{\alpha,m_{2}}^{a,b}. The topology of CC_{\alpha,m_{1}}^{a,b} is equivalent to the topology induced on CC_{\alpha,m_{1}}^{a,b} by CC_{\alpha,m_{2}}^{a,b}, i.e. T_{\alpha,m_{1}}^{a,b} \sim T_{\alpha,m_{2}}^{a,b} / CC_{\alpha,m_{1}}^{a,b}.
Proof: For \phi \in CC_{\alpha,m_{1}}^{a,b},

\gamma_{a,b,l,k,q}(\phi) \le C_{k,q}\, (m_{1} + \rho)^{l}\, l^{l\alpha} \le C_{k,q}\, (m_{2} + \rho)^{l}\, l^{l\alpha};

thus CC_{\alpha,m_{1}}^{a,b} \subset CC_{\alpha,m_{2}}^{a,b}. The second part is clear from the definition of the topologies of these spaces. The space CC_{\alpha}^{a,b} can thus be expressed as a union of countably normed spaces.


6. CONCLUSION:
In this paper the two-dimensional canonical cosine transform is generalized in the distributional sense; some operators on these spaces are proved, and the topological structure of some of the S-type spaces is discussed.



REFERENCES:
[1] Almeida Luís B., The fractional Fourier transform and time-frequency representations, IEEE Trans. on Signal Processing, Vol. 42, No. 11, Nov. 1994.
[2] Almeida Luís B., An introduction to the Angular Fourier transform, IEEE, 1993.
[3] Gelfand I.M. and Shilov G.E., Generalized Functions, Volume I, Academic Press, New York, 1964.
[4] Gelfand I.M. and Shilov G.E., Generalized Functions, Volume II, Academic Press, New York, 1967.
[5] Namias Victor, The fractional order Fourier transform and its application to quantum mechanics, J. Inst. Math. Applics., Vol. 25, pp. 241-265, 1980.
[6] Zemanian A.H., Distribution theory and transform analysis, McGraw Hill, New York, 1965.
[7] Zemanian A.H., Generalized integral transform, Inter Science Publishers, New York, 1968.
Students' and Teachers' Perception of the Causes of Poor Academic Performance in General and Further Mathematics in Sierra Leone: A Case Study of Bo District, Southern Province
Gegbe B., Koroma J.M.
Department of Mathematics and Statistics, School of Technology, Njala University
E-mail- bgegbe@njala.edu

ABSTRACT: The essential basis for the economic and social well-being of any country lies in its people's understanding of basic mathematical, scientific and technological knowledge. This concern is not just about those expected to continue in further studies and professions related to mathematics, science, technology and economics; a mathematically competent population forms a basis for national growth and development. A number of different forces have led to strong concern about the low quality of mathematical knowledge, skills, values and performance among students over the last few decades in Bo City. Over the years, many different perceptions about General Mathematics and Further Mathematics have been held. This study examines the poor performance among senior secondary students at the West African Senior School Certificate Examination level in Bo City, Sierra Leone. The target population of the study included one hundred (100) students and seventy-five (75) teachers randomly selected from five (5) secondary schools in Bo City. Questionnaires were used to collect relevant data for the study. Chi-square tests were used to analyse the research questions; other data are presented in the form of percentages. Teacher qualification and student environment did not influence students' poor performance, but teaching methods influenced the poor performance of students in General Mathematics and Further Mathematics. Teachers should encourage and motivate students to adore the mathematics-related subjects. Students must develop positive attitudes towards the teacher and the subject matter.
KEYWORDS: Academic performance; perception; qualifications; student and teacher
ACKNOWLEDGMENT
I owe a debt of gratitude to God Almighty through Jesus for giving me knowledge, wisdom and understanding throughout my academic pursuit.
My sincere thanks go to Miss Marian Johnson, who worked assiduously as a typist to ensure that this work came to an end. I am particularly grateful to my wife for her architectural role in my academic activities. Thanks and appreciation go to my mother and late father, who nurtured me to the level I am at today.
INTRODUCTION
The essential basis for the economic and social well-being of any country lies in its people's understanding of basic mathematical, scientific and technological knowledge. A number of different forces have led to strong concern about the low quality of mathematical knowledge, skills, values and performance among students in the last few decades in Bo City. The concern is not just about those expected to continue in further studies and professions related to Mathematics, Science, Technology and Economics; a mathematically competent population forms a basis for national growth and development. This concern has often revolved around how mathematically literate students are surviving in the rapidly advancing scientific and technological world we live in today.
In Sierra Leone, for instance, the differential scholastic achievement of students has been, and still remains, a source of concern and research interest to educators, government and parents. This is so because of the great importance that education has for the national development of the country. Also, there is a consensus of opinion about the fallen standard of education; parents and governments are in total agreement that their investment in education is not yielding the desired dividend. Teachers also have continued to complain of students' low performance at both internal and external examinations as a result of peer group influence.
Considering that General Mathematics is a prerequisite for admission to all Science and Technology related subjects at university, such a result is worrisome. The situation is almost the same for Further Mathematics, for which the average pass was 3.7%. Looking at the analysis, it is not surprising that the Gbamanja Commission recommended the following:
- Awarding Grants-in-Aid to all female students who had gained admission to tertiary institutions to study science courses such as Mathematics, Physics, Chemistry, Biology and Engineering options.
- Recruiting another four thousand (4000) teachers (2008).

Education at secondary school level is supposed to be the bedrock and the foundation towards higher knowledge in tertiary institutions. It is an investment as well as an instrument that can be used to achieve more rapid economic, social, political, technological, scientific and cultural development in the country. The national policy on education (2004) stipulated that secondary education is an instrument for national development, fostering the general development of the society and equality of educational opportunities for all Sierra Leonean children, irrespective of any real or marginal disabilities. The role of secondary education is to lay the foundation for further education, and if a good foundation is laid at this level, there are likely to be no problems at subsequent levels.
However, different people at different times have passed the blame for poor performance in secondary school to students because of their low retention, parental factors, association with wrong peers, low achievement motivation and the like (Aremu & Sokan, 2003; Aremu & Oluwole, 2001).
Morakinyo (2003) believed that the falling level of academic achievement is attributable to teachers' own use of verbal reinforcement strategy. Others found out that the attitude of some teachers to their job is reflected in their poor attendance at lessons, lateness to school, unsavoury comments about students' performance that could damage their ego, poor methods of teaching and the like, which affect pupils' academic performance.
This research is geared towards students' and teachers' perception of the causes of poor academic performance in General and Further Mathematics in Sierra Leone.
Further mathematics in Sierra Leone.
STATEMENT OF THE PROBLEM
According to the Audit Service Sierra Leone Report 2009, it was observed that external examination results remained poor in 2009, and out of the total of three billion, eighty million, one hundred and sixty thousand Leones (Le 3,080,160,000) paid for West African Senior School Certificate Examination (WASSCE) fees by the government, the sum of two billion, seven hundred and eighty-one million, three hundred and forty-eight thousand Leones (Le 2,781,348,000) was for candidates whose credit passes were so low that they could not qualify for entry to any tertiary institution of learning in Sierra Leone. The poor performance of pupils in the 2008 Basic Education Certificate Examination (BECE) and West African Senior School Certificate Examination (WASSCE) in Sierra Leone prompted His Excellency the President to set up the Professor Gbamanja Commission of Enquiry to investigate reasons for such dismal performance. The tables below show the WASSCE results in Sierra Leone for 2007, 2008 and 2009 in General Mathematics and Further Mathematics.
Table 1: General Mathematics

Year | Total Number of Candidates | Credit (A1-C6) % | Failed (above C6) %
2007 | 18397 | 4 | 96
2008 | 23799 | 4 | 96
2009 | 2922  | 5 | 95

Data source: WASSCE

Table 2: Further Mathematics

Year | Total Number of Candidates | Credit (A1-C6) % | Failed (above C6) %
2007 | 384  | 4 | 96
2008 | 2770 | 4 | 96
2009 | 2084 | 5 | 96

Data source: WASSCE
From the above tables, for the 2007-2009 academic years students' performance in the above subjects proved to be absolutely poor: on average only 4.3% of the total number of students offering General Mathematics at WASSCE level obtained a credit or better. Considering that General Mathematics is a prerequisite for admission to all Science and Technology related subjects at university level, such a result is worrisome. The situation is almost the same for Further Mathematics, for which the average percentage pass was 3.7%; looking at the above analysis, this is not surprising.
JUSTIFICATION
All over the country there is a consensus of opinion on the fallen standards of education in Sierra Leone; parents and the Government are in total agreement that their huge investment in education is not yielding the desired dividend. Teachers also complain of students' low performance at both internal and external examinations.
The West African Senior School Certificate Examination (WASSCE) results conducted by the West African Examinations Council (WAEC) justify the problematic nature and generality of poor secondary school students' performance in different school subjects.
The question, as stated earlier, is: what is the cause of this fallen standard and poor academic performance of students? Is the fault entirely that of teachers or students, or both of them? Is it that students of today are non-achievers because they have a low intelligence quotient and lack a good neural mechanism to be able to act purposefully, think rationally and deal effectively with academic tasks? Or is it because teachers are no longer putting in as much commitment as before? Or is it in teachers' methods of teaching and interaction with pupils? Or is the poor performance of students caused by parents' neglect, separation and poverty? The present study therefore sought to find out students' and teachers' perception of the causes of poor academic performance among secondary school students in Bo City.

THE PURPOSE OF STUDY
The purpose of this study is to find out whether there are significant differences between methods of teaching and academic performance, teachers' qualification and academic performance, and students' environment and poor academic performance.
RESEARCH QUESTIONS
This research will attempt to answer the following questions:
i. What is the perception of teachers on students' poor performance and teachers' qualification?
ii. What is students' perception of teachers' qualification and students' poor academic performance?
iii. What is the perception of teachers on students' poor performance and teachers' method of teaching?
iv. What is the students' perception of their academic performance and teachers' methods of teaching?
v. What is teachers' perception of students' environment and students' poor performance?
vi. What is the students' perception of students' environment and poor academic performance?

RESEARCH OBJECTIVES
The specific objectives of the study are to identify:
a) Demographic information of respondents and gender
b) Using the Chi-Square test, to determine:
i. The perception of teachers on students' poor performance and teachers' qualification.
ii. The students' perception of teachers' qualification and students' poor academic performance.
iii. The perception of teachers on students' poor academic performance and teachers' method of teaching.
iv. The students' perception of their poor academic performance and teachers' method of teaching.
v. The teachers' perception of students' environment and students' poor performance.
vi. The students' perception of students' environment and poor academic performance.

STUDY AREA
This study was carried out in five (5) randomly selected senior secondary schools in Bo City. The following schools were selected for the purpose of the study:
i. Ahmadiya Muslim Secondary School (AMSS)
ii. Bo Government Secondary School (Bo School)
iii. Christ the King College (CKC)
iv. Queen of the rosary Secondary School (QRS)
v. Saint Andrews Secondary School (UCC)

SCOPE OF THE STUDY
The study seeks to investigate students' and teachers' perception of the causes of poor academic performance in five (5) randomly selected senior secondary schools in the central part of Bo City.
RESEARCH DESIGN
The researcher randomly selected five (5) senior secondary schools in Bo City to consult with the teachers and students about some of the problems affecting students' and teachers' perception of the causes of poor academic performance in General Mathematics and Further Mathematics teaching at school. The study adopted a descriptive survey design. This is because the researcher is only interested in determining the influence of the independent variables on the dependent variables without manipulating any of the variables. The variables that were identified in the study for the research questions and the data collection instrument were:
i. Students' poor academic performance and teachers' qualifications
ii. Students' poor academic performance and teachers' method of teaching
iii. Students' environment and poor academic performance.

This study of the poor performance in General Mathematics and Further Mathematics was done using the qualitative method. This was done by using a questionnaire in which students and teachers used a Likert scale, to gather the data for the quantitative aspect of the study. The information was analysed for correlations between the variables in the study. The results of the questionnaire were placed into themes for reporting. The researcher attempted to record students' and teachers' reactions to the impact of attitude on classroom performance. In addition to the Likert scale questions, the subjects were asked to qualify their answers with a brief explanation or comment. The students and teachers were notified of the study. The participants maintained complete anonymity in the study. The surveys were returned to the researcher through a person-to-person process. The researcher collected all of the surveys, and the data were subjected to statistical analysis.
SAMPLING PROCEDURE AND SAMPLE SIZE
Simple random sampling was used to select five (5) major secondary schools in Bo City. The standard of the schools was also taken into consideration for a better yield of results.
INSTRUMENTATION
The main instrument designed for the study is a self-designed questionnaire on perceptions of students' poor academic performance. The questionnaire contained two (2) sections:
A - Contains demographic information.
B - Requires responses chosen from alternative options by the respondents; options ranged from strongly agree to strongly disagree.
The researcher used the following instruments in the study:
i. A well-structured questionnaire, which helped the researcher to attain a high response rate.
ii. An informal interview, used to complement the effect of the questionnaire. This was done in the form of conversational discussion.
iii. Secondary data from the examinations office, obtained formally.
DATA COLLECTION
At the various schools, the researcher introduced himself to the principal, class teachers and the students, and briefed the school authorities and the students about the purpose of his visit and study. The researcher equally explained to the subjects what their role should be during the training programme. The researcher then randomly selected the number of students needed for the study, gave them the questionnaire and explained to them how to respond to it.
The process of responding to the questionnaire was explained to the students to ensure that valid data were collected. The researcher printed and administered one hundred and seventy-five questionnaires to both students and teachers: one hundred (100) were administered to students and seventy-five (75) to the teachers.
Eighty-five (85) questionnaires were collected from students and seventy (70) from the teachers; therefore a total of one hundred and fifty-five (155) out of one hundred and seventy-five (175) were collected from both teachers and students. Primary data were collected from the students and teachers to determine the performance of students in General Mathematics and Further Mathematics at WASSCE. The data obtained were analysed using frequency counts and chi-square statistical analysis with the formula

\chi^{2} = \sum \frac{(O - E)^{2}}{E}

where \chi^{2} = chi-square, O = observed frequency and E = expected frequency.
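To make the computation concrete, the sketch below (illustrative only, not part of the original study) evaluates this formula in Java; the sample values are the observed and expected frequencies of the first row of Table 4, where each expected frequency in a contingency table is obtained as (row total x column total) / grand total:

    public class ChiSquare {
        // chi2 = sum over cells of (O - E)^2 / E
        static double chiSquare(double[] observed, double[] expected) {
            double chi2 = 0.0;
            for (int i = 0; i < observed.length; i++) {
                double diff = observed[i] - expected[i];
                chi2 += diff * diff / expected[i];
            }
            return chi2;
        }

        public static void main(String[] args) {
            // Observed and expected counts for item 1 of Table 4 (SA, A, UU, D, SD)
            double[] o = {23, 32, 4, 17, 6};
            double[] e = {8, 21, 2, 36, 15};
            // The full test sums this quantity over every row of the table and compares
            // it with the critical value at (r - 1)(c - 1) degrees of freedom.
            System.out.println(chiSquare(o, e));
        }
    }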
TREATMENT OF DATA
The data collected were compiled, organized and interpreted. This led to the computation of percentages of teachers' and students' responses to the questionnaire and interviews based on their perception of the problems. Frequency counts and the chi-square statistic were the two methods used to analyse the collected data.
HYPOTHESES
In attempting to reach decisions, it is useful to make assumptions about the populations involved. Such assumptions, which may or may not be true, are called statistical hypotheses. They are generally statements about the probability distributions of the populations. The chi-square test was used to test the following null (H0) and alternative (H1) hypotheses.

1) H0: Teachers perceive that teachers' qualification does not affect poor academic performance among secondary school students.
   H1: Teachers perceive that teachers' qualification does affect poor academic performance among secondary school students.
2) H0: Students perceive that teachers' qualification does not have an impact on their academic performance.
   H1: Students perceive that teachers' qualification does have an impact on their academic performance.
3) H0: Teachers' method of teaching and learning materials does not influence students' academic performance.
   H1: Teachers' method of teaching and learning materials does influence students' academic performance.
4) H0: Students perceive that teachers' methods of teaching and learning materials do not influence students' academic performance.
   H1: Students perceive that teachers' methods of teaching and learning materials do influence students' academic performance.
5) H0: Teachers do not perceive students' environment as influencing their academic performance.
   H1: Teachers do perceive students' environment as influencing their academic performance.
6) H0: Students perceive that their environment does not affect their academic performance.
   H1: Students perceive that their environment does affect their academic performance.

THE RESULT OF PRIMARY DATA
This chapter presents the results of the study. It does so in the context of the research questions in chapter one. The results of the analysis are presented as follows:

DEMOGRAPHIC INFORMATION OF RESPONDENTS
Table 3: Gender of respondents

Respondents | Male No. (%) | Female No. (%) | Total %
Teachers    | 50 (71.4)    | 20 (28.6)      | 100
Students    | 52 (62.4)    | 32 (37.6)      | 100

Table 3 shows the gender of the respondents. It can be observed that of the teachers who returned the questionnaire, fifty males (71.4%) and twenty females (28.6%) responded. Of the students who returned the questionnaire, fifty-two males (62.4%) and thirty-two females (37.6%) responded.
Table 4: Perception of teachers on students' poor academic performance and teachers' qualification (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Lack of quality teachers has an adverse effect on the poor performance of students | 23 (8) | 32 (21) | 4 (2) | 17 (36) | 6 (15) | 82
2 | Most teachers do not have adequate knowledge of their subject matter | 2 (16) | 22 (42) | 4 (4) | 85 (73) | 50 (29) | 163
3 | Teachers' extreme dependence on textbooks can lead to poor academic performance | 4 (16) | 17 (43) | 1 (4) | 115 (75) | 31 (30) | 168
4 | Seminars, workshops and in-service courses are not organized for teachers | 15 (11) | 40 (43) | 3 (3) | 42 (53) | 18 (21) | 118
5 | Inadequate teaching skill | 6 (13) | 38 (36) | 3 (3) | 70 (62) | 22 (25) | 139
6 | Poor status of teachers with economic stress has drained the motivation of the teachers | 22 (8) | 45 (21) | 2 (2) | 6 (36) | 8 (15) | 83
HYPOTHESIS:
H0: Teachers perceive that teachers' qualification does not affect poor academic performance among secondary school students.
H1: Teachers perceive that teachers' qualification does affect poor academic performance among secondary school students.

At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (5-1)(6-1) = 4 x 5 = 20
\chi^{2} (table) = 31.41
\chi^{2} (calculated) = (23-8)^{2}/8 + (32-21)^{2}/21 + (4-2)^{2}/2 + \cdots + (8-15)^{2}/15 = 228.5



DISCUSSION
Since the calculated \chi^{2} (228.5) is greater than the table \chi^{2} (31.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Teachers perceive that teachers' qualification affects poor academic performance among secondary school students.
Table 5: Perception of students on their poor academic performance and teachers' qualification (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Lack of quality teachers has an adverse effect on the poor performance of students | 75 (27) | 50 (43) | 4 (18) | 52 (78) | 41 (56) | 222
2 | Most teachers do not have adequate knowledge of their subject matter | 30 (40) | 59 (62) | 4 (29) | 142 (113) | 87 (81) | 322
3 | Teachers' extreme dependence on textbooks can lead to poor academic performance | 28 (30) | 62 (58) | 115 (18) | 80 (135) | 102 (125) | 387
4 | Seminars, workshops and in-service courses are not organized for teachers | 30 (45) | 58 (71) | 18 (30) | 135 (129) | 125 (92) | 366
5 | Inadequate teaching skill | 33 (41) | 54 (64) | 13 (27) | 160 (117) | 72 (81) | 332
6 | Poor status of teachers with economic stress has drained the motivation of the teachers | 42 (38) | 91 (60) | 6 (26) | 113 (109) | 60 (72) | 312

HYPOTHESIS:
H0: Students perceive that teachers' qualification does not have an impact on their academic performance.
H1: Students perceive that teachers' qualification does have an impact on their academic performance.

CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (5-1)(6-1) = 20; \chi^{2} (table) = 31.41
\chi^{2} (calculated) = (75-27)^{2}/27 + (50-43)^{2}/43 + (4-18)^{2}/18 + \cdots + (60-72)^{2}/72 = 459.6

DISCUSSION
Since the calculated \chi^{2} (459.6) is greater than the table \chi^{2} (31.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Students perceive teachers' qualification as having an impact on their academic performance.
Table 6: Perception of teachers on the influence of teachers' method of teaching and learning materials on students' poor academic performance (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | A large number of students accommodated in a classroom makes it difficult for the teacher to maintain classroom management | 22 (6) | 49 (16) | 0 (1) | 10 (41) | 0 (18) | 81
2 | Teachers are not innovative in methodology | 4 (12) | 18 (32) | 0 (2) | 112 (85) | 33 (36) | 167
3 | Instructional materials are not provided for the teachers to use in teaching various subjects; teachers never organize inter-class and inter-school debates for the students | 15 (9) | 20 (25) | 2 (1) | 70 (66) | 22 (28) | 129
4 | Inadequate supervision by the inspectors in secondary schools | 6 (10) | 22 (26) | 4 (1) | 75 (169) | 28 (29) | 135
5 | Teachers do not plan their lessons adequately | 2 (13) | 14 (36) | 1 (2) | 93 (96) | 79 (41) | 189
6 | There are no adequate textbooks in schools | 15 (9) | 51 (24) | 2 (1) | 34 (63) | 22 (27) | 124

HYPOTHESIS
H0: Teachers' method of teaching and learning materials does not influence students' academic performance.
H1: Teachers' method of teaching and learning materials does influence students' academic performance.

CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (5-1)(7-1) = 4 x 6 = 24
\chi^{2} (table) = 36.41
\chi^{2} (calculated) = (22-6)^{2}/6 + (49-16)^{2}/16 + (0-1)^{2}/1 + \cdots + (22-27)^{2}/27 = 329.03

DISCUSSION
Since the calculated \chi^{2} (329.03) is greater than the table \chi^{2} (36.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Teachers' method of teaching and learning materials influences students' academic performance.

Table 7: Perception of students on the influence of teachers' method of teaching and learning materials on students' poor performance (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | A large number of students accommodated in a classroom makes it difficult for the teacher to maintain classroom management | 61 (52) | 133 (16) | 4 (7) | 58 (127) | 32 (78) | 288
2 | Teachers are not innovative in methodology | 18 (30) | 37 (66) | 31 (9) | 218 (16) | 60 (98) | 364
3 | Instructional materials are not provided for the teachers to use in teaching various subjects; teachers never organize inter-class and inter-school debates for the students | 29 (36) | 61 (73) | 9 (9) | 158 (178) | 145 (108) | 402
4 | Inadequate supervision by the inspectors in secondary schools | 21 (28) | 58 (61) | 2 (8) | 159 (149) | 97 (91) | 337
5 | Teachers do not plan their lessons adequately | 18 (36) | 39 (79) | 4 (10) | 289 (191) | 132 (116) | 432
6 | Teachers are dedicated to their teaching subjects | 26 (31) | 56 (69) | 7 (9) | 172 (169) | 116 (102) | 372
7 | There are no adequate textbooks in schools | 39 (30) | 83 (66) | 3 (8) | 129 (160) | 108 (97) | 362

HYPOTHESIS
H0: Students perceive that teachers' methods of teaching and learning materials do not influence students' academic performance.
H1: Students perceive that teachers' methods of teaching and learning materials do influence students' academic performance.

CALCULATION
At the 5% level of significance, degrees of freedom: (r-1)(c-1) = (5-1)(7-1) = 4 x 6 = 24
\chi^{2} (table) = 36.41
\chi^{2} (calculated) = (61-52)^{2}/52 + (133-16)^{2}/16 + (4-7)^{2}/7 + \cdots + (108-97)^{2}/97 = 446.6

DISCUSSION
Since the calculated \chi^{2} (446.6) is greater than the table \chi^{2} (36.41), we reject the null hypothesis and accept the alternative hypothesis.

CONCLUSION
Students perceive that teachers' method of teaching and learning materials does influence students' academic performance.
Table 8: Perception of teachers on students' environment and students' poor performance (observed frequencies, with expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Students have no negative attitude to their studies | 17 (21) | 40 (45) | 4 (2) | 22 (14) | 6 (7) | 89
2 | Most students' background/environment does not stimulate learning or studies | 24 (22) | 41 (46) | 0 (2) | 14 (15) | 13 (7) | 92
3 | The level of the parents' education affects their children's academic performance | 20 (20) | 41 (46) | 2 (2) | 21 (15) | 7 (7) | 91
4 | Peer group influence on students | 20 (22) | 63 (49) | 3 (2) | 2 (15) | 7 (7) | 95
5 | Divorce among parents affects the academic performance of students | 25 (20) | 43 (43) | 1 (2) | 14 (14) | 2 (7) | 85

HYPOTHESIS
H0: Teachers do not perceive students' environment as influencing their academic performance.
H1: Teachers do perceive students' environment as influencing their academic performance.
CALCULATION
At the 5% level of significance:
Degrees of freedom = (r-1)(c-1) = (5-1)(5-1) = 4 x 4 = 16
χ²(table) = 26.3
χ²(cal) = (17-21)²/21 + (40-45)²/45 + (4-2)²/2 + … + (1-2)²/2 + (14-14)²/14 + (2-7)²/7
χ²(cal) = 39.46
DISCUSSION
Since the calculated χ² (39.46) is greater than the table χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
CONCLUSION
Teachers do perceive students' environment as influencing their academic performance.
Table 9: Perception of students on students' environment and their poor academic performance (expected frequencies in parentheses)

Item | Variable | SA | A | UU | D | SD | Row Total
1 | Students have no negative attitude to their studies | 46 (53) | 99 (108) | 13 (9) | 118 (94) | 71 (83) | 347
2 | Most students' background/environment does not stimulate learning or studies | 41 (49) | 79 (101) | 3 (8) | 102 (88) | 98 (77) | 323
3 | The level of the parents' education affects their children's academic performance | 40 (49) | 82 (100) | 2 (8) | 92 (87) | 104 (77) | 320
4 | Peer group influence on students | 44 (40) | 90 (81) | 7 (7) | 77 (71) | 42 (62) | 260
5 | Divorce among parents affects the academic performance of students | 63 (50) | 131 (90) | 13 (7) | 29 (79) | 53 (69) | 289
HYPOTHESIS
H0: Students perceive that environment does not affect their academic performance.
H1: Students perceive that environment does affect their academic performance.
CALCULATION
At the 5% level of significance:
Degrees of freedom = (r-1)(c-1) = (5-1)(5-1) = 4 x 4 = 16
χ²(table) = 26.3
χ²(cal) = (46-53)²/53 + (99-108)²/108 + (13-9)²/9 + … + (13-7)²/7 + (29-79)²/79 + (53-69)²/69
χ²(cal) = 117.6
DISCUSSION
Since the calculated χ² (117.6) is greater than the table χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
CONCLUSION
Students perceive that environment does affect their academic performance.
SUMMARY OF FINDINGS
1. Perception of teachers on students' poor academic performance and teachers' qualification: since the calculated χ² (228.5) is greater than the table χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.
2. Perception of students on their poor academic performance and teachers' qualification: since the calculated χ² (459.4) is greater than the table χ² (31.41), we reject the null hypothesis and accept the alternative hypothesis.
3. Perception of teachers on the influence of teachers' method of teaching and learning materials on students' poor academic performance: since the calculated χ² (329.03) is greater than the table χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.
4. Perception of students on the influence of teachers' method of teaching and learning materials on students' poor academic performance: since the calculated χ² (446.6) is greater than the table χ² (36.41), we reject the null hypothesis and accept the alternative hypothesis.
5. Perception of teachers on the students' environment and their poor performance: since the calculated χ² (39.46) is greater than the table χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
6. Perception of students on students' environment and their poor academic performance: since the calculated χ² (117.6) is greater than the table χ² (26.3), we reject the null hypothesis and accept the alternative hypothesis.
7. Moreover, it has been revealed that the students face enormous constraints in learning general mathematics. These constraints involve difficulty in understanding the concepts taught, the cost of learning materials, and the inefficiency of teachers.
8. Furthermore, the findings show that departments are not given the much-needed motivation to kindle efficiency in the teaching of General Mathematics and Further Mathematics. Finally, it has been revealed that the teachers in the department face great constraints in teaching General and Further Mathematics. Some of these constraints include an inadequate and inappropriate class size for individual attention, the limited amount of time allocated to teaching the subject, and the overstretching of teachers due to insufficient staff in the schools.
DISCUSSION OF FINDINGS
The purpose of this study was to determine whether there is a correlation between students' and teachers' perceptions of the causes of poor academic performance in General and Further Mathematics in the classroom.
A five-point Likert scale survey was used to assess the attitudes toward General Mathematics and Further Mathematics of eighty-five (85) students and seventy (70) teachers in five (5) randomly selected secondary schools. The first three questions in the survey gathered general demographic information about the respondents: name, gender and age. The remaining eighteen questions elicited the respondents' opinions, with options ranging from strongly agree to strongly disagree.
For research questions one and two, teachers believed that students' poor academic performance is not influenced by teachers' qualification, while students perceived that teachers' qualification affects their academic performance. The difference in their perceptions could be because students have high expectations of the teachers who teach them, and therefore believe that any teacher who does not meet such expectations will not aid their academic performance. However, from the conclusions above, the students' perception is that students' poor academic performance is influenced by teachers' qualification.
Also, only teachers perceive that teachers' method of teaching and learning materials influences students' academic performance; that is, the falling level of academic achievement is attributed to teachers' non-use of the verbal reinforcement strategy. Students' disagreement with this may be because they perceive that students' personal factors affect their academic performance more than teachers' method of teaching and the learning environment.
CONCLUSION
Based on the findings, the following conclusions were arrived at:
1. Teachers perceive that teachers' qualification affects poor academic performance among secondary school students.
2. Students perceive teachers' qualification as having an impact on their academic performance.
3. Teachers' method of teaching and learning materials influences students' academic performance.
4. Students perceive that teachers' method of teaching and learning materials does influence students' academic performance.
5. Teachers do perceive students' environment as influencing their academic performance.
6. Students perceive that environment does affect their academic performance.
Quality in Design and Architecture - A Comprehensive Study
Ruqia Bibi¹, Munazza Jannisar Khan¹, Muhammad Nadeem Majeed¹
¹University of Engineering and Technology, Taxila, Pakistan
E-mail: ruqia.kibria@yahoo.com
Abstract: Design quality has a decisive influence on the success of a product. Our work is a comprehensive study of software metrics for evaluating quality in the design phase, which helps to find and repair design problems and saves a large amount of potential expenditure. This paper evaluates the employment of several design methods, such as Robust Engineering Design and Failure Mode and Effect Analysis, to ensure quality and minimize variation. It also highlights the use of emerging technologies, new materials and the adoption of a simultaneous approach to design. It introduces a quality-attribute-driven perspective on software architecture reconstruction. It is essential to select and follow architectures that fulfill specific concerns or required properties with a certain degree of confidence, as architecture and design models together signify the functional behavior of the system.

Keywords: FMEA (Failure Mode and Effect Analysis); QMOOD (Quality Model for Object Oriented Design); DEQUALITE (Design Enhanced Quality Evaluation); EBS (Engineering Breakdown Structure); OBS (Objective Breakdown Structure); SOA (Service Oriented Architecture); EJB (Enterprise JavaBeans).
INTRODUCTION
The quality of a system is vital and is considered a conditional, perceptual and often subjective attribute. It is always crucially handled in system production and must be expressed in a quantified manner. It is not just a marketing phrase, nor is it created by control; it is a function which must be designed and synthesized into the evolution and development of a product [10]. Software quality is closely tied to classes, the organization of classes and, most importantly, their design. Quality in the design of a system lays its foundation on a set of lucid and correct decisions in the design process. To a great extent, quality in design is determined by the level of the designer's decision-making skill. Designers, taking emergence into account, should concurrently bring design quality factors under consideration across the product's life cycle [1]. The quality of design is influenced by several factors, which include, inter alia, the designers or design team involved in the project; the design techniques, tools and methods employed during the design process; the quality and level of available technical knowledge; the quality of the management of the design process; and the nature of the environment under which the design process is carried out. These factors, in one way or another, have a significant influence on the quality of both the design process and the resulting product. To design quality into a product requires the adoption of a planned and controlled approach to design, which can be accomplished by using a methodical or systematic design process [2]. The on-line parameter design mode involves optimization of the controllable variables with respect to expected levels of outcome quality parameters. The identification phase embeds the establishment of a strategy, a model, which has the ability to relate quality response characteristics to the controllable and uncontrollable variables [4]. It is an accepted claim that systematic, well-defined process control and evaluation of each phase in development improves the overall quality of a software product [7].
Our research work comprises the analysis and illustration of software metrics that embed quality in design and architecture, along with the prior approaches proposed and followed. The design of systems is essential in producing product quality. A dig-deep strategy is applied to each approach, and every implicated methodology is discussed. The paper is structured as follows: Section 2 discusses quality in design and architecture techniques and is further divided into subsections, each comprising a separate analysis of a former research work; Section 3 presents the analysis stage of our work, where three tables summarize and depict the evaluation parameters; and Section 4 presents the conclusions drawn from all the above-mentioned study.
QUALITY IN DESIGN AND ARCHITECTURE TECHNIQUES
Quality is one of the most important issues in software development. A developed software product results in customer dissatisfaction if it does not meet the quality standards. The quality of a software product relies on a complete understanding and evaluation of the underlying design and architecture. Previous studies [2][6] mentioned that high-level design descriptions help in predicting the quality of a product and are thus used as an important quality assurance technique. Problems left unnoticed in the design phase penetrate into later development stages, where even a minute change costs much. These factors point to the need for methods or techniques which can reduce issues related to the design phase and hence contribute to the overall quality of a system. The issues stated above have a vital impact on the product's output. Our study surveys various approaches that have been applied or proposed to deal with these concerns. Several techniques, such as MQI [1], on-line parameter design [3] and the Factor-Strategy model [4], have been proposed by researchers. This paper surveys these techniques for achieving quality in design and architecture.
MANUFACTURING QUALITY INFORMATION (MQI) MODEL
In previous studies, quality information regarding the product and design has not been a matter of consideration by designers and other persons accountable for the project, because of the difficulty they face in capturing quality information. To cover the aspects left uncovered, the researchers presented a model named MQI (Manufacturing Quality Information) that helps in making decisions related to the design phase by managing manufacturing quality information through a layered approach. The quality information is divided into three layers, i.e. the application, logical and physical layers. An IDEF0 diagram is used to demonstrate the supporting design decisions in MQI. The proposed model not only shortens the development life cycle but also reduces the cost dramatically.
DESIGN FUNCTION DEVELOPMENT SYSTEM
The authors of this paper mentioned that quality is a function which must be designed and synthesized into the evolution and development of a product and/or process at the early stages of engineering design. They developed a Design Function Development (DFD) system, built by expanding Quality Function Deployment and integrating other important aspects of design [2].
APPROACH FOR INTELLIGENT QUALITY CONTROLLERS
During the production or operation phase, some uncontrollable factors are left unnoticed which, if observed, would reveal significant improvements in quality. The researchers proposed a methodology called on-line parameter design that uses the extra information about these uncontrollable factors. The methodology has two distinct modes: identification mode and on-line parameter design mode. Feed-forward neural networks are recommended for modeling quality response characteristics. The proposed quality controllers are tested on a plasma etching manufacturing process [3].
Figure 1: Proposed framework for performing on-line parameter design [3]
FACTOR-STRATEGY TECHNIQUE
As we move from one software product to another, assessment against external quality attributes becomes harder because of the increasing complexity and variation of design properties. This research analyzes the effect of object-oriented design on top-level quality factors and, to quantify the impact, proposes a novel approach called detection strategy. The proposed Factor-Strategy model has two major characteristics: an intuitive construction and a direct link between causes/problems and the design level [4].

Figure 2: The Factor-Strategy model concept [4]

For automation, the researchers developed the Pro-Detection toolkit, which inspects code quality by utilizing detection strategies.
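
To make the idea concrete, here is a minimal sketch of what a metrics-based detection strategy can look like: metric thresholds composed with logical operators. The metric names (WMC, ATFD, TCC) and the thresholds are illustrative choices in the spirit of [4], not the exact rules implemented by the Pro-Detection toolkit.

```python
# Sketch of a metrics-based detection strategy: metric thresholds are
# composed with logical operators to flag a suspect design entity.
# The thresholds below are illustrative, not the published values.
def looks_like_god_class(wmc: int, atfd: int, tcc: float) -> bool:
    """wmc: weighted methods per class; atfd: accesses to foreign data;
    tcc: tight class cohesion (0..1). Returns True for a design-flaw candidate."""
    return wmc >= 47 and atfd > 5 and tcc < 1.0 / 3.0

print(looks_like_god_class(wmc=60, atfd=8, tcc=0.10))  # True -> candidate flaw
```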
QUALITY ATTRIBUTE DRIVEN SOFTWARE ARCHITECTURE RECONSTRUCTION (QADSAR)
During the development phase, architects need to look back at existing systems to inspect methodical obstacles to incorporating new technology approaches. The researchers proposed the Quality Attribute Driven Software Architecture Reconstruction (QADSAR) approach to present reasoning and to illuminate the information needed to link the organization's goals to the gained information. By using software architecture reconstruction, several aims can be served: understanding and improving the architecture of existing systems, assessing the quality characteristics and improving the documentation of the overall system. QADSAR proved to be an important contribution when system types and their quality attributes were studied in detail.

Figure 3: The QADSAR steps [5]
ANALYSIS OF QUALITY MODELS TO DESIGN SOFTWARE ARCHITECTURE
The success of an architecture depends on the quality of its design. A quality software product can be achieved if each development stage is evaluated and controlled in a well-defined process. The choice of quality model plays a vital role in establishing the quality of a software architecture. The researchers discussed three approaches based on quality models: ISO 9126, ABAS (Attribute Based Architectural Styles) and Dromey. These approaches are useful in introducing design-related quality issues into the development process. The analysis pointed out the lack of a unified language and the shared view that the high-level characteristics of a software product must be quantified [6].
ANALYSIS OF SOFTWARE QUALITY FROM A DESIGN AND ARCHITECTURE PERSPECTIVE
The integration of reusable components has proved beneficial for the evolution of software products, but it also demands a complete understanding of the previous version of the software. For that, an understanding of the code is not sufficient; other descriptions, such as design and architecture descriptions, are also necessary. The paper focuses on a cognitive approach based on previous knowledge and experience. Since design and architecture primarily express functional aspects, an experiment was conducted to identify whether it is possible to represent some non-functional aspects as well. The research concluded that incorporating these representations in design and architecture is worthwhile, helping developers maintain and evolve complex software systems [7].
QMOOD (QUALITY MODEL FOR OBJECT ORIENTED DESIGN)
A rapid upsurge in environmental changes introduces various business challenges for organizations. The issue is highlighted and addressed by presenting a hierarchical model for quality assessment in service-oriented architecture. The suggested model recognizes design problems before they flow into the implementation phase of the system.
Figure 4: SOA design components [8]

The flow of design problems into later stages makes the defects difficult to resolve and consumes more resources. The research approach extends the QMOOD model for object-oriented software. Metrics that evaluate design quality in the product design provide organizations with an opportunity to save large expenditure on problem resolution [8].
THE CRUCIAL FACTORS FOR PRODUCT QUALITY IMPROVEMENT
The authors [9] targeted the design quality concerns often introduced for industrial practitioners by the large distances between manufacturing and design departments in supply chains. Design quality holds a crucial prominence, and early involvement of the manufacturer in the supply chain is essential. To justify its importance, a case study of Chinese-made toys is brought under consideration. The study illustrates a model named the design-manufacture chain model. The paper also presents a quality relationship model between design quality, product quality and manufacturing quality by elucidating a conceptual framework. The outcome can sensitize the industrial domain to the intended actors and their associations with the product quality and design process.

Figure 5: Quality relationship model [9]
DEQUALITE (DESIGN ENHANCED QUALITY EVALUATION) APPROACH
The authors' work [10] presents quality models that take into account the design of systems, specifically antipatterns, design patterns and code smells. Quality models are presented to calculate the quality of object-oriented systems. Diverse methodologies are presented in the prior work, whose aim is to enhance the design of systems with good quality features. An approach called DEQUALITE (Design Enhanced Quality Evaluation), which builds such quality models, is proposed. The method focuses on the internal attributes of object-oriented systems and their design, and measures quality. Quality-assurance personnel, practitioners, developers and managers can use this technique, which is being instituted as a working tool that can evaluate the quality of their systems [10].
RELIABILITY & QUALITY IN DESIGN
The authors [11] open a discussion about quality improvement and an annual reliability plan for design and for the way designing is carried out. The work presents an overview of the design process from the quality and reliability perspective. The paper shows two major approaches to design: transactions and transformations. In the transaction approach, creativity in design is encouraged according to a strategy that emphasizes the project's output performance and value relative to the time and cost factors. The transformation approach improves the methodology followed in design, carrying it throughout design production. A systematic approach is used in the improvement effort for the design process; the improvement strategy should address the input and output activities of the design process. The reliability attribute of systems is shown according to the organization's chart and is of two kinds: field-failure analysis and predictive reliability. The paper relates the problem to diverse perspectives in an organization that are not identified [11].
DESIGN QUALITY MANAGEMENT APPROACH
Design defects that flow into the construction and operating stages cause large resource expenditure; it has been proposed that 40% of quality problems are caused by flaws in product design. The authors [12] present a project life cycle that introduces design quality management. A questionnaire-based survey is used to collect and investigate diverse opinions from the relevant departmental personnel. EBS-OBS based design quality matrices are applied in the case study. Communication among all the personnel is considered important. Based on the survey results, the authors rate design quality management across the project life cycle as essential.
Figure 6: Information interaction in the project life cycle [12]
DESIGN AND IMPLEMENTATION OF AN IMPROVED CHANNELIZED ARCHITECTURE
The authors [13] illustrated a digital channelized receiver architecture, covering the theory of the arithmetic and its implementation for real-time signal processing. The proposed architecture's performance conforms in quality with the conventional architecture strategy. The study analyzes the convolution in the non-blind-spot digital channelized receiver; the filter bank structure is also achieved using two modules. The research work concluded that the suggested architecture is beneficial in solving processor resource issues [13].
QUALITY MEASUREMENT IN OBJECT-ORIENTED DESIGN
The authors of this research work [14] depicted that, by the use of adequate quantification strategies, quality can be calculated in object-oriented software systems. Metrics alone do not portray enough information for reaching a verdict about the code transformations that can help to enhance quality. A mechanism known as Factor-Strategy is recommended, in which goodness in design is expressed in terms of metrics, confirming the design quality. The work concludes that the detection methodology is beneficial: it finds the design problems, and design heuristics are enumerated in metrics-based rules.
STUDY OF MINING PATTERNS TO SUPPORT SOFTWARE ARCHITECTURE EVALUATION
The authors [15] illustrated an approach that depicts the software architecture evaluation process. This approach takes into account the systematic extraction of architecturally essential data from software design and architecture patterns; the patterns used are EJB architecture usage patterns. Any benefits claimed by a pattern can only be achieved by applying the same tactics. The paper also examines the validation of the published patterns. The major research objective presented by the authors is to distill quality-attribute-sensitive scenarios and improve software architecture (SA) design. The study suggests that software patterns are a helpful and important source of information about architecture; this information is later extracted and documented systematically to improve the SA evaluation process.
ANALYSIS
As the above sections depict, our survey encompasses fifteen approaches and uses sixteen parameters for evaluation. Table 2 shows the results of the analysis against the evaluation parameters defined in the evaluation criteria of Table 1. Through the analysis of Table 2, it is found that almost all the techniques use tool support, including [2, 4]. All the techniques have the quality parameter of reusability, showing integration with components that conform to their specification and are clearly defined and verified. The research work stated in [5, 6, 7, 12] caters for behavior specification. Most of these methodologies have the robustness parameter; robustness testing is a quality assurance methodology focused on testing the robustness of software, and the term has also been used to describe the process of verifying the robustness (i.e. correctness) of test cases in a test process. Xianlong XU [1] uses a case study of heavy vehicle transmission design, Jie Ding [3] a case study of on-line parameter design, and Christoph Stoermer [5] a case study of automotive body components, while Yanmei Zhu's [9] research relates to a case study on Chinese-made toys, focusing on design quality from the industrial perspective and addressing their manufacturing flaws [9]. Many of the techniques do not address testability, although testability is an important quality characteristic of software: a lack of testability contributes to higher test and maintenance effort. The testability of a product needs to be measured throughout the life cycle, which means starting with testability requirements, i.e. requirements related to the testability of the software product. Stakeholders for testability requirements include the customers and software users, since testability is important to shorten maintenance cycles and to locate residual errors. The research previously stated in [4, 6, 8, 10, 14] deals with language interoperability, whereas the rest of the approaches lack this competence.
Table 1: Evaluation criteria for quality in design and architecture

Evaluation Parameter | Meaning | Possible Values
Tool support | A tool is produced for the proposed design. | Yes, No
Performance | In terms of responsiveness and stability. | Yes, No
Language interoperability | A language translator for real-time implementation. | Yes, No
Behavior specification | Functional decomposition and representation of the problem. | Yes, No, State chart, Other modeling notation
Maintainability | Whether it can be restored to a specified condition within a specified period of time. | Yes, No
Usability | The user interface in software development should provide usability for its intended audience. | Yes, No
Testability | Whether the proposed design is testable. | Yes, No
Security | Whether the software is able to withstand hostile acts and influences. | Encryption algorithm, No
Case study | Support of examples. | Yes, No
Reliability | The probability that the system will perform its intended function, without failure, for a specified time interval. | Yes, No
Correctness | Whether the required functions are performed accurately. | Yes, No
Robustness | Whether it is able to operate under stress or tolerate unpredictable or invalid input. | Yes, No
Reusability | The ability to add further features with slight or no modification. | Yes, No
Timing constraint | Quality specification through timing. | Yes, No
UML compliant | Whether the UML standard has been followed. | Yes, No
Extensibility | New capabilities can be added to the software without changes to the underlying architecture. | Yes, No
S# | Technique | Correctness | Reliability | Case Study | Testability | Maintainability | Language Interoperability
1 | Xianlong XU et al., 2007 | Yes | Yes | Case study of heavy vehicle transmission design | No | Yes | No
2 | S. Sivaloganathan et al., 1997 | Yes | Yes | No | No | Yes | No
3 | Jie Ding et al., 2000 | Yes | No | Plasma etching process modeling and on-line parameter design | Yes | Yes | No
4 | Ratiu et al., 2004 | Yes | Yes | Yes | Yes | Yes | C++ and Java
5 | Christoph Stoermer et al., 2000 | Yes | Yes | Related to automotive body components | Yes | Yes | No
6 | Francisca Losavio et al., 2001 | Yes | Yes | No | Yes | Yes | Object oriented
7 | Lars Bratthall et al., 2002 | Yes | Yes | No | Yes | Yes | No
8 | Bingu Shim et al., 2005 | Yes | Yes | No | No | Yes | Yes
9 | Yanmei Zhu et al., 2008 | Yes | Yes | Chinese-made toys | No | Yes | No
10 | Foutse Khomh et al., 2009 | Yes | Yes | No | No | Yes | OOP
11 | W.A. Golomski & Associates, 1995 | Yes | Yes | No | No | Yes | No
12 | Luo Yan et al., 2009 | Yes | Yes | Yes | No | Yes | No
13 | Xu Shichao et al., 2009 | Yes | No | No | No | Yes | No
14 | Radu Marinescu, 2005 | Yes | Yes | Yes | Yes | Yes | OOP interoperability
15 | Muhammad Ali Babar et al., 2004 | Yes | Yes | No | No | Yes | No

Table 2: Analysis of parameters of quality attributes in design and architecture
CONCLUSIONS
The approaches presented in the preceding sections can be highly advantageous for introducing design quality issues into the development process. There should be a way to show quality-related attributes clearly in a system's design and architecture. A good product design covering its users' needs generates a quality product; it defines the product's congenital, inherent quality. The improvement of product design quality depends on a set of rational and right decisions in the design process. The evaluation and control of each stage of development in a well-defined process will improve the overall quality of the final software product.
The quality of design is influenced by several factors, which include, inter alia, the designers or design team involved in the project; the design techniques, tools and methods employed during the design process; the quality and level of available technical knowledge; the quality of the management of the design process; and the nature of the environment under which the design process is carried out. The quality practices that link the internal attributes of a system to its external features are limited to fault proneness and do not consider the system's design. This makes it hard to differentiate between a well-structured system and a poorly designed one, even though their respective designs are the first things that maintainers see. Design flaws make the expenditure large later, in the construction and operation stages, so quality in design has a great influence on the life-cycle quality of the project. There is hardly ever a perfect software design: the process of producing a software design is error-prone and makes no exception. Defects in a system's design have an adverse effect on quality attributes such as flexibility or maintainability. Thus, the identification and detection of these design problems is essential for evaluation and for making a product of improved quality.
REFERENCES:
[1] Xianlong Xu and Shurong Tong, "A model of manufacturing quality information supporting design," International Conference on Industrial Engineering and Engineering Management, IEEE, 2007.
[2] Evbuomwan, Sivaloganathan and Jebb, "Design function deployment - a design for quality system," Customer Driven Quality in Product Design, IEEE, 1997.
[3] Chinnam R. B., Jie Ding and May G. S., "Intelligent quality controllers for on-line parameter design," IEEE Transactions on Semiconductor Manufacturing, 2000.
[4] Marinescu and Ratiu, "Quantifying the quality of object-oriented design: the factor-strategy model," Proceedings of the 11th Working Conference on Reverse Engineering, 2004.
[5] Christoph Stoermer and Liam O'Brien, "Moving Towards Quality Attribute Driven Software Architecture Reconstruction," Robert Bosch Corporation / Software Engineering Institute, Carnegie Mellon University, USA, 2000.
[6] Francisca Losavio and Ledis Chirinos, "Quality models to design software architectures," Proceedings of Technology of Object-Oriented Languages and Systems, 7 August 2001.
[7] Lars Bratthall and Claes Wohlin, "Understanding Some Software Quality Aspects from Architecture and Design Models," Dept. of Communication Systems, Lund University, 2002.
[8] Bingu Shim, Siho Choue, Suntae Kim and Sooyoung Park, "A Design Quality Model for Service-Oriented Architecture," 15th Asia-Pacific Software Engineering Conference, 3 December 2005.
[9] Yanmei Zhu, Jianxin You and Alard, "Design Quality: The Crucial Factor for Product Quality Improvement in International Production Networks," 4th International Conference on Wireless Communications, Networking and Mobile Computing, February 2008.
[10] Khomh, "Software Quality Understanding through the Analysis of Design," WCRE '09: 16th Working Conference on Reverse Engineering, 2009.
[11] Golomski, W. A., "Reliability and quality in design," IEEE, 1995.
[12] Luo Yan, Mao Peng and Chen Qun, "Innovation of Design Quality Management Based on Project Life Cycle," International Conference on Management and Service Science, December 2009.
[13] Xu Shichao, Gao Meiguo and Liu Guoman, "Design and implementation of an improved channelized architecture," International Conference on Computer Science and Information Technology, August 2009.
[14] Radu Marinescu, "Measurement and quality in object-oriented design," Proceedings of the 21st IEEE International Conference on Software Maintenance, IEEE, 2005.
[15] Liming Zhu, Muhammad Ali Babar and Ross Jeffery, "Mining Patterns to Support Software Architecture Evaluation," 2004.
A Review of Different Content Based Image Retrieval Techniques
Amit Singh¹, Parag Sohoni¹, Manoj Kumar¹
¹Department of Computer Science & Engineering, LNCTS Bhopal, India
E-Mail: amit.singh5683@gmail.com
Abstract: The extraction of features and their representation from a large database is the major issue in content-based image retrieval (CBIR). Image retrieval is an interesting and rapidly developing methodology in many fields, and CBIR is an effective and well-organized approach to retrieving images. In a CBIR system, images are stored in the form of low-level visual information, so a direct correlation with high-level semantics is absent. Several methodologies have been developed to bridge the gap between high-level and low-level semantics. For retrieval, the features of the stored images are first extracted and then trained; after this preprocessing is complete, they are compared with the features of the query image. In this paper, different approaches to this problem are studied and discussed.
Keywords: CBIR, Extraction, Semantic gap, DWT, SVM, Relevance Feedback, EHD, Color model.
INTRODUCTION
With developments in computer technology and the advent of the internet, there has been an explosion in the amount and the complexity of digital data being produced, stored, conveyed, analyzed and accessed. Much of this information is multimedia in nature, comprising digital images, audio, video, graphics and text information. In order to make use of this enormous amount of data, proficient and valuable techniques to retrieve multimedia information based on its content need to be developed. Among all the forms of multimedia, the image is the prime factor.
Image retrieval techniques are split into two categories: text-based and content-based. A text-based algorithm comprises some special words, such as keywords. Keywords and annotations should be assigned to each image when the images are stored in a database. The annotation operation is time-consuming and tedious; in addition, it is subjective. Furthermore, the annotations are sometimes incomplete, and it is possible that some image features may not be mentioned in the annotations [1]. In a CBIR system, images are automatically indexed by their visual contents through extracted low-level features, such as shape, texture, color, size and so on [1, 2]. However, extracting all the visual features of an image is a difficult task, and there is a problem, namely the semantic gap: presenting high-level visual concepts using low-level visual features is very hard. In order to alleviate these limitations, some researchers use both techniques together, using different features. This combination improves the performance compared to each technique separately [3, 4].
In this paper, there are two steps for answering a query to retrieve an image. First, some keywords are used to retrieve similar images; after that, some special visual features such as color and texture are extracted. In other words, in the second step, CBIR is applied. Color moments for the color features and the co-occurrence matrix for the extraction of texture features have been computed. This paper is organized as follows: the next section explains content-based image retrieval systems, Section 3 focuses on related work in the field, Section 4 explains different CBIR techniques, and the last section concludes the paper.
CONTENT BASED IMAGE RETRIEVAL
A typical CBIR system automatically extracts the visual attributes (color, shape, texture and spatial information) of each image in the database based on its pixel values and stores them in a separate database within the system called the feature database [5, 6]. The feature data for each of the visual attributes of each image is very much smaller in size than the image data. The feature database contains an abstraction of the images in the image database; each image is represented by a compact representation of its contents, like color, texture, shape and spatial information, in the form of a fixed-length, real-valued, multi-component feature vector or signature. The user usually prepares a query image and presents it to the system. The system extracts the visual attributes of the query image in the same way as it does for each database image, then identifies images in the database whose feature vectors match those of the query image and sorts the best matches according to their similarity value. During operation, the system processes the compact feature vectors rather than the large image data, which makes CBIR cheap, speedy and proficient compared with text-based retrieval. A CBIR system can be used in one of two ways. The first is precise image matching, that is, matching two images: an example image and another image in the image database. The second is approximate image matching, which finds images that match a query image very closely [7].
Fig. 1. Block diagram of semantic image retrieval

Basically, CBIR uses two approaches for retrieving images from the image database:
TEXT-BASED APPROACH (INDEX IMAGES USING KEYWORDS)
CONTENT-BASED APPROACH (INDEX IMAGES USING IMAGES)
Text-Based Approach:
The text-based method takes keyword descriptions as input and returns similar types of images as output. Examples: Google, Lycos, etc. [14].
Content-Based Approach:
The content-based approach uses an image as the input query and generates similar types of images as output [14].
RELATED WORK
Various methods have been proposed to extract the features of images from very large databases. In this paper, the following retrieval algorithms are discussed:
a) Jisha. K. P, Thusnavis Bella Mary. I, Dr. A. Vasuki [8]: proposed a semantic-based image retrieval system using the Gray Level Co-occurrence Matrix (GLCM) for texture attribute extraction. On the basis of the texture features, a semantic explanation is given to the extracted textures. The images are retrieved according to user satisfaction, thereby lessening the semantic gap between low-level and high-level features.
b) Swati Agarwal, A. K. Verma, Preetvanti Singh [9]:
The proposed algorithm performs image retrieval based on shape and texture features, not only on color information. First, the input image is decomposed into wavelet coefficients; these give the generally horizontal, vertical and diagonal features of the image. Subsequent to the wavelet transform (WT), the Edge Histogram Descriptor (EHD) is used on selected wavelet coefficients to gather information on the dominant edge orientations. The combination of the DWT and EHD methods increases the performance of the image retrieval system for shape- and texture-based retrieval. The performance of diverse wavelets is also compared to find the appropriateness of a particular wavelet function for image retrieval. The proposed algorithm is trained and examined on a large image database. The retrieval results are conveyed in terms of precision and recall and compared with other proposed schemes to show the superiority of the scheme.
c) Xiang-Yang Wang, Hong-Ying Yang, Dong-Ming Li [10]: proposed a new content-based image retrieval technique using color and texture information, which achieves higher retrieval effectiveness. Initially, the image is converted from RGB space to an opponent chromaticity space, and the individuality of the color contents of the image is captured by using Zernike chromaticity distribution moments in the chromaticity space. Next, the texture attributes are extracted using a rotation-invariant and scale-invariant image descriptor in the contourlet domain, which provides an efficient and flexible approximation of early processing in the human visual system. Lastly, the amalgamation of the color and texture information provides a vigorous feature set for color image retrieval. The experimental results reveal that the proposed color image retrieval is more accurate and efficient in retrieving the user-interested images.
d) S. Manoharan, S. Sathappan [11]: implemented high-level filtering using anisotropic morphological filters, a hierarchical Kalman filter and a particle filter, proceeding with a feature extraction method based on color and gray-level features; subsequent to this, the results were normalized.
e) Heng Chen and Zhicheng Zhao [12]: described a relevance feedback method for image retrieval. Relevance feedback (RF) is an efficient method for content-based image retrieval (CBIR), and it is also a realistic step toward shortening the semantic gap between low-level visual features and high-level perception. An SVM-based RF algorithm is proposed to advance the performance of image retrieval. In classifier training, a sample-expanding method is adopted to balance the proportion of positive samples and negative samples; a fusion method for multiple classifiers based on adaptive weighting is then proposed to vote on the final query results.
f) Monika Daga, Kamlesh Lakhwani [13]: proposed a new CBIR classification developed using the negative selection algorithm (NSA) of artificial immune systems (AIS). MATLAB functionalities are used to build a fresh CBIR system which has reduced complexity and whose retrieval effectiveness increases, in percentage terms, depending upon the image type.
g) S. Nandagopalan, Dr. B. S. Adiga, and N. Deepak [15]: proposed a novel technique for generalized image retrieval based on semantic contents, grouping three feature extraction methods, namely color, texture, and the edge histogram descriptor. There is a need to include new features in the future for better retrieval efficiency; any combination of these techniques that is more suitable for the application can be used for retrieval. This is presented through a User Interface (UI) in the form of relevance feedback. The image properties analyzed in this work use computer vision and image processing algorithms: for color, the histograms of the images are calculated; for texture, co-occurrence-matrix-based entropy, energy, etc. are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For the retrieval of images, a new idea based on a greedy approach is developed to lessen the computational complexity.
h) G. Pass [16]: proposed a novel method to describe spatial features in a more precise way; moreover, this model is invariant to scaling, rotation and shifting. In the proposed method, the segments are objects of the images: all images are segmented into several pieces, and the ROI (Region of Interest) technique is applied to extract the ROI region and enhance user interaction.
i) Yamamoto [17]: proposed a content-based image retrieval system which takes into account the spatial information of colors by using multiple histograms. The proposed system roughly captures the spatial information of colors by dividing an image into two rectangular sub-images recursively. The method divides an image into two dominant regions using a straight vertical or horizontal line, even when the image has three or more color regions and the shape of each region is not rectangular. In each sub-image, the division process continues recursively until each region has a homogeneous color distribution or the size of each region becomes smaller than a given threshold value. As a result, a binary tree which roughly represents the color distribution of the image is derived. The tree structure facilitates the evaluation of similarity among images.
DIFFERENT IMAGE RETRIEVAL TECHNIQUES
Various techniques have been proposed to retrieve images effectively and efficiently from large sets of image data; some of these methods are described below.

Relevance Feedback:
Every user's need is different and time-varying. A typical scenario for relevance feedback in content-based image retrieval is as follows [19] (a minimal sketch of the loop is given after the steps):
Step 1: The machine provides early retrieval results.
Step 2: The user provides an opinion on the currently exhibited images according to the degree to which they are relevant or irrelevant to her/his request.
Step 3: The machine learns the judgment of the user and searches again for images according to the user query. Go to Step 2.
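
The sketch below expresses this loop in Python; `search`, `get_user_labels` and `update_model` are hypothetical application-specific callbacks supplied by the caller, not part of any particular library.

```python
# Sketch of the relevance-feedback loop above; the three callbacks are
# hypothetical placeholders supplied by the application.
def relevance_feedback(query, search, get_user_labels, update_model, rounds=3):
    model = None
    results = search(query, model)            # Step 1: early retrieval results
    for _ in range(rounds):
        labels = get_user_labels(results)     # Step 2: relevant / irrelevant
        model = update_model(model, labels)   # Step 3: learn the judgment...
        results = search(query, model)        # ...and search again
    return results
```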
Gaussian Mixture Models:
Gaussian mixture models are density models which combine a number of component Gaussian functions with different weights to form a multi-modal density. They are semi-parametric and can be used instead of non-parametric histograms (which can also be used to approximate densities), with high flexibility and precision in modeling the underlying distribution of sub-band coefficients.
Consider N texture classes labeled by n ∈ {1, ..., N}, related to different entities. In order to classify a pixel, the neighborhood of that pixel must be considered. Features can then be computed for S x S sub-image blocks, and classes assigned to these blocks [20]. The set of blocks is represented by B. The neighborhood of a block b is called its patch P(b), defined as the group of blocks in a larger T x T sub-image with b at its centre. D_b denotes the data associated with that block, and v_b ∈ {1, ..., N} is the classification of b. The classification can be done based on the following rule, Equation (1):

v = argmax_n Pr(D_b | v_b = n)        (1)

Thus, all the blocks in P(b) are assigned the class n that maximizes the probability of the data in P(b). This reduces the computation time needed to classify the texture. The data D_b linked with each block is denoted by a vector of features. For each texture class, a probability distribution that represents the feature statistics of a block of that class must be selected. The probability is taken as a convex combination of M Gaussian densities, Equation (2):

Pr(D_b | v_b = n) = Σ_{i=1}^{M} α_{n,i} · N(D_b; μ_{n,i}, Σ_{n,i})        (2)

where N(·; μ_{n,i}, Σ_{n,i}) is a Gaussian of mean μ_{n,i} and covariance Σ_{n,i}; the parameters for a given class n are thus {α_{n,i}, μ_{n,i}, Σ_{n,i}}.
A GMM is the natural model when a texture class contains a number of distinct subclasses. Thus, using a Gaussian mixture model to retrieve the texture properties of the image gives the desired accuracy.
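
A minimal sketch of this classification rule follows, assuming scikit-learn is available and that the block feature vectors have already been extracted elsewhere; the number of mixture components is an illustrative choice.

```python
# Sketch of GMM-based texture classification: one mixture per class is
# fitted to block features, and a block takes the class with the highest
# log-likelihood. Feature extraction is assumed to happen elsewhere.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_models(features_per_class, m_components=3):
    """features_per_class: list of (num_blocks, dim) arrays, one per class."""
    return [GaussianMixture(n_components=m_components).fit(f)
            for f in features_per_class]

def classify_block(models, block_feature_vector):
    x = block_feature_vector.reshape(1, -1)
    # score_samples returns the per-sample log-likelihood under each model
    log_likelihoods = [m.score_samples(x)[0] for m in models]
    return int(np.argmax(log_likelihoods))   # v = argmax_n Pr(D_b | v_b = n)
```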
Semantic template:
This technique is not so widely used. Semantic templates are generated to support high-level image retrieval. A semantic template is usually defined as the "representative" feature of a concept, calculated from a collection of sample images [8].
Wavelet Transform:
Wavelet transforms are based on diminutive waves, called wavelets, of varying frequency and limited duration. The discrete wavelet transform decomposes the image into four parts: a high-frequency part (HH), a high-low frequency part (HL), a low-high frequency part (LH) and a low-frequency part (LL); this constitutes a 1-level image decomposition. Moments of all the frequency parts are then computed, stored and used as features to retrieve the images. Texture entropy, contrast and coarseness are the most used properties. Statistical features of grey levels are one of the efficient ways to classify texture. The Grey Level Co-occurrence Matrix (GLCM) is used to extract second-order statistics from an image; GLCMs have been used very profitably for texture calculations. From the GLCM, all the features are calculated and stored in the database. The use of the GLCM provides good results, but it is in the spatial domain, so it is more error-prone. The CCH (Contrast Context Histogram) is used to find the features of the query image and of the other images stored in the database; the CCH is in the spatial domain and presents a global distribution. MPEG descriptors, such as the Edge Histogram Descriptor, have been used for texture; the edge histogram differentiates edges according to their direction [20].
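
The following sketch combines the two ideas above (moments of one-level DWT sub-bands plus GLCM statistics), assuming PyWavelets and scikit-image are available; the wavelet choice and GLCM settings are illustrative, and recent scikit-image versions spell the functions graycomatrix/graycoprops.

```python
# Sketch of texture features: moments of one-level DWT sub-bands plus
# second-order GLCM statistics (contrast, energy, homogeneity).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_image):
    # One-level 2-D DWT: LL (approximation) and LH, HL, HH (details)
    ll, (lh, hl, hh) = pywt.dwt2(gray_image.astype(float), 'haar')
    moments = [stat(band) for band in (ll, lh, hl, hh)
               for stat in (np.mean, np.std)]

    # GLCM on the 8-bit image: distance 1, horizontal direction
    glcm = graycomatrix(gray_image.astype(np.uint8), distances=[1],
                        angles=[0], levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, prop)[0, 0]
             for prop in ('contrast', 'energy', 'homogeneity')]
    return np.array(moments + stats)
```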
Gabor filter:
Gabor filters are widely used for texture analysis because of their similar characteristics to human perception. A two-dimensional Gabor function g(x, y) consists of a sinusoidal plane wave of some frequency and orientation (the carrier), modulated by a two-dimensional translated Gaussian envelope. Gabor filters have one mother filter from which the other filter banks are generated; their features are calculated and stored in the database. The structure of the different types of edges is shown in Fig. 2 [20].
Fig. 2. Different types of edges
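
A small Gabor filter bank might be sketched as follows, assuming scikit-image is available; the frequencies and the four orientations are illustrative choices.

```python
# Sketch of a Gabor filter bank: mean and standard deviation of the
# response magnitude at a few frequencies/orientations form the feature.
import numpy as np
from skimage.filters import gabor

def gabor_features(gray_image, frequencies=(0.1, 0.2, 0.3)):
    feats = []
    for freq in frequencies:
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(gray_image, frequency=freq, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.array(feats)
```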
Support Vector Machine
A support vector machine is a supervised learning technique that analyzes data and identifies patterns used for classification. It takes a set of inputs, reads them and produces a desired output for each input [21]; such a process is known as classification (when the output is continuous, regression is performed). For constructing maximum-margin separating hyper-planes, an SVM maps the input vectors to a higher-dimensional feature space. The feature space refers to an input space reserved for measuring similarity with the help of a kernel function; it is a high-dimensional space where linear separation becomes much easier than in the input space [22]. Here, raw data is transformed into fixed-length sample vectors. Two terms are used in the feature space: feature values and feature vectors. The features of an image are called feature values, and these feature values presented to the machine in a vector are known as a feature vector. The kernel function used in the kernel method performs operations such as classification and clustering upon different categories of data, like text documents, sequences, vectors, groups of points, images, graphs, etc.; it maps the input data into a higher-dimensional feature space, because there the data can be easily separated or better structured [23]. Some points in the feature space, separated by some distance from the decision surface, are called support vectors; they demonstrate the location of the separator. The distance from the decision surface to the closest data point determines the margin of the classifier.
Fig. 3. Linear separating hyper-planes for two class separation
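
In a CBIR setting, the classifier is typically retrained on user-labeled images and used to re-rank the database; a minimal sketch with scikit-learn follows, assuming the image feature vectors are precomputed elsewhere.

```python
# Sketch of SVM-based re-ranking for relevance feedback: user-labeled
# relevant/irrelevant feature vectors train the classifier, which then
# scores every database image. Features are assumed precomputed.
import numpy as np
from sklearn.svm import SVC

def rerank(database_features, relevant, irrelevant):
    X = np.vstack([relevant, irrelevant])
    y = np.r_[np.ones(len(relevant)), np.zeros(len(irrelevant))]
    clf = SVC(kernel='rbf', probability=True).fit(X, y)
    scores = clf.predict_proba(database_features)[:, 1]  # P(relevant)
    return np.argsort(-scores)        # indices, most relevant first
```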
Color Histogram:
The color histogram is a standard representation of color characteristics in CBIR systems. It is very efficient in describing both the local and global features of colors. It computes the chromatic information of an image and is invariant to translation and rotation about the view axis. When histograms are computed over a large-scale image database, their efficiency is not satisfactory; to overcome this, the joint histogram technique was introduced. Color histograms are a fundamental technique for retrieving images and are extensively used in CBIR systems. The color space is segmented, and for every segment the pixels of the colors within its bandwidth are counted, which gives the relative frequencies of the counted colors. We use the RGB color space for the histograms; only minor differences have been observed with other color spaces. The color histogram H(M) is a discrete probability function of the image colors, used to determine the joint probability of the intensities of the three color channels. More formally, the color histogram is defined as

h_{a,b,c} = N · Prob(a, b, c),

where a, b, c represent the three color channels (RGB), and

H(M) = [h1, h2, ..., hn], with hk = nk / N for k = 1, 2, ..., n,

where N is the number of pixels of image M and nk is the number of pixels with value k.
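
A minimal sketch of the joint RGB histogram defined above follows; the choice of 8 bins per channel is illustrative.

```python
# Sketch of the normalized joint RGB histogram: counts per color bin
# divided by the number of pixels, i.e. h_k = n_k / N.
import numpy as np

def rgb_histogram(image, bins=8):
    """image: (H, W, 3) uint8 array; returns a flattened bins**3 vector."""
    pixels = image.reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / pixels.shape[0]
```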

2D Dual-Tree Discrete Wavelet Transform:
The dual-tree DWT (DDWT) was developed to overcome two main drawbacks of the DWT: shift variance and poor directional selectivity [24]. With carefully designed filter banks, the DDWT mainly offers the following advantages: approximate shift invariance, directional selectivity, restricted redundancy and computational efficiency analogous to the DWT. Either the real part or the imaginary part of the DDWT [24] yields perfect reconstruction and can thus be employed as a stand-alone transform. We use the magnitudes of the sub-bands to determine the feature vector. The execution of the DDWT is very simple: an input image is decomposed by two sets of filter banks, {h0, h1} and {g0, g1}, separately, filtering the image horizontally and then vertically just as the conventional 2D DWT does. Eight sub-bands are then acquired: LLa, LHa, HLa, HHa and LLb, LHb, HLb, HHb. Each high-pass sub-band from one filter bank is combined with the corresponding sub-band from the other filter bank by uncomplicated linear operations: averaging or differencing. The size of every sub-band is the same as that of the 2D DWT at the same level, but there are six high-pass sub-bands instead of three at each level. The two low-pass sub-bands, LLb and LLa, are recursively decomposed up to a preferred level within each branch. The basis functions of the 2D DDWT and 2D DWT are shown in Fig. 4(a) and Fig. 4(b) respectively. Each DDWT basis function is oriented at a definite direction (±75°, ±15° or ±45°); conversely, the basis function of the HH sub-band of the 2D DWT mixes the +45° and −45° directions together.
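A sketch of magnitude-based sub-band feature extraction, assuming the PyWavelets package; a full dual-tree DDWT would run two such filter-bank branches and average/difference the corresponding sub-bands as described above, but a single standard 2D DWT branch suffices to illustrate the feature vector:

import numpy as np
import pywt

def subband_features(image, wavelet="db4", levels=3):
    features = []
    ll = image.astype(float)
    for _ in range(levels):
        # One decomposition level: low-pass LL plus three high-pass bands.
        ll, (lh, hl, hh) = pywt.dwt2(ll, wavelet)
        # Mean absolute magnitude of each high-pass sub-band.
        features += [np.mean(np.abs(b)) for b in (lh, hl, hh)]
    return np.array(features)

img = np.random.rand(128, 128)  # toy image
print(subband_features(img))    # 9 values: 3 levels x 3 sub-bands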



Fig. 4. 2-D Discrete Wavelet Transform sub-bands
CONCLUSION AND FUTURE WORK
Past research in content-based image retrieval (CBIR) has emphasised image processing, low-level feature extraction and related topics. Extensive experiments on CBIR systems demonstrate that low-level image features cannot always describe the high-level semantic concepts in the user's mind. It is believed that CBIR systems should provide maximum support in bridging the semantic gap between low-level visual features and the richness of human semantics. In this paper, the literature on different content-based retrieval methods has been discussed, including SVM-based retrieval, SVM with relevance feedback and DWT-based methods. Some of these methods are efficient at shortening the semantic gap between images while others are less so; future work therefore needs to develop techniques that reduce the semantic gap more efficiently and effectively and increase the information gain.

REFERENCES:
[1] H. Mohamadi, A. Shahbahrami, J. Akbari, "Image retrieval using the combination of text-based and content-based algorithms", Journal of AI and Data Mining, published online 20 February 2013.
[2] Pabboju, S. and Gopal, R. (2009). "A novel approach for content-based image global and region indexing and retrieval system using features", International Journal of Computer Science and Network Security, 9(2), 15-21.
[3] Li, X., Shou, L., Chen, G., Hu, T. and Dong, J. (2008). "Modelling image data for effective indexing and retrieval in large general image database", IEEE Transactions on Knowledge and Data Engineering, 20(11), 1566-1580.
[4] Demerdash, O., Kosseim, L. and Bergler, S. (2008). "CLaC at ImageCLEFphoto 2008", ImageCLEF Working Notes.

[5] K. C. Sia and Irwin King, "Relevance feedback based on parameter estimation of target distribution", IEEE International Joint Conference on Neural Networks, pages 1974-1979, 2002.
[6] Simon Tong and Edward Chang, "Support vector machine active learning for image retrieval", in Proceedings of the ninth ACM international conference on Multimedia, pages 107-118, 2001.
[7] M. E. J. Wood, N. W. Campbell, and B. T. Thomas, "Iterative refinement by relevance feedback in content-based digital image retrieval", in ACM Multimedia 98, pages 13-20, ACM, 1998.
[8] Jisha K. P., Thusnavis Bella Mary I., Dr. A. Vasuki, "An image retrieval technique based on texture features using semantic properties", International Conference on Signal Processing, Image Processing and Pattern Recognition [ICSIPR], 2013.
[9] Swati Agarwal, A. K. Verma, Preetvanti Singh, "Content based image retrieval using discrete wavelet transform and edge histogram descriptor", International Conference on Information Systems and Computer Networks, proceedings of IEEE Xplore, 2013.
[10] Xiang-Yang Wang, Hong-Ying Yang, Dong-Ming Li, "A new content-based image retrieval technique using color and texture information", Computers & Electrical Engineering, Volume 39, Issue 3, April 2013, pages 746-761.
[11] S. Manoharan, S. Sathappan, "A novel approach for content based image retrieval using hybrid filter techniques", 8th International Conference on Computer Science & Education (ICCSE 2013), April 26-28, 2013, Colombo, Sri Lanka.
[12] Heng Chen, Zhicheng Zhao, "An effective relevance feedback algorithm for image retrieval", 978-1-4244-6853-9/10, IEEE, 2010.
[13] Monika Daga, Kamlesh Lakhwani, "A novel content based image retrieval implemented by NSA of AIS", International Journal of Scientific & Technology Research, Volume 2, Issue 7, July 2013, ISSN 2277-8616.
[14] Patheja P. S., Waoo Akhilesh A. and Maurya Jay Prakash, "An enhanced approach for content based image retrieval", International Science Congress Association, Research Journal of Recent Sciences, ISSN 2277-2502, Vol. 1 (ISC-2011), 415-418, 2012.
[15] S. Nandagopalan, Dr. B. S. Adiga, and N. Deepak, "A universal model for content-based image retrieval", World Academy of Science, Engineering and Technology, Vol. 2, 2008-10-29.
[16] Weiwen Zou, Guocan Feng, "ROI image retrieval based on the spatial structure of objects", Mathematics and Computational School, Sun Yat-sen University, Guangzhou, China, 510275, paper 05170290.
[17] H. Yamamoto, H. Iwasa, N. Yokoya, and H. Takemura, "Content-based similarity retrieval of images based on spatial color distributions", ICIAP '99, Proceedings of the 10th International Conference on Image Analysis and Processing.
[18] Patil, P. B. and M. B. Kokare, "Relevance feedback in content based image retrieval: A review", J. Appli. Comp. Sci. Math., 10: 41-47.
[19] Shanmugapriya, N. and R. Nallusamy, "A new content based image retrieval system using GMM and relevance feedback", Journal of Computer Science, 10(2): 330-340, 2014, ISSN 1549-3636.

[20] Mit Patel, Keyur Brahmbhatt, Kanu Patel, "Feature based image retrieval based on clustering and non-clustering techniques using low level image features", International Journal of Advance Engineering and Research Development (IJAERD), Volume 1, Issue 3, April 2014, e-ISSN 2348-4470, print-ISSN 2348-6406.
[21] Sandeep Kumar, Zeeshan Khan, Anurag Jain, "A review of content based image classification using machine learning approach", International Journal of Advanced Computer Research (ISSN (print): 2249-7277, ISSN (online): 2277-7970), Volume 2, Number 3, Issue 5, September 2012.
[22] T. Jyothirmayi, Suresh Reddy, "An algorithm for better decision tree", (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 09, 2010, 2827-2830.
[23] Sunkari Madhu, "Content based image retrieval: A quantitative comparison between query by color and query by texture", Journal of Industrial and Intelligent Information, Vol. 2, No. 2, June 2014.
[24] N. S. T. Sai, R. C. Patil, "Image retrieval using 2D dual-tree discrete wavelet transform", International Journal of Computer Applications (0975-8887), Volume 14, No. 6, February 2011.






















Evaluating the Efficiency of Bilateral Filter
Harsimran Kaur¹, Neetu Gupta²
¹Research Scholar (M.Tech), ECE Deptt, GIMET
²Asst. Prof, ECE Deptt, GIMET
E-Mail: er.harsimrankaur@gmail.com

Abstract- Bilateral filtering is a simple, non-iterative scheme for texture removal and edge-preserving, noise-reducing smoothing. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels, with weights based on a Gaussian distribution. Thus noise is averaged out while signal strength is preserved. The performance parameters of the bilateral filter have been evaluated. The design and implementation are done in MATLAB using the Image Processing Toolbox. The comparison has shown that the bilateral filter is quite effective for random and Gaussian noise.
Keywords- Filtering, noise, Gaussian noise, texture, GUI, artifact, compression
INTRODUCTION
A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter [1]. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels; this weight can be based on a Gaussian distribution. Sharp edges are preserved by systematically looping through each pixel and adjusting the weights of the adjacent pixels accordingly [2]. The bilateral filter is defined as

I_filtered(x) = (1/W_p) Σ_{x_i ∈ Ω} I(x_i) f_r(‖I(x_i) − I(x)‖) g_s(‖x_i − x‖),

where:
- I_filtered is the filtered image;
- I is the original input image to be filtered;
- x are the coordinates of the current pixel to be filtered;
- Ω is the window centred in x;
- f_r is the range kernel for smoothing differences in intensities (this function can be a Gaussian function);
- g_s is the spatial kernel for smoothing differences in coordinates (this function can be a Gaussian function);
- W_p = Σ_{x_i ∈ Ω} f_r(‖I(x_i) − I(x)‖) g_s(‖x_i − x‖) is the normalization term.

Gaussian low-pass filtering computes a weighted average of pixel values in the neighbourhood, in which the weights decrease with distance from the neighbourhood centre. However, such averaging consequently blurs the image. How can we prevent averaging across edges, while still averaging within smooth regions? Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing. The basic idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain [7], [10].
THE GAUSSIAN CASE
A simple and important case of bilateral filtering is shift-invariant Gaussian filtering, in which both the closeness function c and the similarity function s are Gaussian functions of the Euclidean distance between their arguments [4]. More specifically, c is symmetric:

c(ξ, x) = exp( −(1/2) (d(ξ, x)/σ_d)² ),

where d(ξ, x) = ‖ξ − x‖ is the Euclidean distance between ξ and x.
METHODOLOGY USED
The following flowchart gives the procedure of the bilateral filtering algorithm with an image f(x,y).


Fig.1 Bilateral filter algorithm
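A direct NumPy sketch of the flowchart in Fig. 1, using its Gaussian distance weights G and intensity weights H (parameter names follow the flowchart; a grey-scale image in [0, 1] is assumed):

import numpy as np

def bilateral_filter(A, w=2, sigma_d=1.0, sigma_r=0.1):
    # Spatial (closeness) kernel over the (2w+1) x (2w+1) neighbourhood:
    # G = exp(-(X^2 + Y^2) / (2 * sigma_d^2)), as in the flowchart.
    X, Y = np.meshgrid(np.arange(-w, w + 1), np.arange(-w, w + 1))
    G = np.exp(-(X**2 + Y**2) / (2 * sigma_d**2))
    B = np.zeros_like(A)
    rows, cols = A.shape
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - w, 0), min(i + w, rows - 1)
            j0, j1 = max(j - w, 0), min(j + w, cols - 1)
            I = A[i0:i1 + 1, j0:j1 + 1]
            # Range (intensity) kernel: H = exp(-(I - A(i,j))^2 / (2 * sigma_r^2)).
            H = np.exp(-(I - A[i, j])**2 / (2 * sigma_r**2))
            # Combined weights F = H * G over the valid window, then the
            # pixel is replaced by the normalised weighted average.
            F = H * G[i0 - i + w:i1 - i + w + 1, j0 - j + w:j1 - j + w + 1]
            B[i, j] = np.sum(F * I) / np.sum(F)
    return B

noisy = np.clip(0.5 + 0.1 * np.random.randn(32, 32), 0, 1)  # toy noisy image
smoothed = bilateral_filter(noisy)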
TEST BED
The following table shows the experimental images used in the project, with their size and format.
TABLE X
IMAGES USED IN SIMULATION
S.No.  Title of the image  Size     Format
1.     Colg1               2.05 MB  JPG
2.     2                   83.3 KB  JPG
3.     Sim1                2.14 MB  JPG
4.     Mandrill            31.6 KB  JPG
5.     fruits              988 KB   JPG

PERFORMANCE PARAMETERS
A good objective quality measure should reflect the distortion of the image well, whether due to, for example, blurring, noise, compression or sensor inadequacy. Such measures could be instrumental in predicting the performance of vision-based algorithms for tasks such as feature extraction, image-based measurement, detection, tracking and segmentation. Quantitative measures for image
quality can be classified according to two criteria:
1. number of images used in the measurement;
2. nature or type of measurement.
According to the first criterion, the measures are divided into two classes: univariate and bivariate. A univariate measure uses a
single image, whereas a bivariate measure is a comparison between two images. A number of measures have been defined to
determine the closeness of the degraded and original image fields. On the basis of this a study is done on the following measures
and analysed.
1. Pixel difference-based measures (e.g. the Mean Square Error and Maximum Difference).
2. Correlation-based measures: a variant of correlation-based measures can be obtained by considering the absolute mean and variance statistics (e.g. Structural Correlation/Content, Normalized Cross-Correlation) [1], [5].

A. Mean Square Error
In the image coding and computer vision literature, the most frequently used measures are deviations between the original and coded images, of which the mean square error (MSE) and the signal-to-noise ratio (SNR) are the most common. These metrics owe their widespread popularity to their mathematical tractability and to the fact that it is often straightforward to design systems that minimize the MSE; however, the MSE cannot capture artifacts like blur or blocking artifacts. The effectiveness of a coder is optimised by having the minimum MSE at a particular compression, and the MSE is computed using the following equation:

MSE = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [ f(i,j) − f'(i,j) ]²,

where f(i,j) is the original image, f'(i,j) is the reconstructed image and M, N are the image dimensions.


B. Peak Signal-to-Noise Ratio
Larger SNR and PSNR indicate a smaller difference between the original (without noise) and the reconstructed image. The main advantage of this measure is its ease of computation, but it does not reflect perceptual quality. An important property of the PSNR is that a slight spatial shift of an image can cause a large numerical distortion but no visual distortion; conversely, a small average distortion can result in a damaging visual artifact if all the error is concentrated in a small important region [12]. This metric neglects global and composite errors. The PSNR is calculated using the following equation:

PSNR = 10 log10( 255² / MSE ) dB.
C. Average Difference
A lower value of the Average Difference (AD) indicates a cleaner image, as more noise has been reduced; it is computed using the following equation:

AD = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [ f(i,j) − f'(i,j) ].
D. Maximum Difference
The Maximum Difference (MD) is calculated using the equation below. It correlates well with the mean opinion score (MOS) for all tested compression techniques, so it is preferred as a very simple reference measure of compressed picture quality across different compression systems. A large value of MD means that the image is of poor quality.

MD = max | f(i,j) − f'(i,j) |.
SIMULATION RESULTS
The bilateral filtering algorithm is applied to the experimental images, which are displayed in a GUI (Graphical User Interface). The GUI has a LOAD button to browse for an image, an APPLY button to apply the filtering action and a CLOSE button to close the GUI window. The following snapshots show the simulation results.

Fig. 2 Applying filtering algorithm

Fig. 3 Filtered image after applying bilateral filtering

The efficiency parameters, i.e. the values obtained from the bilateral filtering action applied to image no. 1 from Table X, are:


Fig. 4 Performance parameters
On comparison with the median filter, the values obtained for the same image are:

Fig. 5 (contd.) Performance parameters
TABLE XII
EFFICIENCY PARAMETERS
S.No.  Parameter           Bilateral filter  Median filter
1.     Mean square error   0.0050            15296
2.     Peak SNR            71.1815           6.2849
3.     Average difference  -0.0046           115.1108
4.     Maximum difference  0.4749            254.1485

Evaluation: The values obtained for the different parameters are listed in the tables above. The study shows that the bilateral filter has greater efficiency. For example, for image no. 1 the peak SNR is higher for the bilateral filter than for the median filter [9], [11]; similarly, the other parameter values depict the higher efficiency of the bilateral filter.
CONCLUSION
This work has presented a detailed study of the bilateral filtering technique. The survey has shown that the bilateral filter is based on the concept of the Gaussian distribution. The bilateral filter is a non-linear filter that reduces the noise

in such a way that it preserves the edges. The survey has shown that the bilateral filter is quite effective for random noise, which is why it is preferable over other filters. The design and implementation are done in MATLAB using the Image Processing Toolbox. The comparison has shown that the bilateral filter is quite effective for random and Gaussian noise.

FUTURE SCOPE
- To enhance the visibility of digital images.
- To reduce the random noise from images.
- To remove fog or haze from images.
- To filter in such a way that the edges are preserved.
As the bilateral filter is unable to remove salt-and-pepper noise, in the near future we will extend this research work by integrating the bilateral filter with the median filter, since the median filter can remove salt-and-pepper noise.

REFERENCES:
[1] Zhengguo Li, Jinghong Zhen, Zijian Zhu, Shiqian Wu, Susanto Rahardja, "A bilateral filter in gradient domain", Signal Processing Department, Institute for Infocomm Research, 1 Fusionopolis Way, Singapore, ICASSP 2012, IEEE, 2012, 1113.
[2] Bahadir K. Gunturk, "Fast bilateral filter with arbitrary range and domain kernels", IEEE (Senior Member).
[3] Chao-Chung Cheng, Chung-Te Li, Po-Sen Huang, Tsung-Kai Lin, Yi-Min Tsai, and Liang-Gee Chen, "A block-based 2D-to-3D conversion system with bilateral filter", Graduate Institute of Electronics Engineering, National Taiwan University, Taiwan, R.O.C.
[4] Chih-Hsing Lin, Jia-Shiuan Tsai, and Ching-Te Chiu, "Switching bilateral filter with a texture/noise detector for universal noise removal", IEEE Transactions on Image Processing, Vol. 19, No. 9, September 2010, 2307.
[5] Chao Zuo, Qian Chen, Guohua Gu, and Weixian Qian, "New temporal high-pass filter non-uniformity correction based on bilateral filter", 440 Lab, JGMT, EEOT, Nanjing University of Science and Technology, Nanjing 210094, China (received September 26, 2010; accepted December 27, 2010).











Investigation of SMAW Joints By Varying Concentration of Rutile (TiO2) in Electrode Flux
Chirag Sharma¹, Amit Sharma², Pushpinder Sharma²
¹Scholar, Mechanical Department, Geeta Engineering College, Panipat, Haryana, India
²Assistant Professor, Mechanical Department, Geeta Engineering College, Panipat, Haryana, India
E-Mail: chirag485@gmail.com

Abstract — Our aim is to investigate SMAW joints by varying the concentration of rutile (TiO2) in the flux composition and studying its effect on the various characteristics of metal-cored coated electrodes, for the purpose of developing more efficient and better rutile electrodes for structural mild steel. In this work, five rutile metal-cored coated electrodes were prepared by increasing rutile (TiO2) at the expense of cellulose and Si-bearing components like mica and calcite in the fluxes. Various mechanical properties like micro hardness, tensile properties and impact toughness were measured, and metallographic studies were undertaken. Qualitative measurements of operational properties like porosity, slag detachability and arc stability were also carried out.
Keywords — Rutile (TiO2), composition of flux in various electrodes, hardness test, tensile test, impact test, slag detachability, porosity, microstructure of weld bead.
INTRODUCTION
Welding is a fabrication process that joins materials permanently, usually similar or dissimilar metals, by the use of heat causing fusion, with or without the application of pressure. SMAW is the arc welding process known even to a layman and can be considered a roadside welding process. When an arc is struck between an electrode and the workpiece, the electrode core wire and its coating melt; the latter provides a gas shield to protect the molten weld pool and the tip of the electrode from the ill effects of the atmospheric gases. The diameter of electrodes usually varies between 3.15 and 12.50 mm, and the length between 350 and 450 mm.
EXPERIMENTAL METHOD
The accomplishment of the experiment included the production of electrodes, extrusion of electrodes, micro hardness testing, tensile strength testing, impact strength testing and microstructure testing of the weld beads produced by the five newly developed electrodes.
Process of Electrode Production
The ingredients in dry form were weighed and mixed appropriately for around 10-15 minutes to obtain a homogeneous dry flux mixture. A liquid silicate binder was then added to the mixed dry flux, followed by mixing for a further 10 minutes; this process is also known as wet mixing. The binder consists of a complex mixture of different alkali silicates with a wide range of viscosities. The flux coating ingredients commonly used are rutile, aluminite, CaCO3·MgCO3, cellulose, ferromanganese, quartz, calcite, china clay, mica, iron powder, talcum powder and binding agents (sodium silicate), etc.
The flux was then extruded onto a 3.15 mm diameter mild steel core wire, with a flux coating thickness of 0.4-0.55 mm; the final diameter after coating is approximately 3.7 mm, giving a coating factor of 1.18, where the coating factor represents the ratio of the final electrode diameter to the core wire diameter.
The electrodes were baked after the extrusion. The baking cycle consisted of 90 minutes at 140-150 °C. These electrodes were tested by depositing weld beads on plates, and finally five flux coating compositions were obtained by varying the rutile (TiO2) content from 27 to 42 wt.% at the expense of calcium fluoride, cellulose, calcite and Si-bearing raw materials in the dry mix.


Extrusion of Electrodes
The final five electrodes were extruded from an injection moulding machine. All the electrodes were produced with the same wire and different powder raw-material batches.
The coating dry-mix composition with the corresponding weight percentage of the components is shown in the table below (coating composition, wt %):

Constituents     27% TiO2  31% TiO2  34.5% TiO2  39% TiO2  42% TiO2
Aluminite        16.47     16.47     16.42       16.39     16.32
CaCO3·MgCO3      7.6       7.5       7.3         7         7
Cellulose        5.43      5.2       4.6         3.9       3.6
Ferromanganese   6         5.5       5.5         5.5       5.5
Quartz           6.52      6.2       5.9         5.2       4.8
Calcite          9.8       8.3       7.6         7.1       6.3
China Clay       9.8       8.8       7.8         6.6       5.8
Mica             8.7       8.4       7.7         6.7       6.12
Talcum Powder    1.6       1.6       1.6         1.6       1.6
Iron Powder      1.6       1.1       1.1         1.1       1.1


RESULTS AND DISCUSSION
Slag Properties
The slag produced by all of the flux coatings was of good quality, i.e. it covered the bead completely in every case. The bead was in good shape and clean after the removal of the slag. The slag produced by the 31% TiO2 flux was observed to interfere with the weld pool under both current conditions, i.e. DCEP and DCEN. On the other hand, the 27%, 34.5%, 39% and 42% TiO2 slags did not interfere with the weld pool, and the weld beads obtained with these electrodes were smooth and clean.
Spatter
The spatter produced in DCEP welding was observed to be greater than in DCEN welding. Further, it was observed that in DCEP the 27%, 31% and 34.5% TiO2 electrodes produced more spatter than the other electrodes. In general, the spatters were easy to remove and of medium size.

Fig. Weld beads obtained on welding with DCEN and DCEP

Operational Properties
In general, the arc stability in DCEN welding for all types of electrodes was better than in DCEP. The slag produced by the 27%, 31%, 34.5% and 39% TiO2 electrodes was thicker than that of the 42% TiO2 electrodes.
The slag detachability was good in DCEN welding for all electrodes; the slag was more difficult to detach in DCEP, especially with the 39% and 42% TiO2 electrodes.
The slag for all electrodes presented porosity, but it was more prominent in DCEP, especially with the 42% TiO2 electrodes.
Observations of porosity, arc stability and slag detachability during welding:

Coating     Current Type  Arc Stability  Slag Detachability  Porosity
27% TiO2    DCEP          Good           Good                Present
            DCEN          Good           Good                Present
31% TiO2    DCEP          Medium         Medium              Present
            DCEN          Good           Good                Present
34.5% TiO2  DCEP          Medium         Medium              Present
            DCEN          Good           Good                Present
39% TiO2    DCEP          Good           Medium              Present
            DCEN          Good           Good                Highly Present
42% TiO2    DCEP          Excellent      Good                Present
            DCEN          Excellent      Good                Highly Present

Micro hardness Measurements
The micro hardness was measured at five points on each side of the weld bead, including the weld bead itself, on each specimen.
Micro Hardness Test Results (MVH):

Fig. Micro hardness (MVH) at the weld bead vs. %age of TiO2 (base metal, 27%, 31%, 34.5%, 39%, 42%) for DCEP and DCEN.


Micro hardness variation along the test coupons, DCEP (MVH vs. distance from the weld bead):

%age TiO2   -12    -9     -6     -3     0      3      6      9      12
27%         26.4   27.26  28.3   30.06  29.4   30.05  29.2   27.58  26.44
31%         26.28  27.24  27.3   29.76  29.58  29.78  27.6   27.6   26.32
34.5%       26.34  26.94  28.55  29.79  29.56  29.79  28.5   27.64  26.36
39%         26.2   27.48  30.36  32.46  32.37  32.45  30.24  27.26  26.32
42%         26.92  28.26  32.94  33.6   32.56  33.65  33.08  27.96  26.88


Micro hardness variation along the test coupons, DCEN (MVH vs. distance from the weld bead):

%age TiO2   -12    -9     -6     -3     0      3      6      9      12
27%         26.36  28.82  30.7   32.2   33.2   31.9   29.2   28.64  26.18
31%         26.28  28.24  29.3   31.5   32.6   30.78  28.6   27.96  26.22
34.5%       26.24  28.5   30.15  31.79  33.7   31.83  31.56  28.34  26.31
39%         26.16  28.48  30.4   31.6   33.7   31.45  30.24  28.14  26.26
42%         26.9   29.96  32.54  33.65  34.6   33.39  33.08  33.06  26.8

Tensile Properties Test Results
The results of the tensile property measurements are recorded in the tables below for DCEP and DCEN currents respectively. The elongation decreased with the decrease in tensile strength.

Histogram for Tensile Properties for DCEP

Histogram for Tensile Properties for DCEN
Charpy V Notch Impact Test Results
Charpy V notch test samples were prepared for the impact strength measurements. These test coupons were dipped in liquid nitrogen to drop their temperature from room temperature to -30 °C, -20 °C, -10 °C and 0 °C by varying the dipping time of the test coupons in the liquid nitrogen.
The results showed that the toughness of the weld coupon increases as the percentage of TiO2 is increased, for both types of current conditions. The variations of impact energy with temperature for DCEP and DCEN are shown in the figures below. Toughness is related to the hardness and tensile properties of the material: the toughness of the weld metal increased with a reduction in the tensile strength of the weld coupon, and an increment in toughness is also observable with an increment in the micro hardness of the weld coupon.
Tensile properties vs. %age of TiO2 (DCEN):

%age of TiO2   Elongation (mm)  Load (kN)  Tensile Strength (N/mm2)
Base Metal     24               98         531
27             19               116        423
31             18               144        429
34.5           20               134        439
39             15               128        393
42             14               126        382
Tensile properties vs. %age of TiO2 (DCEP):

%age of TiO2   Elongation (mm)  Load (kN)  Tensile Strength (N/mm2)
Base Metal     22               92         528
27             16               106        323
31             21               123        426
34.5           18               131        431
39             15               133        373
42             19               143        352


Energy Vs Temperature graph of impact Test Results (DCEP)

Energy Vs Temperature graph of impact Test Results (DCEN)
Microstructure Test Results
The microstructure of the base metal shows ferrite grains with small quantities of pearlite at the grain boundaries.
In the electrodes having 27% TiO2 and 31% TiO2, a small quantity of grain-boundary ferrite is observed, whereas acicular ferrite is prominently present. Pearlite is also observable, with a minute quantity of martensite, which results in a small increment in micro hardness.
For the electrodes having 34.5%, 39% and 42% TiO2, the acicular ferrite is much less than in the 27% and 31% TiO2 microstructures. Along with pearlite, aggregates of cementite are observable, and precipitates of aligned martensite are also noticeable. The presence of cementite and aligned martensite results in increased micro hardness. Such a microstructure renders the weld metal with low ductility and increased toughness. The presence of martensite is more prominent in DCEN welding than in DCEP.



CONCLUSION
The penetration and bead width increased with the increase in TiO2 percentage, in general for all types of electrodes and for DCEN current conditions.
The bead geometry produced in DCEN welding was better than that produced in DCEP.
The arc stability was observed to be good for the 42% TiO2 electrodes. The smoke level appeared to be reduced at higher percentages of TiO2. Slag detachability was generally good for the 34.5%, 39% and 42% TiO2 electrodes.
An overall increase in the micro hardness at the weld bead was observed with the increase in the amount of TiO2; the micro hardness is observed to increase due to the increase in percentage and migration of carbon and silicon.
An overall decrease in the tensile strength was observed with the increase in TiO2; the increment in silicon and carbon resulted in a reduction in the tensile strength of the weld metal.





An Analytical Study on Integration of Multibiometric Traits at Matching Score Level using Transformation Techniques
Santosh Kumar¹, Vikas Kumar¹, Arjun Singh¹
¹Asst. Professor, ECE Deptt, Invertis University, Bareilly
E-Mail: Santosh.v@invertis.org
Abstract — Biometrics is one of the emerging technologies exploited for identifying a person on the basis of physiological and behavioural characteristics. However, unimodal biometric systems face the problems of lack of individuality, spoof attacks, non-universality, limited degrees of freedom, etc., which make these systems less precise and more error-prone. In order to overcome these problems, multibiometrics has become the favoured choice for the verification of an individual, to declare him an imposter or genuine. The fusion or integration of multiple biometric traits can be done at any one of the four modules of a general multibiometric system; achieving fusion at the matching score level is preferable due to the sufficient amount of information available there. In this paper we present a comparative study of normalization methodologies, which are basically used to convert the different feature vectors of individual traits into a common domain in order to combine them into a single feature vector.
Keywords — Biometric, Multibiometric, Normalization, Unimodal, Unsupervised Learning Rules, Imposter, Genuine.
1. INTRODUCTION
A biometric system is fundamentally a pattern recognition system. "Biometric" is derived from Greek, in which "bio" stands for life and "metric" for measurement, and biometrics has long been used in the science that studies living organisms for data analysis problems [1]. Different kinds of traits are used for the authentication of individuality, such as fingerprint recognition, hand geometry, facial recognition, iris recognition, keystroke recognition, signature recognition, gait recognition, DNA (deoxyribonucleic acid), voice recognition and palm print [2]. In the conventional approach to security, the many password-cracking techniques in use today and the complexity required of passwords make such systems a less preferable choice. Further, it can be easy for an application programmer to crack someone's password, and if any identity proof (token card or password) that an individual carries is lost, it might be used by an imposter and create problems. To overcome all these limitations, we use biometric techniques. In a unimodal system, only one trait is used for identification [3]. In fingerprint recognition, the user places his/her finger on the fingerprint sensor and is identified as genuine or as an imposter; however, if the residue of a previous user remains present on the sensor, it may produce false results, and the true identity of the individual will not be measured. In addition, facial recognition is highly dependent on the quality of the image, which is generally degraded when a low-quality camera is used or by environmental factors; many a time, facial recognition systems fail in the verification of identical twins, or of father and son. In multimodal biometric systems, more than one physiological or behavioural characteristic is used for the enrollment, verification or identification process, or for the authentication of individuality. Multimodal biometric systems have some unique advantages over unimodal biometrics in terms of accuracy, enrolment rates and susceptibility to spoofing attacks [4]. In order to design a multibiometric system, the features of different biometric modalities are integrated at different modules of a general multibiometric system.
In a multimodal biometric system the information can be combined at any one of four levels, namely the sensor level, feature extraction level, matching score level and decision level, and fusion can occur at any of these [5]. However, it is beneficial to fuse the information at the level where the maximum amount of information can be accessed with ease; due to the presence of a sufficient amount of information at the matching score level, it is best suited for the fusion purpose. In this paper we briefly explore the problem of, and better solutions for, choosing sensors and fusion methods, and present a case study describing their impact on biometric systems.

2. NORMALIZATION
Normalization is a procedure intended to coordinate the fields and mutual exclusiveness of the information in a database, in which relations among entities are explicitly defined as accessible dimensions so as to minimize redundancy and dependency. The goal is to organise the data so that additions, deletions and changes of a field can be made in just one table and then propagated through the rest of the database via the defined relationships. The objective of data normalization is thus to reduce and evenly eliminate data redundancy. It is useful for minimizing discontinuities when revising the database structure, for making the data model more informative to users, and for avoiding bias towards any particular form of querying. Here, we use two normalization methods to transform the matching scores obtained from the fingerprint and the face into a common domain [6].

3. QUANTILE NORMALIZATION
Quantile normalization is a technique for making distributions identical in statistical properties [7]. To quantile-normalize a test distribution to a reference distribution of the same length, sort the test distribution and sort the reference distribution; the highest entry in the test distribution then takes the value of the highest entry in the reference distribution, the next highest entry takes the value of the next highest entry in the reference distribution, and so on, until the test distribution is a perturbation of the reference distribution. To quantile-normalize two or more distributions to each other without a reference distribution, sort as before and then set each rank to the average across the distributions: the highest value in all cases becomes the mean of the highest values, the second highest value becomes the mean of the second highest values, and so on. Quantile normalization is frequently used in microarray data analysis. Extending this approach to N dimensions gives a technique for determining a common statistical distribution from multiple biometric modalities in the following steps (a sketch follows the list):
1. A two-dimensional matrix X of matching scores, consisting of N databases of length M each obtained from different identifiers, is available in M x N form.
2. Configure the diagonal unit vector p = (1/√N, ..., 1/√N).
3. Sort each column of the M x N matrix X of matching scores to obtain X_sort.
4. Each row of X_sort is projected onto p (which replaces it by its mean) to get X'_sort.
5. Finally, X'_sort is rearranged into the same order as the original X to obtain X_norm.
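A NumPy sketch of these steps (function and variable names are illustrative):

import numpy as np

def quantile_normalize(X):
    """X: M x N matrix of matching scores, one column per identifier."""
    order = np.argsort(X, axis=0)      # step 3: per-column sorting permutation
    X_sort = np.sort(X, axis=0)
    row_means = X_sort.mean(axis=1)    # step 4: projection onto p = row mean
    X_norm = np.empty_like(X, dtype=float)
    for col in range(X.shape[1]):
        # step 5: put the rank-wise means back in each column's original order
        X_norm[order[:, col], col] = row_means
    return X_norm

scores = np.array([[29., 0.57], [7., 0.56], [8., 0.45]])  # toy 3 x 2 matrix
print(quantile_normalize(scores))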

4. DELTA NORMALIZATION
Delta normalization is a novel approach for converting data into a common domain; it maps the whole statistical distribution into the range (0, 1), i.e. the minimum values approach 0 and the maximum values approach the upper end of the range [8]. This method is both effective and robust in nature, as it does not estimate the statistical distribution and it reduces the influence of outliers. If ψ is the original matching score, then the normalized score ψ' is given by

ψ' = (1/2) · ψ² / (ψ² + δ).

Here, δ is a smoothing constant which removes the infrequent and uncorrelated data from the statistical distribution. Usually the value of δ is taken approximately equal to 100 or more, as this gives better accuracy for higher values of ψ.
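A one-line NumPy sketch of the formula as reconstructed above (the function name and toy scores are illustrative):

import numpy as np

def delta_normalize(scores, delta=100.0):
    # Small scores are pushed towards 0; large scores saturate smoothly,
    # so rare outliers have little influence on the normalized distribution.
    s = np.asarray(scores, dtype=float)
    return 0.5 * s**2 / (s**2 + delta)

print(delta_normalize([4, 29, 175]))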

5. FUSION
Fusion is the method of combining information from various single-modality systems. The process of integrating the information from a number of evidences to build up a multibiometric system is called fusion; the information can be integrated or fused at any level. In this process, information from different domains is transformed into a common domain [6].

5.1. Sum Rule
The sum rule helps in eliminating the problem of ambiguity during assortment of the database. After the normalization of the finger and face data, the scores are summed up to acquire the fused score. Here, the input pattern is assigned to the class c such that

c = argmax_i Σ_{j=1}^{N} P(w_i | x_j).

5.2. Product Rule
The product rule renders more conservative consequences than the sum rule, as it is based on the statistical independence of the feature vectors. The input pattern is assigned to the class c such that

c = argmax_i Π_{j=1}^{N} P(w_i | x_j).

5.3. Min Rule

The min rule of fusion operates by considering the minimum posterior probability accumulated over all classifiers. The stimulus pattern is assigned to the class c such that [9]

c = argmax_i min_j P(w_i | x_j).

5.4. Max Rule
The max rule of fusion operates by considering the maximum posterior probability accumulated over all classifiers. The stimulus pattern is assigned to the class c such that [9]

c = argmax_i max_j P(w_i | x_j).
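A NumPy sketch of the four combination rules, where P[j, i] holds the normalized score P(w_i | x_j) of matcher j for class w_i (the names and toy values are illustrative):

import numpy as np

def fuse(P, rule="sum"):
    """P: (n_matchers, n_classes) matrix of normalized scores."""
    reducers = {"sum": P.sum(axis=0), "product": P.prod(axis=0),
                "min": P.min(axis=0), "max": P.max(axis=0)}
    combined = reducers[rule]              # reduce over the matchers j
    return int(np.argmax(combined)), combined  # class with largest score

P = np.array([[0.2, 0.7, 0.1],    # e.g. fingerprint matcher scores
              [0.3, 0.5, 0.2]])   # e.g. face matcher scores
for rule in ("sum", "product", "min", "max"):
    print(rule, fuse(P, rule))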



6. MATCHING SCORE & DATABASE
To assess the performance of the normalization techniques with the fusion rules, the NIST Biometric Scores Set - Release 1 (BSSR1) biometric database has been utilized. This database has a prominent amount of matching scores for faces and fingerprints, derived particularly for the fusion procedure.

7. FINGERPRINT MATCHING SCORE
Matching scores for the fingerprints of 10 users have been considered for the experimental study.

Table 1. Matching scores of fingerprint of 10 users

Users  A   B   C   D   E    F   G   H   I    J
1      29  4   6   4   4    7   5   6   6    9
2      7   26  12  4   11   9   4   9   6    5
3      8   5   63  6   7    5   9   6   7    8
4      8   5   10  73  9    8   12  6   16   6
5      11  5   12  6   175  6   9   8   8    10
6      8   4   6   3   4    10  6   5   6    3
7      9   3   6   5   5    5   11  5   4    5
8      8   4   10  5   9    10  8   38  8    5
9      6   6   5   7   11   4   11  6   142  6
10     3   5   8   4   14   6   6   10  6    163

8. FACE MATCHING SCORE
Matching scores for the faces of 10 users have been considered for the experimental study.

Table 2. Matching scores of face of 10 users

Users  A    B    C    D    E    F    G    H    I    J
1      .57  .53  .52  .55  .54  .54  .55  .55  .58  .52
2      .56  .78  .51  .51  .51  .52  .51  .54  .56  .52
3      .45  .52  .81  .49  .51  .54  .53  .50  .54  .58
4      .51  .53  .49  .82  .47  .51  .53  .51  .51  .52
5      .50  .55  .54  .50  .59  .54  .54  .52  .52  .51
6      .45  .49  .52  .52  .49  .67  .52  .47  .51  .52
7      .53  .57  .52  .53  .49  .50  .67  .52  .55  .52
8      .54  .54  .48  .53  .57  .49  .52  .77  .49  .51
9      .52  .53  .52  .53  .54  .50  .52  .50  .69  .52
10     .50  .52  .50  .55  .57  .52  .52  .60  .54  .58


9. NORMALIZED MATCHING SCORE
The matching scores considered above have been normalized with the Quantile and delta normalization methods, and the following tables have been evaluated.
Table 3. Normalized matching scores of fingerprint of 10 users through Quantile normalization

Users  A      B       C       D       E       F       G       H       I       J
1      0.937  -0.467  -0.354  -0.467  -0.467  -0.298  -0.411  -0.354  -0.354  -0.411
2      0.263  -0.354  -0.017  -0.467  -0.074  -0.186  -0.467  -0.186  -0.354  -0.13
3      -0.24  -0.411  2.8469  -0.354  -0.298  -0.411  -0.186  -0.354  -0.298  -0.354
4      -0.24  -0.411  -0.13   3.4085  -0.186  -0.242  -0.017  -0.354  0.2072  -0.13
5      -0.07  -0.411  -0.017  -0.354  9.1371  -0.354  -0.186  -0.242  -0.242  -0.074
6      -0.24  -0.467  -0.354  -0.523  -0.467  -0.13   -0.354  -0.411  -0.354  -0.354
7      -0.19  -0.523  -0.354  -0.411  -0.411  -0.411  -0.074  -0.411  -0.467  -0.411
8      -0.24  -0.467  -0.13   -0.411  -0.186  -0.13   -0.242  1.4428  -0.242  0.3757
9      -0.35  -0.354  -0.411  -0.298  -0.074  -0.467  -0.074  -0.354  7.2837  -0.523
10     -0.24  -0.411  -0.242  -0.467  0.0949  -0.354  -0.354  -0.13   -0.354  1.2182
Table 4. Normalized matching scores of face of 10 users through Quantile normalization


Users  A      B       C       D      E       F       G      H      I      J
1      1.202  -0.269  -0.64   0.426  -0.142  -0.004  0.235  0.547  1.438  -0.685
2      0.843  9.3888  -1.007  -1.31  -0.987  -0.637  -1.23  0.136  0.903  -0.885
3      -3.32  -0.812  10.52   -1.96  -0.994  0.104   -0.29  -1.45  0.183  1.6488
4      -1.31  -0.472  -1.967  11.12  -2.655  -1.247  -0.46  -1.29  -1.08  -0.615
5      -1.37  0.2182  -0.036  -1.37  1.7858  0.04    0.173  -0.67  -0.88  -1.25
6      -3.47  -2.045  -0.867  -0.92  -2.111  5.12    -0.63  -2.61  -1.2   -0.931
7      -0.47  1.1465  -0.735  -0.31  -1.983  -1.503  4.936  -0.77  0.284  -0.893
8      0.158  -0.162  -2.274  -0.38  1.0883  -1.752  -0.78  9.14   -1.77  -0.997
9      -0.6   -0.306  -0.93   -0.51  0.0726  -1.511  -0.95  -1.71  6.058  -0.85
10     -1.45  -0.737  -1.42   0.577  1.3527  -0.628  -0.81  2.517  0.142  1.7597
Table 5. Normalized matching scores of fingerprint of 10 users through delta normalization

Users A B C D E F G H I J
1 0.473 0.186 0.257 0.186 0.186 0.287 0.224 0.257 0.257 0.224
2 0.431 0.257 0.384 0.186 0.37 0.334 0.186 0.334 0.257 0.354
3 0.312 0.224 0.494 0.257 0.287 0.224 0.334 0.257 0.287 0.257
4 0.312 0.224 0.354 0.495 0.334 0.312 0.384 0.257 0.424 0.354
5 0.37 0.224 0.384 0.257 0.499 0.257 0.334 0.312 0.312 0.37
6 0.312 0.186 0.257 0.144 0.186 0.354 0.257 0.224 0.257 0.257
7 0.334 0.144 0.257 0.224 0.224 0.224 0.37 0.224 0.186 0.224
8 0.312 0.186 0.354 0.224 0.334 0.354 0.312 0.484 0.312 0.442
9 0.257 0.257 0.224 0.287 0.37 0.186 0.37 0.257 0.499 0.144
10 0.312 0.224 0.312 0.186 0.407 0.257 0.257 0.354 0.257 0.48


Table 6. Normalized matching scores of face of 10 users through delta normalization

Users  A       B       C       D       E       F       G       H       I       J
1      0.0287  0.0269  0.0264  0.0277  0.027   0.0272  0.0275  0.0279  0.029   0.0263
2      0.0283  0.0391  0.0259  0.0255  0.0259  0.0264  0.0256  0.0274  0.0284  0.0261
3      0.023   0.0262  0.0406  0.0247  0.0259  0.0273  0.0268  0.0254  0.0274  0.0293
4      0.0255  0.0266  0.0247  0.0413  0.0238  0.0256  0.0266  0.0256  0.0258  0.0264
5      0.0255  0.0275  0.0272  0.0255  0.0295  0.0273  0.0274  0.0264  0.0261  0.0256
6      0.0228  0.0246  0.0261  0.026   0.0245  0.0337  0.0264  0.0239  0.0257  0.026
7      0.0266  0.0287  0.0263  0.0268  0.0247  0.0253  0.0335  0.0262  0.0276  0.0261
8      0.0274  0.027   0.0243  0.0267  0.0286  0.025   0.0262  0.0388  0.0249  0.0259
9      0.0264  0.0268  0.026   0.0265  0.0273  0.0253  0.026   0.025   0.0349  0.0261
10     0.0254  0.0263  0.0254  0.0279  0.0289  0.0264  0.0262  0.0304  0.0274  0.0294

10. FUSED SCORE

The tables obtained after normalization are fused together to get the fused scores, which are evaluated as follows.
Table 7. Fused scores of 10 users using sum rule fusion through Quantile normalization

Users  A       B       C       D       E       F       G       H       I       J
1      2.139   -0.735  -0.995  -0.041  -0.609  -0.302  -0.176  0.193   1.084   -1.096
2      1.107   9.034   -1.024  -1.777  -1.061  -0.823  -1.698  -0.050  0.548   -1.014
3      -3.566  -1.222  13.367  -2.310  -1.292  -0.306  -0.480  -1.804  -0.115  1.294
4      -1.552  -0.883  -2.097  14.531  -2.841  -1.489  -0.481  -1.641  -0.873  -0.745
5      -1.439  -0.192  -0.054  -1.726  10.923  -0.315  -0.013  -0.910  -1.126  -1.324
6      -3.714  -2.512  -1.221  -1.440  -2.578  4.990   -0.984  -3.021  -1.550  -1.286
7      -0.652  0.624   -1.089  -0.720  -2.394  -1.914  4.862   -1.180  -0.182  -1.303
8      -0.084  -0.628  -2.404  -0.793  0.902   -1.882  -1.024  10.583  -2.016  -0.621
9      -0.957  -0.660  -1.341  -0.813  -0.001  -1.978  -1.019  -2.065  13.342  -1.373
10     -1.688  -1.148  -1.662  0.110   1.448   -0.982  -1.161  2.387   -0.212  2.978

Table 8. Fused scores of 10 users using product rule fusion through Quantile Normalization

Users  A       B       C       D       E       F       G       H       I       J
1      1.126   0.125   0.227   -0.199  0.066   0.001   -0.096  -0.194  -0.510  0.281
2      0.222   -3.327  0.018   0.612   0.073   0.118   0.575   -0.025  -0.320  0.115
3      0.805   0.333   29.949  0.693   0.296   -0.043  0.055   0.514   -0.055  -0.584
4      0.317   0.194   0.255   37.912  0.494   0.302   0.008   0.456   -0.224  0.080
5      0.100   -0.090  0.001   0.486   16.317  -0.014  -0.032  0.162   0.214   0.092
6      0.840   0.955   0.307   0.479   0.985   -0.664  0.223   1.072   0.424   0.330
7      0.087   -0.599  0.261   0.127   0.814   0.617   -0.363  0.316   -0.133  0.367
8      -0.038  0.076   0.295   0.157   -0.202  0.227   0.189   13.187  0.429   -0.375
9      0.214   0.108   0.382   0.153   -0.005  0.705   0.070   0.606   44.127  0.445
10     0.350   0.303   0.344   -0.269  0.128   0.222   0.286   -0.327  -0.050  2.144


Table 9. Fused scores of 10 users using min rule fusion through Quantile Normalization

Users  A       B       C       D       E       F       G       H       I       J
1      0.937   -0.467  -0.640  -0.467  -0.467  -0.298  -0.411  -0.354  -0.354  -0.685
2      0.263   -0.354  -1.007  -1.311  -0.987  -0.637  -1.231  -0.186  -0.354  -0.885
3      -3.324  -0.812  2.847   -1.955  -0.994  -0.411  -0.294  -1.450  -0.298  -0.354
4      -1.310  -0.472  -1.967  3.409   -2.655  -1.247  -0.464  -1.287  -1.081  -0.615
5      -1.365  -0.411  -0.036  -1.371  1.786   -0.354  -0.186  -0.668  -0.883  -1.250
6      -3.472  -2.045  -0.867  -0.917  -2.111  -0.130  -0.630  -2.610  -1.195  -0.931
7      -0.466  -0.523  -0.735  -0.411  -1.983  -1.503  -0.074  -0.769  -0.467  -0.893
8      -0.242  -0.467  -2.274  -0.411  -0.186  -1.752  -0.782  1.443   -1.774  -0.997
9      -0.603  -0.354  -0.930  -0.514  -0.074  -1.511  -0.945  -1.711  6.058   -0.850
10     -1.446  -0.737  -1.420  -0.467  0.095   -0.628  -0.807  -0.130  -0.354  1.218

Table 10. Fused scores of 10 users using max rule fusion through Quantile Normalization

Users  A       B       C       D       E       F       G       H       I       J
1      1.202   -0.269  -0.354  0.426   -0.142  -0.004  0.235   0.547   1.438   -0.411
2      0.843   9.389   -0.017  -0.467  -0.074  -0.186  -0.467  0.136   0.903   -0.130
3      -0.242  -0.411  10.520  -0.354  -0.298  0.104   -0.186  -0.354  0.183   1.649
4      -0.242  -0.411  -0.130  11.123  -0.186  -0.242  -0.017  -0.354  0.207   -0.130
5      -0.074  0.218   -0.017  -0.354  9.137   0.040   0.173   -0.242  -0.242  -0.074
6      -0.242  -0.467  -0.354  -0.523  -0.467  5.120   -0.354  -0.411  -0.354  -0.354
7      -0.186  1.147   -0.354  -0.309  -0.411  -0.411  4.936   -0.411  0.284   -0.411
8      0.158   -0.162  -0.130  -0.383  1.088   -0.130  -0.242  9.140   -0.242  0.376
9      -0.354  -0.306  -0.411  -0.298  0.073   -0.467  -0.074  -0.354  7.284   -0.523
10     -0.242  -0.411  -0.242  0.577   1.353   -0.354  -0.354  2.517   0.142   1.760

Table 11. Fused scores of 10 users using sum rule fusion through Delta normalization

Users  A      B      C      D      E      F      G      H      I      J
1      0.501  0.213  0.284  0.213  0.213  0.314  0.251  0.285  0.286  0.250
2      0.459  0.296  0.410  0.211  0.396  0.361  0.211  0.362  0.286  0.380
3      0.335  0.250  0.534  0.282  0.313  0.251  0.361  0.283  0.314  0.287
4      0.338  0.250  0.378  0.537  0.358  0.338  0.411  0.283  0.450  0.380
5      0.395  0.251  0.411  0.283  0.529  0.285  0.362  0.339  0.338  0.396
6      0.335  0.210  0.283  0.170  0.210  0.387  0.284  0.247  0.283  0.283
7      0.361  0.172  0.284  0.250  0.248  0.249  0.403  0.250  0.213  0.250
8      0.340  0.213  0.378  0.250  0.363  0.379  0.339  0.522  0.337  0.468
9      0.284  0.284  0.250  0.313  0.397  0.211  0.396  0.282  0.534  0.170
10     0.338  0.250  0.338  0.214  0.436  0.284  0.283  0.384  0.285  0.509

Table 12. Fused scores of 10 users using product rule fusion through Delta normalization


Users  A      B      C      D      E      F      G      H      I      J
1      0.014  0.005  0.007  0.005  0.005  0.008  0.006  0.007  0.007  0.006
2      0.012  0.010  0.010  0.005  0.010  0.009  0.005  0.009  0.007  0.009
3      0.007  0.006  0.020  0.006  0.007  0.006  0.009  0.007  0.008  0.008
4      0.008  0.006  0.009  0.020  0.008  0.008  0.010  0.007  0.011  0.009
5      0.009  0.006  0.010  0.007  0.015  0.007  0.009  0.008  0.008  0.009
6      0.007  0.005  0.007  0.004  0.005  0.012  0.007  0.005  0.007  0.007
7      0.009  0.004  0.007  0.006  0.006  0.006  0.012  0.006  0.005  0.006
8      0.009  0.005  0.009  0.006  0.010  0.009  0.008  0.019  0.008  0.011
9      0.007  0.007  0.006  0.008  0.010  0.005  0.010  0.006  0.017  0.004
10     0.008  0.006  0.008  0.005  0.012  0.007  0.007  0.011  0.007  0.014

Table 13. Fused scores of 10 users using min rule fusion through Delta normalization

Users  A      B      C      D      E      F      G      H      I      J
1      0.029  0.027  0.026  0.028  0.027  0.027  0.028  0.028  0.029  0.026
2      0.028  0.039  0.026  0.026  0.026  0.026  0.026  0.027  0.028  0.026
3      0.023  0.026  0.041  0.025  0.026  0.027  0.027  0.025  0.027  0.029
4      0.026  0.027  0.025  0.041  0.024  0.026  0.027  0.026  0.026  0.026
5      0.025  0.027  0.027  0.025  0.029  0.027  0.027  0.026  0.026  0.026
6      0.023  0.025  0.026  0.026  0.025  0.034  0.026  0.024  0.026  0.026
7      0.027  0.029  0.026  0.027  0.025  0.025  0.033  0.026  0.028  0.026
8      0.027  0.027  0.024  0.027  0.029  0.025  0.026  0.039  0.025  0.026
9      0.026  0.027  0.026  0.027  0.027  0.025  0.026  0.025  0.035  0.026
10     0.025  0.026  0.025  0.028  0.029  0.026  0.026  0.030  0.027  0.029

Table 14. Fused scores of 10 users using max rule fusion through Delta normalization

Users  A      B      C      D      E      F      G      H      I      J
1      0.473  0.186  0.257  0.186  0.186  0.287  0.224  0.257  0.257  0.224
2      0.431  0.257  0.384  0.186  0.370  0.334  0.186  0.334  0.257  0.354
3      0.312  0.224  0.494  0.257  0.287  0.224  0.334  0.257  0.287  0.257
4      0.312  0.224  0.354  0.495  0.334  0.312  0.384  0.257  0.424  0.354
5      0.370  0.224  0.384  0.257  0.499  0.257  0.334  0.312  0.312  0.370
6      0.312  0.186  0.257  0.144  0.186  0.354  0.257  0.224  0.257  0.257
7      0.334  0.144  0.257  0.224  0.224  0.224  0.370  0.224  0.186  0.224
8      0.312  0.186  0.354  0.224  0.334  0.354  0.312  0.484  0.312  0.442
9      0.257  0.257  0.224  0.287  0.370  0.186  0.370  0.257  0.499  0.144
10     0.312  0.224  0.312  0.186  0.407  0.257  0.257  0.354  0.257  0.480

11. RESULT
The analytical results of the multibiometric system for the four fusion rules have been examined. The Genuine Acceptance Rate (GAR) and False Rejection Rate (FRR) for Delta and Quantile normalization with the four fusion strategies have been evaluated and are shown in Table 15 and Table 16. The threshold measures in Table 15 and Table 16 are the diagonal measures obtained after fusion of the transformed scores. The genuine acceptance rates and false rejection rates have been computed for several threshold values, which are in fact genuine matching scores obtained after the integration.
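
The sketch below shows one plausible reading of this evaluation, assuming a probe is accepted as genuine when its diagonal fused score meets the threshold; the paper does not spell out the exact decision protocol, so the rule here is an assumption:

    import numpy as np

    def gar_frr(fused_matrix, threshold):
        # The diagonal of the fused matrix holds the genuine matching scores;
        # accept-if-score >= threshold is an assumed decision rule.
        genuine = np.diag(fused_matrix)
        gar = 100.0 * np.mean(genuine >= threshold)
        return gar, 100.0 - gar  # GAR and FRR in percent

    # Sweeping the threshold over the diagonal fused scores traces out
    # GAR/FRR pairs analogous to the rows of Tables 15 and 16.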

Table 15. GAR and FRR for Quantile normalization with four fusion rules

          Sum              Product            Min                Max
Threshold GAR  FRR   Threshold GAR  FRR   Threshold GAR  FRR   Threshold GAR  FRR
2.139     100  0     -3.327    82   18    -1.30     92   8     1.202     99   1
4.990     100  0     -0.363    89   11    -0.354    96   4     4.936     100  0
9.034     100  0     1.126     100  0     1.786     100  0     5.120     100  0
10.92     100  0     29.94     100  0     2.847     100  0     9.389     100  0
13.37     100  0     37.19     100  0     3.409     100  0     10.52     100  0
14.53     100  0     44.12     100  0     6.058     100  0     11.12     100  0

Table 16. GAR and FRR for Delta normalization with four fusion rules

          Sum              Product            Min                Max
Threshold GAR  FRR   Threshold GAR  FRR   Threshold GAR  FRR   Threshold GAR  FRR
0.296     84   16    0.010     96   4     0.029     96   4     0.257     92   0
0.387     91   9     0.014     100  0     0.033     100  0     0.354     97   0
0.403     97   3     0.015     100  0     0.034     100  0     0.370     98   0
0.501     100  0     0.017     100  0     0.035     100  0     0.473     100  0
0.534     100  0     0.019     100  0     0.039     100  0     0.480     100  0
0.537     100  0     0.020     100  0     0.041     100  0     0.495     100  0

CONCLUSION
The substantial exercise presented in this paper is an elemental study of how more than one biometric entity can be fused to generate a more practicable and efficacious authentication system. Two normalization techniques for the database have been used together with sum rule and product rule fusion (and, for comparison, the min and max rules). The distinction between these methods has been established on the basis of genuine and false recognition rates. Furthermore, the Delta normalization function has given a reasonable performance advantage over the Quantile normalization method and has rendered superior GAR and FRR figures.
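
For reference, quantile normalization in the sense of Bolstad [7] forces the score columns of several matchers onto one common distribution. A minimal sketch follows; the exact Quantile and Delta variants used in this paper are defined earlier in the article and may differ in detail:

    import numpy as np

    def quantile_normalize(cols):
        # cols: 2-D array with one column of raw match scores per matcher.
        cols = np.asarray(cols, dtype=float)
        order = np.argsort(cols, axis=0)             # per-column rank order
        rank_means = np.sort(cols, axis=0).mean(1)   # mean of each rank across columns
        out = np.empty_like(cols)
        for j in range(cols.shape[1]):
            out[order[:, j], j] = rank_means         # give each rank its mean value
        return out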
REFERENCES
[1] A. K. Jain, "Biometric recognition," Nature, vol. 449, pp. 38-40, September 2007.
[2] A. K. Jain, A. Ross and S. Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-Based Biometrics, vol. 14, no. 1, pp. 4-20, 2004.
[3] C. Kant and R. Nath, "Reducing Process-Time for Fingerprint Identification System," International Journal of Biometrics and Bioinformatics, vol. 3, issue 1, pp. 1-9, 2009.
[4] A. K. Jain and A. Ross, "Multibiometric systems," Communications of the ACM, vol. 47, pp. 34-40, 2004.
[5] C. Ren, Y. Yin, J. Ma and G. Yang, "A novel method of score level fusion using multiple impressions for fingerprint verification," Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, San Antonio, TX, USA, October 2009.
[6] A. Jain, K. Nandakumar and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, pp. 2270-2285, 2005.
[7] B. Bolstad, "Probe Level Quantile Normalization of High Density Oligonucleotide Array Data," December 2001.
[8] Mathematical index normalization, available at: http://people.revoledu.com/kardi/tutorial/Similarity/Normalization.html
[9] Wenyi Zhao and Rama Chellappa, Face Processing: Advanced Modeling and Methods.
[10] Konstantinos N. Plataniotis, Color Image Processing and Applications.
[11] N. V. Boulgouris (ed.), Biometrics: Theory, Methods and Applications.
[12] Andreas Koschan and Mongi Abidi (eds.), Digital Color Image Processing.

AN OVERVIEW OF BUCKLING ANALYSIS OF SINGLE PLY
COMPOSITE PLATE WITH CUTOUTS
Parth Bhavsar [1], Prof. Krunal Shah [2], Prof. Sankalp Bhatia [3]

[1] Student, M.E. (CAD/CAM), A. D. Patel Institute of Technology, New V.V.Nagar, Gujarat, India
[2], [3] Assistant Professor, A. D. Patel Institute of Technology, New V.V.Nagar, Gujarat, India
E-mail Id: parthbhavsar20990@gmail.com

Abstract
The use of composite materials in the aircraft industry is booming nowadays because of their high strength-to-weight ratio. These materials are used to build main components of aircraft such as the fuselage, panels, rudder and skins. It is unavoidable to provide cutouts of different sizes and shapes in such components for different purposes, e.g. inspection and maintenance. The buckling behavior of composite plates with different shapes, sizes and locations of cutouts has been explored by many researchers in the last two decades. The present review article attempts to summarize and critically examine the current status, problems and opportunities in the use of single-ply composites in the aircraft industry.
Keywords: composite material, aircraft, single ply, cutout size, cutout shape, buckling analysis by FEA and experiment.
INTRODUCTION
The buckling behavior of plates has been studied by researchers in structural mechanics, for aircraft and other structures, for over a century. Steel, aluminium and titanium plates are often used as the main components of aircraft structures such as the fuselage, panels, elevator, rudder and skins. To make optimum structural components, the material must have better characteristics than other conventional materials, and composite materials such as carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP) and aramid fiber reinforced polymer (AFRP) are therefore widely used for this purpose.

Various holes, vents, cutouts and passages are provided in such components for different purposes: to give access for inspection and maintenance, or simply to reduce weight. Such cutouts in plate elements lead to changes in the stress distribution within the member and to variations in the buckling characteristics of the plate element. The effects of the shape, size and location of cutouts, and of the type of applied load, on the performance and buckling behavior of perforated plates have been investigated by several researchers over the past two decades, and their results are reviewed below.
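
As a classical point of reference (standard thin-plate theory, not a result from any of the reviewed papers), the critical buckling stress of a simply supported rectangular plate without a cutout under uniaxial compression can be sketched as follows; the material values in the example are assumed, for illustration only:

    import math

    def critical_buckling_stress(E, nu, t, b, k=4.0):
        # sigma_cr = k * pi^2 * E / (12 * (1 - nu^2)) * (t / b)^2,
        # with k ~ 4 for a long plate simply supported on all edges.
        return k * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / b) ** 2

    # Assumed GFRP-like values: E = 20 GPa, nu = 0.25, t = 2 mm, b = 200 mm.
    # Result ~ 7 MPa; the (t/b)^2 dependence matches the reviewed finding
    # that the buckling load falls as the length/thickness ratio grows.
    print(critical_buckling_stress(20e9, 0.25, 0.002, 0.200))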
REVIEW

The following papers have been reviewed for this study, with attention to their methodology, principles and conclusions.

Donnie G. Brady and Melody A. Hammond explored the manufacturing and use of single-ply composite plates in a patent for a single-ply reinforced thermoplastic composite. Multiple-ply laminates have been used in the aircraft industry for window shades and other applications, but they are relatively more expensive than single-ply composites. Earlier single-ply laminates, however, had the problem of being brittle, and were thus not well adapted to being rolled, as is required in a window-shade application as well as for layering the fuselage, fairings and other components. The invention of a single-ply thermoplastic composite is a very useful concept for aircraft manufacturers.

Arunkumar R. investigated the buckling of woven glass-epoxy laminated composite plates. In this study, the influence of cutout shape, length/thickness ratio, ply orientation and aspect ratio on the buckling of a woven glass-epoxy laminated composite plate was examined analytically and experimentally. The following conclusions were drawn:
- The buckling load decreases as the L/t (length-to-thickness) ratio of the plate increases.
- As the aspect ratio increases, the critical buckling load of the plate decreases.
- When the fiber angle increases, the buckling load decreases.
- The presence of a cutout lowers the buckling load, and the reduction varies with the cutout shape; among the shapes studied, the plate with a circular cutout yielded the greatest critical buckling load.

Mahmoud Shariati and Ali Dadrasi studied, numerically and experimentally, the effect of the loading band on the buckling of perforated rectangular steel plates. The aim of their paper was to investigate the buckling behavior of rectangular steel plates with circular and square cutouts under uniaxial in-plane compressive loading in the elasto-plastic range, for various loading bands, using numerical and experimental methods. The experiments were carried out on a universal testing machine (UTM). The results show that as the loading band increases, the ultimate buckling load also increases, and that the buckling load of a specimen with a circular cutout is slightly higher than that of a specimen with a square cutout of equal surface area.

The effect of cutout aspect ratio on the buckling and postbuckling strengths of composite panels under shear was studied by S. B. Singh and Dinesh Kumar. This research deals with the effect of the circular cutout aspect ratio on the buckling and postbuckling strengths, and the failure characteristics, of a simply supported quasi-isotropic composite laminate subjected to in-plane shear load (positive and negative). They conclude that the buckling and first-ply failure loads decrease monotonically with increasing aspect ratio (i.e. d/b ratio) of the centrally placed circular cutout.

Dr. Riyah N. K. and Ahmed N. E. presented a stress analysis of composite plates with different types of cutouts: an experimental and theoretical investigation of the effect of cutouts on the stress and strain of composite laminate plates subjected to static loads. A numerical investigation was carried out using the software package ANSYS, involving static analysis of symmetric square plates with different types of cutouts, which gives:
- The value of normal strain at the edge of a square hole is greater than the value at the edge of a circular hole.
- Increasing the ratio of hole dimensions to plate width increases the maximum values of stress and strain of a symmetric square plate.
- The value of maximum stress increases in the order circular, square, triangular, hexagonal cutout, whereas the value of maximum strain increases in the order circular, square, hexagonal, triangular cutout.

Dr. Hani Aziz Ameen studied the buckling of composite laminated plates with cutouts, with the following results:
- Cutouts of any shape decrease the critical buckling load.
- The critical buckling load decreases as the cutout size increases.
- The critical buckling load increases by a small ratio, about 0.65%, as the orientation angle of the cutout increases from 0° to 60°.

The buckling of quasi-isotropic, symmetrically laminated rectangular composite plates with an elliptical or circular cutout was analyzed by Lakshminarayana, R. Vijaya Kumar and G. Krishna Mohana Rao under different loading conditions, using finite element analysis (FEA). The results show that the buckling loads of rectangular composite plates subjected to linearly varying in-plane loads decrease with increasing cutout position angle and with increasing c/b and d/b ratios. As the plate length/thickness (a/t) ratio increases, the buckling load decreases, irrespective of cutout shape, size and orientation, boundary conditions, and the various linearly varying in-plane compressive loading conditions.

A closed-form solution for the stress concentration around a circular hole in a linearly varying stress field was developed by Raghavendra Nilugal and Dr. M. S. Hebbal. Their paper presents a closed-form solution, based on the theory of elasticity, for the stress concentration around a circular hole in an infinite isotropic plate subjected to linearly varying stress; numerical methods such as the finite element method, the finite difference method and the boundary element method can also be employed for this problem. The equation developed in their work can be used to determine the stress field around the circular hole, and the results obtained are compared with FEA results.
New equations for the stress distribution around a circular hole in an isotropic infinite plate under a linearly varying stress field are formulated using the closed-form solution, extending Kirsch's problem of the stress concentration due to a circular hole in a stressed plate. The maximum stress concentration is found at the edge of the hole at an angle of 90° from the load direction. A localized stress concentration factor (SCF) is calculated; the localized SCF is maximum at the hole and decreases as the radius increases, and at the end of the plate the localized SCF equals 1, meaning the stresses have converged. The authors validated the FEA results against the closed-form solution of the newly derived equation.


Payal Jain and Ashwini Kumar studied the postbuckling response of square laminates with a central circular or elliptical cutout. The finite element method is used to analyse the postbuckling response of symmetric square laminates with a central cutout under uniaxial compression. The formulation is based on Mindlin's plate theory and von Karman's assumptions to incorporate geometric nonlinearity, and the governing finite element equations are solved using the Newton-Raphson method. Laminates with circular and elliptical cutouts are considered with a view to examining the effect of cutout shape, size and the alignment of the elliptical cutout on the buckling and first-ply failure loads; these parameters are observed to have a substantial influence on the reserve strength which laminates may possess beyond buckling.
Their results are as follows. For the geometries studied, the buckling and first-ply failure loads decrease as the diameter of a centrally placed circular cutout increases. A laminate with an elliptical cutout aligned along the loading direction has a lower buckling load than one with a corresponding circular cutout, while a laminate with an elliptical cutout aligned perpendicular to the loading direction has a higher buckling load than when the cutout is aligned along the loading direction. The actual postbuckling strength of a laminate can be ascertained if the allowable transverse deflection is prescribed. A simple laminate without any cutout fails near the diagonal corner, but when a circular cutout is present, failure shifts towards the cutout edge; with an elliptical cutout, failure takes place near the vertex of the cutout.
ACKNOWLEDGMENT
I would like to thank my parents for supporting and encouraging me during this work; it is by their support that I am able to stand at today's position. I am very glad to thank my dissertation supervisors, Professor Krunal Shah and Professor Sankalp Bhatia, who have led me into an area of research in which I have growing interest. Their expertise, vision and diligence, coupled with their flexibility and patience, formed the excellent, tailored guidance needed for this work and the preparation of this paper. I am also thankful to all my friends, who encouraged me a great deal in study as well as in life. To all of them I am most grateful.
CONCLUSION
The buckling behavior of glass fiber reinforced polymer plates subjected to linearly varying loading has been studied by the finite element method, and the effects of various parameters on the buckling load of rectangular plates with an aspect ratio of 1 have been investigated. Several parameters affect the results of such studies: cutout size, cutout angle, specimen (plate) geometry, thickness, fiber type, stacking sequence (number of plies), ply angles and loading condition.
Based on the findings, the following conclusions and recommendations are made:
- A single-ply composite adapted to being rolled, as required in window-shade applications, can also be used for layering the fuselage, fairings and other components in aircraft manufacturing; it can be layered onto different structural components to strengthen the structure.
- The presence of a cutout lowers the buckling load, and the reduction varies with the cutout shape.
- The value of maximum stress increases in the order circular, square, triangular, hexagonal cutout, and the value of maximum strain increases in the order circular, square, hexagonal, triangular cutout.
- The critical buckling load decreases as the cutout size increases.
- The buckling load of a specimen with a circular cutout is slightly higher than that of a specimen with a square cutout of equal surface area.
- The value of normal strain at the edge of a square hole is greater than the value at the edge of a circular hole.
- The critical buckling load increases by a small ratio, about 0.65%, as the orientation angle of the cutout increases from 0° to 60°.
- A laminate plate without a cutout fails near the diagonal corner, but in the presence of a circular cutout the failure location shifts towards the cutout edge; with an elliptical cutout, failure takes place near the vertex of the cutout.

REFERENCES:
[1] Donnie G. Brady and Melody A. Hammond, "Single ply reinforced thermoplastic composite," Patent US5089207 A, filed 1988, published 1992, application number US 07/237,708.
[2] Arunkumar R., "Buckling Analysis of Woven Glass Epoxy Laminated Composite Plate," May 2009.
[3] Mahmoud Shariati and Ali Dadrasi, "Numerical and Experimental Investigation of Loading Band on Buckling of Perforated Rectangular Steel Plates," Research Journal of Recent Sciences, ISSN 2277-2502, vol. 1(10), pp. 63-71, October 2012.
[4] S. B. Singh and Dinesh Kumar, "Effect of cutout aspect ratio on buckling and postbuckling strengths of composite panel under shear," 16th International Conference on Composite Structures, ICCS 16, 2011.
[5] Dr. Riyah N. K. and Ahmed N. E., "Stress Analysis of Composite Plates with Different Types of Cutouts," Anbar Journal of Engineering Sciences, AJES, vol. 2, no. 1, 2009.
[6] Dr. Hani Aziz Ameen, "Buckling Analysis of Composite Laminated Plate with Cutouts," Engg. & Tech. Journal, vol. 27, no. 8, pp. 1611-1621, 2009.
[7] Lakshminarayana, R. Vijaya Kumar and G. Krishna Mohana Rao, "Buckling analysis of quasi-isotropic symmetrically laminated rectangular composite plates with an elliptical/circular cutout subjected to linearly varying in-plane loading using FEM," International Journal of Mechanics, issue 1, vol. 6, pp. 508-517, 2012.
[8] William L. Ko, "Anomalous Buckling Characteristics of Laminated Metal-Matrix Composite Plates With Central Square Holes," NASA/TP-1998-206559, Dryden Flight Research Center, Edwards, California, July 1998.
[9] Ganesan C. and P. K. Dash, "Elasto Buckling Behaviour of GFRP Laminated Plate with Central Holes," International Journal of Mechanical & Industrial Engineering, vol. 1, issue 1.
[10] M. S. R. Niranjan Kumar, M. M. M. Sarcar and V. Bala Krishna Murthy, "Static analysis of thick skew laminated composite plate with elliptical cutout," Indian Journal of Engineering and Materials Sciences, vol. 16, pp. 37-43, Feb. 2009.
[11] Raghavendra Nilugal and Dr. M. S. Hebbal, "A closed-form solution for stress concentration around a circular hole in a linearly varying stress field," IJMET, vol. 4, issue 5, pp. 37-48, September-October 2013.
[12] Payal Jain and Ashwini Kumar, "Postbuckling response of square laminates with a central circular/elliptical cutout," Composite Structures, vol. 65, pp. 179-185, 2004.
[13] Khaled M. El-Sawy, Aly S. Nazmy and Mohammad Ikbal Martini, "Elasto-plastic buckling of perforated plates under uniaxial compression," Thin-Walled Structures, vol. 42, pp. 1083-1101, 2004.
[14] Alan Baker, Stuart Dutton and Donald Kelly, Composite Materials for Aircraft Structures, Second Edition, American Institute of Aeronautics and Astronautics, Inc.
Restructuring of an Air Classifier Rotor by Finite Element Analysis
Anuj Bajaj [1], Gaurav V. Patel [2], Mukesh N. Makwana [2]

[1] Graduate Research Assistant, Mechanical Engineering Department, Arizona State University, United States
[2] Assistant Professor, Department of Mechanical Engineering, A. D. Patel Institute of Technology, New V.V. Nagar, Gujarat, India
E-Mail: ergvpatel@gmail.com

Abstract: The air classifier is a machine that finds its major applications in mineral processing plants. One of its functions is to separate finer particles from coarse particles so as to provide the required particle size at the output. Air classifiers use various principles of operation depending on their prerequisites and usage, and various running parameters can be varied to obtain the desired output particle sizes, which are governed by specific particle size distribution curves. The air classifier considered in this paper is based on a deflector wheel principle; these deflector wheels are the rotors. During a trial run on a dynamic balancing machine the rotors failed structurally, resulting in permanent plastic deformation. This indicated a fault in the design, thus requiring failure analysis and restructuring. This paper points out an alternative to the current design, keeping in mind the constraints of the system. The authors first calculated the failure manually by an analytical approach; the design was then examined and verified using a well-known finite element analysis software package, followed by the development of conceptual designs and the choice of the most optimal one. The optimized design offered structural integrity to the rotor with minimal reduction in area, and hence in performance.
Keywords: air classifier, rotor, failure diagnosis, simulation, finite element analysis, von Mises stress, stiffener ring
1. INTRODUCTION
An air classifier is an industrial machine which sorts materials by a combination of size, shape, and density. It works by injecting the
material stream to be sorted into a chamber which contains a column of rising air. Inside the separation chamber, air drag on the
objects supplies an upward force which counteracts the force of gravity and lifts the material to be sorted up into the air. Due to the
dependence of air drag on object size and shape, the objects in the moving air column are sorted vertically and can be separated in this
manner. Air classifiers are commonly employed in industrial processes where a large volume of mixed materials with differing
physical characteristics need to be sorted quickly and efficiently. [1]
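
As background for the drag-versus-gravity balance just described (standard Stokes-law physics, not a formula from this paper), a fine particle is carried upward only while the rising-air velocity exceeds its terminal settling velocity:

    def stokes_terminal_velocity(d, rho_p, rho_air=1.2, mu=1.8e-5, g=9.81):
        # Terminal settling velocity (m/s) of a small sphere in air,
        # valid at low Reynolds number (fine particles).
        return (rho_p - rho_air) * g * d**2 / (18.0 * mu)

    # e.g. a 20 micrometer particle of assumed density 2700 kg/m^3
    # settles at about 0.033 m/s, so slower updrafts let it fall back.
    v_t = stokes_terminal_velocity(20e-6, 2700.0)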

Fig. 1. (a) Typical layout of a mineral processing plant; (b) fully assembled air classifier; (c) air classifier sectional view with particle flow.
Air classifiers use various principles of operation depending on their need and use. Various running parameters are varied to obtain the desired output particle sizes, and these outputs are governed by specific particle size distribution curves. The air classifier considered in this paper is based on a deflector wheel principle. These deflector wheels are the rotors; they are responsible for the screening action required for particle separation [5]. Product fineness is controlled by the speed of the classifier wheel, which is adjusted using a frequency controller; classification in the range of 150 to 2.5 micrometers is possible. The material, after coming out of the outlet of the ball mill, is sucked upwards by an induced-draught fan provided at the top of the system, and this ground material flows through the classifier rotor. The size of particle that is allowed to pass through is governed by the speed of the rotor.
The particles obtained from the process mills vary over a range from a few micrometers to a hundred micrometers, but the output required varies from 6 to 20 micrometers; this can be obtained by setting the speed of the rotor [4].
2. COMPONENT FAILURE
The rotor, after being manufactured, was tested on a dynamic balancing machine at 3000 rpm. While under test it suddenly failed, resulting in permanent plastic deformation of the blades. It was suspected that the centrifugal force acting on the blades was responsible for this failure: it pulled the blades out, and the end plate moved into the rotor under that action, as shown in figure 2(a). The horizontal length of the damaged region was found to be approximately 150 mm. The welds of a few blades also broke.

Fig. 2. (a) CAD model of existing component; (b) failure of the component.
2.1. Failure Analysis
The reason for the failure, as mentioned earlier, was suspected to be the centrifugal force. The concept applied to show the failure by manual calculation was to treat one blade as a simply supported beam. From the manual calculations it was found that the stress generated in the rotor was 337.523 MPa [2]. The material used in the rotor is plain carbon steel (IS 2062), which has a yield strength of 240 MPa; hence the stress generated by the centrifugal force acting on the blades causes the plastic deformation that leads to failure. To verify the result, structural analysis of the rotor was performed in the FEA software ANSYS 13.0.
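
A minimal sketch of this simply-supported-beam check is given below. The paper reports the method and the resulting 337.523 MPa but not the blade dimensions, so every geometric value in the example call is an assumed placeholder; with such values the check lands in the few-hundred-MPa range, the same order as the reported stress:

    import math

    def blade_bending_stress(rpm, rho, r, L, b, t):
        # One blade treated as a simply supported beam loaded by its own
        # centrifugal inertia, taken as a uniformly distributed load.
        omega = 2.0 * math.pi * rpm / 60.0    # angular speed, rad/s
        w = rho * b * t * r * omega ** 2      # centrifugal load per unit length, N/m
        M = w * L ** 2 / 8.0                  # peak moment of a simply supported beam
        I = b * t ** 3 / 12.0                 # second moment about the weak axis
        return M * (t / 2.0) / I              # outer-fibre bending stress, Pa

    # 3000 rpm, steel (7850 kg/m^3); assumed r = 0.25 m, L = 0.11 m,
    # b = 40 mm, t = 5 mm -> roughly 350 MPa, above the 240 MPa yield of IS 2062.
    print(blade_bending_stress(3000, 7850.0, 0.25, 0.11, 0.040, 0.005) / 1e6)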
3. SIMULATION

3.1. Air Classifier Rotor

Fig. 3. (a) Static structural total deformation; (b) static structural equivalent stress developed in the old rotor.
The maximum deformation in the static structural analysis of the air classifier rotor was observed to be 0.98 mm, occurring at the middle of the blades as shown in figure 3(a). The maximum von Mises stress, of magnitude 310.68 MPa, was also observed at the center of a blade, as shown in figure 3(b). The minimum safety factor of the design was found to be 0.7826, which clearly indicated a likely cause of failure in the rotor; the simulation results also concurred with the manual calculations.
4. RESTRUCTURING OF THE ROTOR
4.1. Variables of the system
The following variables can be adjusted without affecting the rotor's performance to a large extent.

- A slight reduction in the inlet area of the rotor; beyond a limit, however, the performance of the air classifier as a whole is affected.
- The material of construction of the vanes can be changed. However, as the vanes are welded onto the hubs, the weld between two dissimilar metals must be strong enough not to give way during high-speed rotation. The cost and strength of the new material must also be taken into consideration.
- The thickness of the blades can be marginally increased, by a maximum of 1 mm. A further increase in thickness would add weight to the rotor, which loads the motor bearing, as the entire rotor is mounted on the motor without any other support. In addition, a higher thickness leaves less area for the movement of material across the classifier, which degrades classifier performance.
4.2. Restrictions of the system
- The system has been designed to give outputs of different quantities based on the different particle size requirements of the material.
- Reducing the length of the rotor reduces the rotor area, which results in lower output, so a change of rotor length was ruled out.
- Reducing the diameter reduces the output of the system, and an increase is not possible due to space restrictions; hence the diameter also cannot be changed.
- A higher weight would load the motor bearing, and the torque to be transmitted would increase, which may require a larger motor. A larger motor would have mounting problems and make the system very bulky.
- A change in the particle size distribution is undesirable, as it would affect all the other parameters up and down the mineral processing plant. [3]


Keeping in mind the constraints and variables of the system, the following alternatives/improvements to the current design of the rotor were considered:
- Provide a stiffening ring.
- Change the material.
- Change the dimensions of the blades.

Among these alternatives, the best option was found to be a stiffening ring at the point where the maximum bending moment acts.

Fig. 4. (a) Stiffener specifications; (b) CAD model of the stiffener ring; (c) CAD model of the modified rotor.
The material used is IS 2062 Grade B. The stiffener has the shape of a circular annular ring; just like the hub and the end plate, it has 72 axisymmetric slots on its outer periphery where the blades can be welded.

Table 1. Stiffener design specifications

UTS      TYS      CYS      Outer radius  Inner radius  Thickness
410 MPa  240 MPa  240 MPa  250 mm        210 mm        10 mm

Figure 4(c) shows the modification made to the original design of the rotor. As obtained from the failure calculations and the ANSYS results, the bending moment is maximum at the center of the blade; hence the stiffener ring has been placed exactly at the middle of the blade.

From analytical calculations, the maximum stress and deformation developed in the blades were found to be 80.145 MPa and 0.05113 mm respectively. Although the stress in the blades is reduced, the stress in the stiffener might be higher; hence an ANSYS analysis of the modified rotor design was performed.
4.3. Analysis of Modified rotor

Fig. 5. (a) Static structural total deformation; (b) static structural equivalent stress developed in the modified rotor.
The maximum deformation in the ANSYS static structural analysis of the modified rotor was observed to be 0.235 mm, and the maximum von Mises stress to be 217.58 MPa. The maximum stress was found on the stiffener ring and not on the blades, a drastic improvement over the original model. Upon performing a transient analysis, however, the stress was still found to be higher than the yield strength of the material; hence the new design, although more stable, needed to be optimized for a better safety factor and stability.
4.4. Analysis of Optimized rotor

Fig. 6. (a) Static structural total deformation; (b) static structural equivalent stress developed in the optimized rotor.
On decreasing the stiffener inner radius from 210 mm to 200 mm and increasing its thickness from 10 mm to 15 mm, the maximum deformation was reduced to 0.16 mm, as shown in figure 6(a), and the maximum stress to 186.43 MPa, at the stiffener ring, as shown in figure 6(b). The minimum factor of safety of the optimized rotor was found to be 1.341, which is acceptable, as 3000 rpm is the maximum operational speed of the rotor in the system; hence there is no possibility of a further increase in stress.
5. CONCLUSION
While under test, the rotor of the air classifier suddenly failed, resulting in permanent plastic deformation of the blades. It was suspected that the centrifugal force acting on the blades was responsible for this failure, pulling out the blades, with the end plate moving into the rotor under that action. The authors first calculated the failure manually by an analytical approach; the design was then examined and verified using a well-known finite element analysis software package, followed by the development of conceptual designs and the choice of the most optimal one. In the static structural analysis of this rotor, the maximum deformation and von Mises stress were observed to be 0.98 mm and 310.68 MPa respectively, occurring at the middle of the blades, with a minimum safety factor of 0.7826.
Hence, to overcome this problem, it was found desirable to redesign the rotor by providing a stiffening ring at the point where the maximum bending moment acts. The stress and deflection were brought down to 217.58 MPa and 0.235 mm in the redesigned rotor. However, upon performing a transient analysis, the stress was still higher than the yield strength of the material; the new design, although more stable, therefore needed to be optimized for a better safety factor and stability. On modifying the rotor by decreasing the stiffener's inner radius and increasing its thickness, the stress and deflection were brought down to 186.4 MPa and 0.16 mm respectively in the optimized rotor. Due to a reduction of about 4.1667% in the area through which particles can pass, the performance of the rotor is reduced slightly, within permissible limits.
ACKNOWLEDGEMENTS
The authors are heartily thankful to Mr. Nilesh Parekh for providing the opportunity to work with his organization, Techno Designs, Vitthal Udyognagar. We also express our gratitude to Love Patel and Nilay Patel, final-year mechanical engineering students, for their support during the work. We kindly acknowledge the assistance and resources provided by the Mechanical Engineering Department of A. D. Patel Institute of Technology for the smooth coordination of our work.

REFERENCES:
[1] R. H. Perry, Perry's Chemical Engineers' Handbook, McGraw-Hill, Ch. 8, pp. 8.1-8.22.
[2] R. P. Rethaliya, Strength of Materials, Mahajan Publications, Ch. 5, pp. 5.1-5.30.
[3] Yuan Yu, Jia Xiang Liu and Li Ping Gao, "Experiment Study and Process Parameters Analysis on Turbo Air Classifier for Talc Powder," Advanced Materials Research, vols. 446-449, pp. 522-527, January 2012.
[4] P. B. Crowe and J. J. Peirce, "Particle Density and Air-Classifier Performance," Journal of Environmental Engineering, vol. 114, no. 2, pp. 282-399, 1988.
[5] http://www.hosokawa.co.uk/alpine-zps-circoplex-air-classifier-mill.php

Design of a Low-Voltage and Low-Power Inverter-Based Double-Tail Comparator
B. Prasanthi [1], P. Pushpalatha [1]

[1] ECE, UCEK, JNTUK, Kakinada, India
E-Mail: prasanthi.btech06@gmail.com

ABSTRACT: The design of a low-voltage double-tail comparator with a pre-amplifier and a latching stage is reported in this paper. The design concentrates on the delay of both the single-tail and the double-tail comparator, which are known as clocked regenerative comparators. On this basis a new dynamic comparator is proposed, in which the circuit of the conventional double-tail dynamic comparator is modified for low power and fast operation even at small supply voltages. Simulation results in 0.25 µm CMOS technology confirm the analysis results and show that the proposed dynamic comparator reduces both power consumption and delay time. Both delay and power consumption are reduced by adding two NMOS switches in series to the existing comparator: at a supply voltage of 1.5 V, the proposed and existing comparators consume 15 µW and 16 µW respectively.

Keywords: conventional dynamic comparator, double-tail comparator, proposed dynamic comparator, low power, fast operation, delay.

1. Introduction

The comparator is one of the fundamental building blocks of analog-to-digital converters. Designing a high-speed comparator is more challenging when the supply voltage is smaller; in other words, to achieve high speed, larger transistors are required to compensate for the reduction of the supply voltage, which also means that more die area and power are needed. Developing new circuit structures that avoid stacking too many transistors between the supply rails is preferable for low-voltage operation, especially if they do not increase circuit complexity.
Additional circuitry is added to the conventional dynamic comparator to enhance its speed in low-voltage operation. Many high-speed ADCs, such as flash ADCs, require high-speed, low-power comparators with small chip area. A new dynamic comparator is presented that requires neither a boosted voltage nor the stacking of too many transistors. Merely by adding a few minimum-size transistors to the conventional double-tail dynamic comparator, the latch delay time is profoundly reduced. This modification also results in considerable power savings when compared to the conventional dynamic comparator and the double-tail comparator.

2. Conventional Single-Tail Comparator

The schematic diagram of the conventional dynamic comparator, widely used in A/D converters, is shown in Fig. 2.1. It offers high input impedance, rail-to-rail output swing and no static power consumption.

2.1. Operation

There are two modes of operation: the reset phase and the comparison phase. In the reset phase (clk = 0, Mtail off), the reset transistors M7-M8 are on and pull both output nodes up to VDD, defining the start condition and a valid logical level during the reset phase. In the comparison phase (clk = VDD, Mtail on), the reset transistors M7-M8 are off. The output nodes, precharged to VDD, start to discharge at different rates depending on the corresponding input voltages. Where VINP > VINN, outp (discharged by the drain current of transistor M2) discharges faster than outn (discharged by the drain current of transistor M1); as outp falls by |Vthp| before outn does, the corresponding PMOS transistor M5 turns on, initiating the latch regeneration caused by the back-to-back inverters (M3, M5 and M4, M6). Thus outn pulls up to VDD and outp discharges to ground. If VINP < VINN, the circuit works vice versa. The delay of the comparator comprises two components, t0 and tlatch:

t_0 = \frac{|V_{thp}|\,C_L}{I_2} \approx \frac{2\,|V_{thp}|\,C_L}{I_{tail}}    (1)

where I_2 = I_{tail}/2 + \Delta I_{in}; I_2 can be approximated as constant and equal to half of the tail current.

t_{latch} = \frac{C_L}{g_{m,eff}} \ln\frac{\Delta V_{out}}{\Delta V_0} = \frac{C_L}{g_{m,eff}} \ln\frac{V_{DD}/2}{\Delta V_0}    (2)

where g_{m,eff} is the effective transconductance of the back-to-back inverters. The latch delay thus depends, in a logarithmic manner, on the initial output voltage difference \Delta V_0 at the beginning of the regeneration (i.e. at t = t_0), which can be calculated from

\Delta V_0 = |V_{outp}(t_0) - V_{outn}(t_0)| = |V_{thp}| - \frac{I_1 t_0}{C_L} = |V_{thp}|\left(1 - \frac{I_1}{I_2}\right)    (3)

The current difference \Delta I_{in} = |I_2 - I_1| between the branches is much smaller than I_1 and I_2; thus I_1 can be approximated by I_{tail}/2 and (3) can be written as

\Delta V_0 \approx |V_{thp}|\,\frac{\Delta I_{in}}{I_{tail}/2} = \frac{2\,|V_{thp}|\,g_{m1,2}\,\Delta V_{in}}{I_{tail}} = 2\,|V_{thp}|\,\Delta V_{in}\sqrt{\frac{\beta_{1,2}}{I_{tail}}}    (4)

In this equation \beta_{1,2} is the current factor of the input transistors, and I_{tail} is a function of the input common-mode voltage (V_{cm}) and V_{DD}. Substituting \Delta V_0 into the latch delay expression and adding t_0, the delay of the comparator is

t_{delay} = t_0 + t_{latch} = \frac{2\,C_L\,|V_{thp}|}{I_{tail}} + \frac{C_L}{g_{m,eff}} \ln\frac{V_{DD}/2}{2\,|V_{thp}|\,\Delta V_{in}\sqrt{\beta_{1,2}/I_{tail}}}    (5)
The total delay is directly proportional to the comparator load capacitance C_L and inversely proportional to the input voltage difference \Delta V_{in}; in addition, the delay depends indirectly on the input common-mode voltage V_{cm}. By reducing V_{cm}, the delay t_0 of the first sensing phase increases, because a lower V_{cm} causes a smaller bias current (I_{tail}); on the other hand, discharging with a smaller tail current results in an increased initial voltage difference (\Delta V_0), which reduces t_{latch}.
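
A small numerical sketch of Eq. (5) follows; all device values in the example call are illustrative assumptions and are not taken from the paper's 0.25 µm design:

    import math

    def comparator_delay(CL, Vthp, Itail, gm_eff, VDD, dVin, beta12):
        t0 = 2.0 * CL * abs(Vthp) / Itail                          # Eq. (1)
        dV0 = 2.0 * abs(Vthp) * dVin * math.sqrt(beta12 / Itail)   # Eq. (4)
        tlatch = (CL / gm_eff) * math.log((VDD / 2.0) / dV0)       # Eq. (2)
        return t0 + tlatch                                         # Eq. (5)

    # e.g. CL = 50 fF, |Vthp| = 0.5 V, Itail = 50 uA, gm_eff = 200 uS,
    # VDD = 1.5 V, dVin = 10 mV, beta = 0.5 mA/V^2 -> about 1.8 ns; a larger
    # dVin shrinks the logarithm, reducing the latch delay.
    print(comparator_delay(50e-15, 0.5, 50e-6, 200e-6, 1.5, 10e-3, 0.5e-3))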

Fig. 2.1. Schematic diagram of the single-tail dynamic comparator.

3. Conventional Double-Tail Dynamic Comparator

A conventional double-tail comparator is shown in Fig. 3.1. It can operate at lower supply voltages than the single-tail comparator. The double tail enables both a large current in the latching stage and a wider Mtail2, for fast latching independent of the input common-mode voltage (Vcm), together with a small current in the input stage (small Mtail1) for low offset; the current drawn between the input stage and ground is set by the tail transistor. The intermediate-stage transistors switch when a voltage drop occurs at the nodes fp and fn.

3.1. Operation

Fig. 3.1 shows the operation of the conventional double-tail dynamic comparator. The intermediate stage formed by MR1 and MR2 passes fn/fp to the cross-coupled inverters and provides good shielding between input and output, resulting in a reduced value of kickback noise.


Fig. 3.1. Schematic diagram of the conventional double-tail dynamic comparator.
4. Modified Existing Dynamic Comparator (Main Idea)

Fig. 4.1 shows the operation of the modified existing comparator (main idea). It gives better performance in low-voltage applications and is designed on the basis of the double-tail structure. The latch regeneration speed is increased by increasing the voltage difference between fn and fp. Two control transistors (MC1 and MC2) have been added to the first stage in parallel with the M3/M4 transistors, but cross-coupled, for the purpose of increasing speed.

4.1. Operation

During the reset phase (clk = 0), the tail transistors Mtail1 and Mtail2 are off, avoiding static power; M3 and M4 pull both fn and fp up to VDD, so the control transistors MC1 and MC2 are in cutoff. The intermediate transistors MR1 and MR2 reset both latch outputs to ground. During the comparison phase (clk = VDD), Mtail1 and Mtail2 turn on and transistors M3 and M4 turn off. At the beginning of this phase the control transistors are still off, so the output nodes fp and fn start to drop at different rates according to the input voltages. If VINP > VINN, fp discharges faster than fn because transistor M2 provides more current than transistor M1. As fn continues falling, the corresponding PMOS control transistor (MC1) starts to turn on, pulling the fp node back up to VDD, while the other control transistor (MC2) remains off, allowing fn to be discharged completely. When one of the control transistors turns on, a current is drawn from VDD to ground via the input and tail transistors, resulting in static power consumption. To overcome this issue, two NMOS switches are placed below the input transistors, as shown in Fig. 4.2. During the decision-making phase (clk = VDD, Mtail1 and Mtail2 on), both NMOS switches are initially closed. The output nodes fn and fp start to discharge at different rates; the comparator detects the faster-discharging node, and the control transistors increase the voltage difference between the nodes. Suppose fp is pulled up to VDD and fn should discharge completely: the switch in the charging path of fp is then opened, while the switch connected to fn remains closed to allow the complete discharge of the fn node. The operation of the control transistors together with the switches emulates the operation of a latch.

Fig. 4.1. Schematic diagram of the modified existing dynamic comparator (main idea).

Fig. 4.2. Schematic diagram of the modified existing dynamic comparator (final structure).
5. Proposed Double-Tail Dynamic Comparator

Fig. 5.1 shows the schematic diagram of the proposed comparator, which is designed on the basis of the existing comparator and gives better double-tail performance in low-voltage applications. The drawback of the existing comparator is as follows: the nodes fn and fp start to drop at different rates according to the input voltages; as fn keeps falling, the corresponding transistor MC1 starts to turn on and pulls the fp node back to VDD, while node fn is discharged completely (MC2 off). When one of the control transistors (MC1) turns on, a current is drawn from VDD to ground via the input and tail transistors, resulting in static power consumption. For this reason two switching transistors (Msw3 and Msw4) have been added in series with Msw1 and Msw2. The proposed comparator reduces the delay, area and power.

5.1. Operation

The operation of the proposed comparator in both the reset and comparison phases is similar to that of the existing comparator. At the beginning of the decision-making phase, both the fn and fp nodes have been precharged to VDD; the switches are initially closed, and fn and fp start to drop at different discharging rates. As soon as the comparator detects that one of the fn/fp nodes is discharging faster, the control transistors act in a way that increases their voltage difference. If fp is pulled up to VDD, fn should be discharged completely; hence the switch in the charging path of fp is opened, but the other switch, connected to fn, is closed to allow the complete discharge of the fn node. The operation of the control transistors with the switches emulates the operation of the latch.


Fig. 5.1. Schematic diagram of the proposed dynamic comparator.
6. Results

Fig. 6.1. Conventional single-tail comparator simulation results.

Fig. 6.2. Conventional double-tail comparator simulation results.

Fig. 6.3. Modified existing comparator (main idea) simulation results.

Fig. 6.4. Modified existing comparator (final structure) simulation results.

Fig. 6.5. Proposed comparator (final structure) simulation results.

6.6. Performance Comparison Table

Comparator Structure   Conventional Dynamic   Double-Tail Dynamic   Existing Dynamic   Proposed
                       Comparator             Comparator            Comparator         Comparator
Technology (CMOS)      250 nm                 250 nm                250 nm             250 nm
Supply Voltage         5 V                    1.5 V                 1.5 V              1.5 V
Delay                  25 ns                  1.1 ns                12.22 ns           11.34 ns
Average Power          1.9 mW                 10 µW                 16 µW              15 µW

7. Layout design of the existing comparator

8. Layout design of the proposed comparator



9. CONCLUSION

In this paper we presented a comprehensive delay analysis for the conventional dynamic comparator and derived delay expressions. A new double-tail dynamic comparator with two NMOS switches was proposed in order to improve the performance of the comparator with no static power consumption. Pre-layout simulation results in 0.25 µm CMOS technology confirmed that the delay and the energy per conversion of the proposed comparator are reduced.

REFERENCES:
[1] B. Goll and H. Zimmermann, "A comparator with reduced delay time in 65-nm CMOS for supply voltages down to 0.65 V," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 11, pp. 810-814, Nov. 2009.
[2] S. U. Ay, "A sub-1 volt 10-bit supply boosted SAR ADC design in standard CMOS," Int. J. Analog Integr. Circuits Signal Process., vol. 66, no. 2, pp. 213-221, Feb. 2011.
[3] A. Mesgarani, M. N. Alam, F. Z. Nelson, and S. U. Ay, "Supply boosting technique for designing very low-voltage mixed-signal circuits in standard CMOS," in Proc. IEEE Int. Midwest Symp. Circuits Syst. Dig. Tech. Papers, Aug. 2010, pp. 893-896.
[4] B. J. Blalock, "Body-driving as a low-voltage analog design technique for CMOS technology," in Proc. IEEE Southwest Symp. Mixed-Signal Design, Feb. 2000, pp. 113-118.
[5] M. Maymandi-Nejad and M. Sachdev, "1-bit quantiser with rail to rail input range for sub-1V delta-sigma modulators," IEEE Electron. Lett., vol. 39, no. 12, pp. 894-895, Jan. 2003.
[6] Y. Okaniwa, H. Tamura, M. Kibune, D. Yamazaki, T.-S. Cheung, J. Ogawa, N. Tzartzanis, W. W. Walker, and T. Kuroda, "A 40-Gb/s CMOS clocked comparator with bandwidth modulation technique," IEEE J. Solid-State Circuits, vol. 40, no. 8, pp. 1680-1687, Aug. 2005.
[7] B. Goll and H. Zimmermann, "A 0.12 µm CMOS comparator requiring 0.5 V at 600 MHz and 1.5 V at 6 GHz," in Proc. IEEE Int. Solid-State Circuits Conf., Dig. Tech. Papers, Feb. 2007, pp. 316-317.
[8] B. Goll and H. Zimmermann, "A 65 nm CMOS comparator with modified latch to achieve 7 GHz/1.3 mW at 1.2 V and 700 MHz/47 µW at 0.6 V," in Proc. IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, Feb. 2009, pp. 328-329.
[9] B. Goll and H. Zimmermann, "Low-power 600 MHz comparator for 0.5 V supply voltage in 0.12 µm CMOS," IEEE Electron. Lett., vol. 43, no. 7, pp. 388-390, Mar. 2007.
[10] D. Shinkel, E. Mensink, E. Klumperink, E. van Tuijl, and B. Nauta, "A double-tail latch-type voltage sense amplifier with 18 ps setup+hold time," in Proc. IEEE Int. Solid-State Circuits Conf., Dig. Tech. Papers, Feb. 2007, pp. 314-315.
[11] P. Nuzzo, F. D. Bernardinis, P. Terreni, and G. Van der Plas, "Noise analysis of regenerative comparators for reconfigurable ADC architectures," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 55, no. 6, pp. 1441-1454, Jul. 2008.
[12] A. Nikoozadeh and B. Murmann, "An analysis of latched comparator offset due to load capacitor mismatch," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 53, no. 12, pp. 1398-1402, Dec. 2006.
[13] S. Babayan-Mashhadi and R. Lotfi, "An offset cancellation technique for comparators using body-voltage trimming," Int. J. Analog Integr. Circuits Signal Process., vol. 73, no. 3, pp. 673-682, Dec. 2012.
[14] J. He, S. Zhan, D. Chen, and R. J. Geiger, "Analyses of static and dynamic random offset voltages in dynamic comparators," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 5, pp. 911-919, May 2009.
[15] J. Kim, B. S. Leibowitz, J. Ren, and C. J. Madden, "Simulation and analysis of random decision errors in clocked comparators," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 8, pp. 1844-1857, Aug. 2009.
[16] P. M. Figueiredo and J. C. Vital, "Kickback noise reduction technique for CMOS latched comparators," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 53, no. 7, pp. 541-545, Jul. 2006.
[17] B. Wicht, T. Nirschl, and D. Schmitt-Landsiedel, "Yield and speed optimization of a latch-type voltage sense amplifier," IEEE J. Solid-State Circuits, vol. 39, no. 7, pp. 1148-1158, Jul. 2004.
[18] D. Johns and K. Martin, Analog Integrated Circuit Design, New York, USA: Wiley, 1997.

A Study of Current Scenario of Cyber Security Practices and Measures:
Literature Review
Rajesh Mohan More [1], Dr. Ajay Kumar [2]

E-Mail: more.rajeshmore@gmail.com
Abstract: Security measures are of prime importance in ensuring the safety and reliability of organizations. Hacking of data and information has become almost a routine occurrence for organizations. Before we attempt to combat such a situation, so as to avoid both predictable and unpredictable loss and the associated danger and risk, tangible and intangible, we have to strategize, keep cool in the heat of battle and find the causes contributing to it, so that proactive action can be taken to exterminate them. The researchers therefore examine parameters that give an in-depth insight, such as the integrity of network connections and components, telecommunication issues, firewalls, filtering, intrusion detection and prevention systems, and network maintenance. These are in fact intra- and interrelated.
Keywords: intellectual property, computer security, security risks, vulnerability, antivirus, encryption, cyber terrorism, auditing, reviewing, intrusion detection systems and intrusion prevention systems.
INTRODUCTION
In today's information age, an organization's dependence on cyberspace is becoming an increasingly important aspect of organizational security. As the infrastructures of different organizations are interconnected in cyberspace, the level of risk to national security has increased dramatically. The threat to cyber security is growing. Computer systems at colleges and universities have become favored targets because they store the same kinds of records as banks. In academic institutes, malicious software (malware), phishing, infrastructure attacks, social network targeting and peer-to-peer (P2P) information leakage are daily issues. Most universities' financial, administrative and employment-related records, library records, and certain research and other intellectual-property-related records are accessible through a campus network, and hence they are vulnerable to security breaches that may expose the institute to losses and other risks.
CYBER SECURITY ATTACKS
A cyber attack refers to the use of deliberate actions, perhaps over an extended period of time, to alter, disrupt, deceive, degrade or destroy adversary computer systems or networks, or the information and/or programs resident in or transiting these systems or networks. Such effects on adversary systems may also have indirect effects on entities coupled to or reliant on them. A cyber attack seeks to make adversary computer systems and networks unavailable or untrustworthy and therefore less useful to the adversary.
The Ponemon Institute presents a cyber crime study based on a sample of 56 organizations in various industry sectors in the United States. Table 1 shows the statistics of the different types of cyber attacks that occurred in 2012 and 2013.
Types of Cyber Attacks          2012    2013
Viruses, worms, trojans         100%    100%
Malware                         95%     97%
Botnets                         71%     73%
Web-based attacks               64%     63%
Stolen devices                  46%     50%
Malicious code                  38%     48%
Malicious insiders              38%     42%
Phishing & social engineering   38%     42%
Denial of service               32%     33%

Table 1. Types of cyber security attacks on organizations
The findings of a report released last year by the Center for Strategic and International Studies (CSIS), "In the Crossfire: Critical Infrastructure in the Age of Cyber War," are based on a survey of 600 IT security managers from critical infrastructure organizations. The report found that 37% believed the vulnerability of their sector had increased over the prior year, and two-fifths expected a significant security incident in their sector in the next year. Only one-fifth of respondents believed their sector to be safe from serious cyber attack in the next five years [7]. Around 10% to 20% of the 100+ incidents recorded to date in BCIT's Industrial Security Incident Database (ISID) have been targeted attacks. The knowledgeable insider is the biggest threat, and played a part in a high-profile case in Queensland, Australia, in February 2000, when a disgruntled employee of a water-utility contractor gained remote access to the utility's control system and managed to release over one million liters of sewage into local waterways [8].
Chi-Chao Lu [18] explores the increasing number of cybercrime cases in Taiwan and examines the demographic characteristics of those responsible for this criminal activity. According to the statistics, 81.1% were male; 45.5% had some senior high school education; 63.1% acted independently; 23.7% were currently enrolled students; and 29.1% were in the 18-23 age bracket, which was the majority group. Among the enrolled-student cybercrime suspects, junior and senior high school students constituted 69.0% (2002), 76.1% (2003) and 62.7% (2004) of cybercrime suspects in their respective years. This high rate shows that the number of currently enrolled students suspected of involvement in cybercrime is cause for concern.
In a survey of 100 UK organizations conducted by Activity Information Management, 83% of respondents believed that they were under increasing risk of cyber attack [Figure 1]. It was also found that the financial and IT sectors have sufficient investment in cyber security compared with sectors like central government, telecoms and academia.

Figure 1. Percentage of risks due to cyber security attacks
The Ponemon Institute conducted a survey in June 2011 to study how well organizations were responding to threats against network security. The survey found that organizations are experiencing multiple successful attacks against their networks [Figure 2]: 59% of respondents said that their organization's network security had been successfully breached at least twice over the past year. According to the findings, the average cost of one data breach for the U.S. organizations participating in the study was $7.2 million, whereas the average cost of one cyber attack was $6.4 million [15].
[Figure 1 data: Disruption to Service 40%; Damage to Corporate Reputation 39%; Staff adhering to Security Policies 31%; Emerging Threats 23%; Financial Loss being seen as less important 19%; Budget 13%; Lack of Expertise 13%]

Figure 14 Distribution of different types of attacks
The Deloitte-NASCIO Cybersecurity Study, conducted in 2010, focuses on the initiatives chosen by organizations [Figure 3]. Regarding deployment, or planned deployment, of a variety of security technologies, it found that more than 80% of agencies have fully deployed antivirus, firewall, and Intrusion Detection and/or Prevention Systems (IDS/IPS), while 25% of respondents indicated that they expected to pilot mobile device file encryption, vulnerability management, and data loss prevention technologies [16].

Figure 15 Organizations' initiatives for security
According to the CSI Computer Crime and Security Survey, conducted in 2008 among about 522 computer security
practitioners in U.S. corporations, government agencies, financial institutions, medical institutions and universities, 53% of
organizations allocated only 5% or less of their overall IT budget to information security. Figure 1 shows the
percentage of security technologies used by the organizations; it is clearly seen that anti-virus software, firewalls, virtual private
networks (VPN) and anti-spyware software are the most widely used [17].

Figure 16 Security technologies used by the organizations
[Source: 2008 CSI Computer Crime and Security Survey]
The report of the Computer Security Institute (CSI/FBI) (Gordon, Martin, William, & Richardson, 2004) states that nearly
66% of all cyber-security breach incidents in the 280 organizations who responded to the survey were conducted from inside the
organization by authorized users. Additionally, an overwhelming 72% of organizations reported that they have no insurance policy to
help them manage cyber-security risks [12].
Studies by the Computer Security Institute and the Federal Bureau of Investigation reported that approximately 90% of
respondent organizations in 2001 and 2002 detected computer security breaches [13]. These studies found that losses averaged
over 2 million dollars per organization. In contrast, it was found that companies spend only 0.047% of their revenues on security [14],
which indicates that many firms are not investing adequately in information security.
Cyber terrorism involves leveraging cyberspace as a primary weapon to generate political or social change. It is important to
recognize that cyber-terrorism is a tactic that can be used to achieve broader strategic objectives. Jeffrey R. DiBiasi [2], in his study
Cyberterrorism: Cyber Prevention vs Cyber Recovery, analyzes the vulnerability of cyberspace to terrorist attacks.
The first analysis examines the Code Red and Slammer worms, which were highly destructive and spread faster than normal worms,
making them well suited for assessing the existing security of computers and networks. It also examines a staged cyber attack on
critical infrastructure, entitled Attack Aurora, in which researchers from the Department of Energy's Idaho lab hacked into
a replica of a power plant's control system. This attack facilitates an analysis of the vulnerabilities of critical infrastructures to cyber
terrorism.

CYBER SECURITY PRACTICES
Every business relies on information; computers are used to store information, process it and generate reports.
Information system assets can be classified as high, moderate, or low impact with respect to the importance of maintaining their
confidentiality, integrity, and availability [14]. Computer networks may be responsible for many crucial and back-office operations,
so it is necessary to secure these systems and their data.
According to the study conducted by Steffani A. Burd [14], the following are the statistics of the methodologies used
by organizations to protect sensitive information [Table-2].
Security Methods Used Organizations
Firewalls 94%
Role-based Access 86%
Physical Separation 83%
Encrypt Data on HD 69%
Identity Management 69%
Encrypt Backup Data 63%
Monitor Use of Backup Media 36%
Table-2 Security Methods Used
Advanced perimeter controls and firewall technologies, encryption technologies, security intelligence systems, access
governance tools, extensive use of data loss prevention tools, enterprise deployment of GRC tools and automated policy management
tools were the various tools used by these organizations. Table-3 shows the statistics of these tools as used by organizations in this survey.
Security Technologies 2012 2013
Advanced perimeter controls and firewall technologies 58% 52%
Encryption technologies 50% 48%
Security intelligence systems 47% 45%
Access governance tools 42% 41%
Extensive use of data loss prevention tools 38% 41%
Enterprise deployment of GRC tools 37% 39%
Automated policy management tools 35% 36%
Table-3 Security Technologies Used
Shannon Keller et al. [19], in their paper Information security threats and best practices in small business, suggest best
security practices such as: install and properly configure a firewall, update software, protect against viruses/worms/trojans, implement a
strong password policy, implement physical security measures to protect computer assets, implement company policy and training,
connect remote users securely, lock down servers and implement identity services (intrusion detection).
Society's collective security depends on every user being security-aware and exhibiting thoughtful discipline over his or her
personal information and computing resources. Security experts have long recognized that the user is in fact the weakest link
in the security chain, and that technical measures alone cannot and will not solve current cyber security threats [6]. The impact of any
given breach can be reduced if there is an adequate audit trail of application activity and a skilled responder who can assist the
application team in forensics and root-cause analysis [5].
According to Jeffrey R. DiBiasi [2], advanced security procedures and security checklists need to be revised, and in some
cases created, to reflect the most current procedures for preventing and recovering from cyber attacks; they offer a significant second layer of
defense, protecting critical devices from external or internal attack. Cyber crimes like cyber squatting, internet banking frauds,
threatening emails and fraud emails show that there is a real need to study and analyze the loopholes in the current infrastructure. Though
cyber law has been enacted, much more needs to be done in this area.
Dr Rose Shumba [9] focuses on the identification of currently used practices for computer security, evaluates those
practices, and reveals the necessity of a public awareness and education program on the importance and relevance of computer security,
based on a survey conducted among 350 multi-disciplinary IUP (Indiana University of Pennsylvania) students. To protect vital
information, companies must set up a sound security system before the network is intruded; this involves identifying the
security risks, applying sufficient means of security, and teaching users data security awareness.
Technology can represent a powerful complement to an organization's networking capabilities. To minimize security
risks, system administrators can implement a range of measures, including security policies and practices [10]. Since security has
technology, organizational, and critical infrastructure elements, senior management awareness and commitment are required to develop
a control environment that balances the costs and benefits of security controls, keeping in mind the level of risk faced by the
organization [11].
Dorothy E. Denning [3], in the paper An Intrusion-Detection Model, describes a model of a real-time intrusion-detection
expert system capable of detecting break-ins, penetrations, and other forms of computer abuse. The model is based on the hypothesis that
security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes
profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for
acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any
particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-
purpose intrusion-detection expert system. It is capable of detecting a wide range of intrusions related to attempted break-ins,
masquerading (successful break-ins), system penetrations, Trojan horses, viruses, leakage and other abuses by legitimate users, and
certain covert channels.
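To make the profile idea concrete, the following is a minimal sketch (not Denning's actual formulation) of a statistical anomaly test over audit-record counts: a per-subject profile stores the mean and standard deviation of an activity metric, and an observation is flagged when it deviates by more than a chosen number of standard deviations. All names and the threshold are illustrative.

```python
from statistics import mean, stdev

def build_profile(samples):
    """Build a per-subject activity profile (mean and standard deviation)
    from historical audit measurements, e.g. file accesses per session."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(profile, observation, threshold=3.0):
    """Flag an observation that deviates from the profile mean by more
    than `threshold` standard deviations (a simplified abnormality test)."""
    if profile["stdev"] == 0:
        return observation != profile["mean"]
    z = abs(observation - profile["mean"]) / profile["stdev"]
    return z > threshold

# A user's historical file-access counts per session
history = [12, 15, 11, 14, 13, 12, 16, 14]
profile = build_profile(history)
print(is_anomalous(profile, 15))  # False: within the normal range
print(is_anomalous(profile, 90))  # True: abnormal usage pattern
```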
Alexandr Seleznyov and Seppo Puuronen [4], in Anomaly Intrusion Detection Systems: Handling Temporal Relations between
Events, discuss a temporal knowledge representation of user behavior that is used by data mining tools to construct behavior
patterns. These are used to decide whether current behavior follows a certain normal pattern or differs from all known user behavior
patterns. Networked computing offers almost unlimited possibilities and opportunities for intrusions. The paper presents
an information representation method for an intrusion detection system that uses data mining techniques to detect anomalous behavior.
The main assumption is that user behavior follows regularities that may be discovered and represented
for the recognition of the user. The approach suggests that where the audit trail information is inconsistent, it is possible to
expose this using a temporal interval representation applying temporal algebra.
Akaninyene Walter Udoeyop [1], in the study Cyber Profiling for Insider Threat Detection, introduces a method to detect
abnormal behavior by profiling users: algorithms learn a user's normal behavior and establish normal user profiles based on behavioral
data, and user behavior is then compared against the normal profiles to identify abnormal patterns of behavior. These
results help identify abnormal behavior by monitoring user activity. Prevention, detection and counterattack, to
ensure the safety and security of information, are not only essential but indispensable. Investigating in this regard, weighing
cause and effect, and researching new methods of detection, policies of prevention and counterattack are the need of the hour.
CYBER SECURITY MEASURES
Protecting private or confidential information and data from the outside world is a necessity for every organization. Security measures
include password protection, software updates, firewalls and malware protection, as well as authentication, authorization, auditing,
reviewing, vulnerability assessment and storage encryption.
In a survey conducted by Steffani A. Burd [14], it was found that the following assessment methods [Table-4] and evaluation
techniques [Table-5] were used by organizations as security measures to protect their sensitive information.


Assessments Methods Organizations
Vulnerability assessment 56%
Audit 51%
Risk assessment 39%
Penetration testing 36%
Application-level testing 33%
Information asset classification 25%
Table-4 Assessment Methods Used
Evaluation Techniques Organizations
Network traffic flow reports 75%
Help desk calls (volume/type) 74%
Firewall logs 71%
Incidents (volume/type) 64%
IDS logs 58%
Web activity monitoring software 39%
Bot monitoring 33%
Email activity monitoring software 31%
IPS logs 19%
Table-5 Evaluation Techniques Used
To exchange information effectively within an organization's network or with the outside world, it is necessary to follow certain
parameters and instructions. It is obligatory to give proper permissions to each of the employees or users of the network. To improve
the integrity of cyber security infrastructure components such as routers, switches, servers and workstations, security
standards are needed. Network connections need to be protected from unauthorized access. Periodic checks of firewalls should be made to verify
that the rule sets are up to the required security level. Security logs for intrusion detection systems and intrusion prevention systems
should be consistently reviewed and monitored for abnormal patterns of activity. A web filter is used to protect the information being
transferred into or out of the organization. Security requirements should be defined for portable devices such as USB drives, portable hard disks,
iPods, mobiles and digital cameras that could be connected to the network. To maintain the network efficiently, documentation of
topology diagrams of the organization's network, along with a geographical map showing the exact location of network cables, should be
maintained so that all the connection routes can be traced.
CONCLUSION
Throughout the literature review it has been found that, by and large, many organizations are not following cyber security
practices. It is concluded that, irrespective of the industry segment, there is a need to conduct research to find a comprehensive
approach to protect sensitive data and take appropriate action.
REFERENCES:
1. Akaninyene Walter Udoeyop, Cyber Profiling for Insider Threat Detection, August 2010.
2. Jeffrey R. DiBiasi, Cyberterrorism: Cyber Prevention vs Cyber Recovery, December 2007.
3. Dorothy E. Denning, An Intrusion-Detection Model, IEEE Transactions on Software Engineering, Vol. SE-13, No. 2, February 1987, pp. 222-232.
4. Alexandr Seleznyov, Seppo Puuronen, Anomaly Intrusion Detection Systems: Handling Temporal Relations between Events.
5. Cory Scott, Moving Forward, Information Security Magazine, Vol. 11, No. 9, October 2009, pp. 32-38.
6. Douglas Jacobson, Security Literacy: Tackling Modern Threats Requires Educating the General Public about Cyber Security, Information Security Magazine, Vol. 13, No. 9, October 2011, pp. 23-24.
7. George V. Hulme, SCADA Insecurity: Stuxnet Put the Spotlight on Critical Infrastructure Protection, but Will Efforts to Improve It Come Too Late?, Information Security Magazine, Vol. 13, No. 1, February 2011, pp. 38-44.
8. Paul Marsh, Controlling Threats, IET Computing & Control Engineering, April/May 2006, pp. 12-17.
9. Rose Shumba, Home Computer Security Awareness, Computer Science Department, Indiana University of Pennsylvania.
10. Amitava Dutta and Kevin McCrohan, Management's Role in Information Security in a Cyber Economy, California Management Review, Vol. 45, No. 1, 2002.
11. Michael Naf and David Basin, Two Approaches to an Information Security Laboratory, Communications of the ACM, Vol. 51, No. 12, December 2008.
12. Power, R., Computer Security Issues & Trends, 2002 CSI/FBI Computer Crime and Security Survey, CSI, Vol. VIII, No. 1.
13. Geer, D., Soo Hoo, K. J., Jaquith, A., Information Security: Why the Future Belongs to Quants, IEEE Security and Privacy, 2003, pp. 32-40.
14. Steffani A. Burd, The Impact of Information Security in Academic Institutions on Public Safety and Security: Assessing the Impact and Developing Solutions for Policy and Practice, October 2006.
15. Perceptions about Network Security, Ponemon Institute, Research Report, June 2011.
16. State Governments at Risk: A Call to Secure Citizen Data and Inspire Public Trust, The Deloitte-NASCIO Cybersecurity Study, 2010.
17. Robert Richardson, CSI Computer Crime & Security Survey, 2008.
18. ChiChao Lu et al., Cybercrime & Cybercriminals: An Overview of the Taiwan Experience, Journal of Computers, Vol. 1, No. 6, September 2006.
19. Shannon Keller et al., Information Security Threats and Best Practices in Small Business, Information Systems Management, Spring 2005, 22(2), ABI/INFORM Global.

In Advance, Highlight Affected Hilly Areas Network Roads by Sun Glare
Mohd. I.M. Salah¹
¹Senior Engineer, Department of Lands and Survey, Amman, Jordan
E-Mail- Attamary@hotmail.com
Abstract: This study aims to display, in advance or in real time, the roads affected by sun glare when traveling on the
3D national transportation network, and to present them on digital road or navigation map layers. This is done by calculating the directions and
azimuths of the roads and the Sun position and testing whether they are close enough, or almost parallel, in a 3D model, in which case the sun glare, direct or
reflected, strikes the driver's eyes.
The study applies to cities built on hilly or mountainous areas with no absolutely flat land, with a moderately varied
topographic surface, crowded with people and vehicles, where the roads are constructed through valleys, foothills and hilltops.
Keywords: Transportation networks, 3D, DTM, GIS, Sun position, Azimuth angle, Elevation angle, accuracy.
1. INTRODUCTION
Designing and analyzing 3D transportation networks helps decision makers in strategic management, road design and
raising road service levels. Displaying the affected roads will improve the 3D transportation network and achieve several
goals: better design of the geographic layers, increased driving performance, and protection from sun glare, which reduces traffic
accidents and eye harm.
Previous research [1] [2] was not concerned with the design and calculation of 3D transportation data, ignored the
topographic surface, and did not work in advance; instead it was interested in the effect of sun glare on traffic accident analysis. By
inspection on Google Earth, the study area of Chiba Prefecture, Japan [1] is not an absolutely hilly area and has no large changes in
topographic elevation. The travel of the Earth around the Sun and its rotation around a tilted axis change
the Sun's position in the sky; the sunrise and sunset directions on Earth vary with time, date and the observer's geographic
location [3] [4].
Modern GIS software and hardware were employed in designing and analyzing
the 3D transportation network obtained from the digital cadastral maps and digital elevation models owned by the Department of
Lands and Surveys (DLS) for the pilot area; this indicates the usefulness of the legal cadastral maps of the DLS warehouse in many fields of GIS
systems.
2. SUN POSITION CALCULATIONS
2.1 GENERAL EQUATION
As shown in Figure 1, the solar position is represented by the solar zenith angle (θ) and the solar azimuthal angle (φA). The solar zenith angle is
measured from a vertical line to the direction of the sun, and the solar azimuthal angle is measured from the North Pole direction to
the direction of the sun [1].

Figure 1. Solar position
Formulae for calculating the solar zenith angle and azimuthal angle are explained here; this method was presented by Murakami, T.
(2010) [5]. Solar declination (δ) and hour angle (t) are indices representing the solar position on the celestial sphere. The north
celestial pole is expressed as +90 degrees and the south celestial pole as -90 degrees in solar declination (δ). Hour angle
(t) is the angle between the celestial meridian and the great circle that includes the solar position and both celestial poles. The position of
the sun relative to the earth is estimated by calculating the solar declination (δ) and hour angle (t). The solar zenith angle (θ) is
determined by latitude and longitude; it is calculated from the solar declination (δ), the hour angle (t) and the latitude and longitude at the
accident spot, as shown in Formula (1). The solar azimuthal angle (φA) at the accident spot is also determined by latitude and longitude,
so as a first step in calculating it, the sine and cosine of φA are calculated using the solar declination (δ), hour angle (t) and solar zenith
angle (θ) from Formulae (8) and (9).

cos θ = sin φ · sin δ + cos φ · cos δ · cos t   (1)

Where
θ: solar zenith angle (rad)
δ: solar declination (rad)
φ: latitude (deg., at accident spot)
λ: longitude (deg., at accident spot)
t: hour angle (deg.)

δ = 0.006918 - 0.399912 cos A + 0.070257 sin A - 0.006758 cos 2A + 0.000907 sin 2A - 0.002697 cos 3A + 0.00148 sin 3A   (2)

A = 2πJ / 365   (3)

Where
J: day of the year (Julian day)

t = 15 × (TST - 12) × π/180   (4)
TST = MST + ET   (5)
MST = GMT + λ/15   (6)
ET = (0.000075 + 0.001868 cos A - 0.032077 sin A - 0.014615 cos 2A - 0.040849 sin 2A) × 12/π   (7)

Where
TST: true solar time; MST: mean solar time (local time); GMT: Greenwich Mean Time; ET: equation of time

sin φA = -cos δ · sin t / sin θ   (8)
cos φA = (cos φ · sin δ - sin φ · cos δ · cos t) / sin θ   (9)

The calculated values of sin φA and cos φA are classified into cases according to their signs, which place the candidate angles φ1 and φ2 in the correct quadrant of the full circle (equivalently, φA = atan2(sin φA, cos φA), normalized to the range 0 to 2π).
If φ1 is less than 2π, then the calculated φ1 is the solar azimuthal angle. If the calculated φ1 is more than 2π, then the solar azimuthal
angle is φ1 - 2π, which is equal to φ2. Therefore, either of the calculated φ1 or φ2 is the solar azimuthal angle.
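The chain of formulas (1)-(9) can be sketched in a few lines of Python. This is an illustrative implementation, not the paper's code; it assumes east longitude is positive, the hour angle is negative before solar noon, and azimuth is measured clockwise from north. For the AOI of this study it reproduces the Table 1 values closely.

```python
import math

def solar_position(lat_deg, lon_deg, day_of_year, gmt_hours):
    """Solar elevation and azimuth (degrees) following formulas (1)-(9)."""
    A = 2 * math.pi * day_of_year / 365                      # (3)
    # Solar declination (2)
    delta = (0.006918 - 0.399912 * math.cos(A) + 0.070257 * math.sin(A)
             - 0.006758 * math.cos(2 * A) + 0.000907 * math.sin(2 * A)
             - 0.002697 * math.cos(3 * A) + 0.00148 * math.sin(3 * A))
    # Equation of time in hours (7)
    et = (0.000075 + 0.001868 * math.cos(A) - 0.032077 * math.sin(A)
          - 0.014615 * math.cos(2 * A)
          - 0.040849 * math.sin(2 * A)) * 12 / math.pi
    mst = gmt_hours + lon_deg / 15                           # (6)
    tst = mst + et                                           # (5)
    t = math.radians(15 * (tst - 12))                        # hour angle (4)
    phi = math.radians(lat_deg)
    # Solar zenith angle (1)
    theta = math.acos(math.sin(phi) * math.sin(delta)
                      + math.cos(phi) * math.cos(delta) * math.cos(t))
    # Azimuth from (8) and (9); atan2 resolves the quadrant cases
    sin_az = -math.cos(delta) * math.sin(t) / math.sin(theta)
    cos_az = (math.cos(phi) * math.sin(delta)
              - math.sin(phi) * math.cos(delta) * math.cos(t)) / math.sin(theta)
    azimuth = math.degrees(math.atan2(sin_az, cos_az)) % 360
    return 90 - math.degrees(theta), azimuth  # (elevation, azimuth)

# AOI centre (32 N, 36 E), 1 Aug 2014 (day 213), 10:00 local = 07:00 GMT:
# roughly 51 degrees elevation, 101 degrees azimuth, matching Table 1.
print(solar_position(32.0, 36.0, 213, 7.0))
```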
2.2 NOAA SOLAR CALCULATOR[6]
2.2.1 GENERAL
The calculations in the NOAA Sunrise/Sunset and Solar Position Calculators are based on equations from Astronomical Algorithms,
by Jean Meeus [7]. The sunrise and sunset results are theoretically accurate to within a minute for locations between ±72° latitude,
and within 10 minutes outside of those latitudes. However, due to variations in atmospheric composition, temperature, pressure and
conditions, observed values may vary from calculations.
The following spreadsheets can be used to calculate solar data for a day or a year at a specified site. They are available in Microsoft
Excel and Open Office format. Please note that calculations in the spreadsheets are only valid for dates between 1901 and 2099, due to
an approximation used in the Julian Day calculation. The web calculator does not use this approximation, and can report values
between the years -2000 and +3000.
Day NOAA_Solar_Calculations_day.xls
Year NOAA_Solar_Calculations_year.xls
2.2.2 DATA FOR LITIGATION
The NOAA Solar Calculator is for research and recreational use only. NOAA cannot certify or authenticate sunrise, sunset or solar
position data. The U.S. Government does not collect observations of astronomical data, and due to atmospheric conditions our
calculated results may vary significantly from actual observed values.
2.2.3 HISTORICAL DATES
For the purposes of these calculators the current Gregorian calendar is extrapolated backward through time. When using a date before
15 October, 1582, you will need to correct for this.
The year preceding year 1 in the calendar is year zero (0). The year before that is -1.
The approximations used in these programs are very good for years between 1800 and 2100. Results should still be sufficiently
accurate for the range from -1000 to 3000. Outside of this range, results may be given, but the potential for error is higher.
2.2.4 ATMOSPHERIC REFRACTION EFFECTS
For sunrise and sunset calculations, we assume 0.833° of atmospheric refraction. In the solar position calculator, atmospheric
refraction is modeled as follows (h is the solar elevation; the arcsecond formulas are divided by 3600 to give degrees):

Solar Elevation | Approximate Atmospheric Refraction Correction (°)
85° to 90° | 0
5° to 85° | (58.1/tan h - 0.07/tan³ h + 0.000086/tan⁵ h) / 3600
-0.575° to 5° | (1735 - 518.2 h + 103.4 h² - 12.79 h³ + 0.711 h⁴) / 3600
below -0.575° | (-20.774/tan h) / 3600

International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

326 www.ijergs.org

The effects of the atmosphere vary with atmospheric pressure, humidity and other variables, so the solar position calculations
presented here are approximate. Errors in sunrise and sunset times can be expected to increase the further you are from the
equator, because the sun rises and sets at a very shallow angle there and small variations in the atmosphere have a larger effect.
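As a sketch, the piecewise refraction model above translates directly into code; the formulas are the ones published with the NOAA solar calculator, and the function below is illustrative rather than part of this study's toolchain.

```python
import math

def refraction_correction(elev_deg):
    """Approximate atmospheric refraction correction (degrees) using the
    NOAA solar calculator's piecewise model; input is the true elevation."""
    h = elev_deg
    if h > 85:
        return 0.0
    if h > 5:
        t = math.tan(math.radians(h))
        return (58.1 / t - 0.07 / t**3 + 0.000086 / t**5) / 3600
    if h > -0.575:
        return (1735 + h * (-518.2 + h * (103.4
                + h * (-12.79 + 0.711 * h)))) / 3600
    return -20.774 / math.tan(math.radians(h)) / 3600

# Apparent elevation = true elevation + correction
print(refraction_correction(2.0))   # about 0.28 degrees near the horizon
print(refraction_correction(45.0))  # about 0.016 degrees at 45 degrees
```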
3. Area of study
3.1 Geography
The area of study is located in Jordan, which lies on the continent of Asia between latitudes 29° and 34° N and longitudes 35° and 40° E
(a small area lies west of 35°). It consists of an arid plateau in the east, irrigated by oases and seasonal water streams, with a highland
area in the west of arable land and Mediterranean evergreen forestry. The highest point in the country is at 1,854 m above sea level,
while the lowest is the Dead Sea at -420 m.
3.2 Topography
Jordan's main topographical feature is a dry plateau running from north to south, rising steeply
from the eastern shores of the Jordan River and the Dead Sea and reaching a height of between 610 and 915 meters. The highlands of Jordan
separate the Jordan Valley and its margins from the plains of the eastern desert. This plateau area includes most of Jordan's main cities and towns;
it extends the entire length of the western part of the country and hosts most of Jordan's main population centers.
3.3 Sun position accuracy determination of area of interest (AOI) extents.
Table 1 shows the differences in sun position at the extents of the AOI (10:00 AM, 1 Aug 2014; 15 km × 15 km), calculated with the
spreadsheet of the NOAA Solar Calculator [6].
(°) degrees | Latitude | Longitude | Sun Elevation | Sun Azimuth
Left lower corner | 31.9 | 35.9 | 50.9 | 100.8
Right top corner | 32.1 | 36.1 | 51.0 | 101.2
Changing | 0.2 | 0.4

Table 1: Sun position of AOI extents
As shown in Figure 2, the angular diameter of the Sun as seen from Earth is approximately 32 arcminutes (1920 arcseconds, or about 0.5
degrees) [8]. Since the differences in elevation and azimuth are less than 0.4° across the 15 km extent, as illustrated above, the average
of the corners of the AOI extent, latitude = 32.0 and longitude = 36.0, can be used for the following calculations related to sun position.


Figure 2: The angular diameter of Sun
4. Azimuth and zenith calculation of transportation.
To do further analysis on the transportation network layer and the sun position, it is recommended to add fields and field values to this
layer in the GIS model (the azimuth and zenith/elevation of the roads and of the sun position) and to carry out the calculation for each road record.
The main layers used in these calculations are the transportation network, derived with high accuracy from the digital cadastral maps of the DLS,
and the digital terrain model (DTM) derived from aerial photography, shown in Figure 3.

Figure 3: left DTM, right transportation network of AOI
Figure 4 shows the GIS presentation of the transportation network derived from the cadastral maps with high accuracy. The
DLS owns reliable, comprehensive and accurate digital information which serves the objectives of comprehensive national
development and is available to clients by easy, equitable and transparent means, to satisfy their needs.

Figure 4: Roads segments and cadastral boundaries.

4.1 Azimuth(direction):
By definition, the azimuth of a line is the direction given by the angle between the meridian and the line, measured in a clockwise
direction from either the north or south branch of the meridian [9] (Anderson & Mikhail, 1998).
The geometry of a straight line can be described using a direction and a distance; this is used by the functions for adding COGO
attributes to a feature class and updating COGO attributes [10], shown in Figure 5 and Table 2.






Figure 5 direction and distance



Table 2: results of COGO attributes.



Another way of calculating the azimuth is:

Azimuth = ATAN(ΔY / ΔX)   (11)

Where
ΔY = Y_B - Y_A   (12)
ΔX = X_B - X_A   (13)

by adding the fields Start (X, Y) and End (X, Y) of the road segments to the attribute tables and calculating the above formulas [11] [12].
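A small sketch of this calculation, assuming X is the easting and Y is the northing of the cadastral coordinates (swap the arguments if the convention differs); atan2 handles the quadrant cases of formula (11) automatically:

```python
import math

def segment_azimuth(xa, ya, xb, yb):
    """Grid azimuth of segment A->B, clockwise from north, in degrees.
    dx, dy follow formulas (12)-(13); atan2 resolves the quadrant."""
    dx = xb - xa  # easting difference (13)
    dy = yb - ya  # northing difference (12)
    return math.degrees(math.atan2(dx, dy)) % 360

print(segment_azimuth(0, 0, 100, 100))  # 45.0  (toward the north-east)
print(segment_azimuth(0, 0, -50, 0))    # 270.0 (due west)
```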
4.2 Zenith
2D features are converted to 3D features by deriving the height value from a surface. A geoprocessing tool allows obtaining
3D properties from a surface: Interpolate Shape, which interpolates z-values for a feature class based on elevation derived from a
raster, triangulated irregular network (TIN), or terrain dataset [13], as shown in Figure 6 and Table 3.








Figure 6: interpolation method.

Table 3: interpolation method tabular results.

The zenith formula is:

Zenith = ATAN(ΔZ / D)   (14)

Where
ΔZ = Z_B - Z_A   (15)
D = (ΔX² + ΔY²)^1/2   (16)

The fields Start (Z) and End (Z) of the road segments are added to the attribute tables of the transportation layer and their values are calculated
with the above formulas.
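Formulas (14)-(16) give the vertical (slope) angle of a segment; a minimal illustrative sketch, with hypothetical coordinates in meters:

```python
import math

def segment_slope_angle(xa, ya, za, xb, yb, zb):
    """Vertical angle of a road segment from formulas (14)-(16):
    Zenith = atan(dZ / D), with D the horizontal length of the segment."""
    dz = zb - za                      # (15)
    d = math.hypot(xb - xa, yb - ya)  # (16)
    return math.degrees(math.atan2(dz, d))

# A segment climbing 8 m over 100 m of horizontal distance
print(segment_slope_angle(0, 0, 700, 100, 0, 708))  # about 4.6 degrees
```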
5. Data analysis
5.1 Sun position analysis

The purpose of this part is to determine how the sun position angles vary with season and time of day. The NOAA spreadsheet [6] was used to
analyze the elevations and azimuths of the sun for the AOI (32°, 36°) in the year 2014.
The results of this analysis are shown in Table 4: the sun position (elevation, azimuth) changes with date and time. The lowest sun
elevations are, obviously, in the morning and evening of each day, and they decrease in the winter season; the sun elevation is lower in
winter than in any other season. The azimuth clearly has minimum values in the morning and maximum values in the evening in summer,
while in winter it has maximum values in the morning and minimum values in the evening. A negative sign (red in the original table)
indicates that the sun is below the horizon plane.

The changing sun position means that the roads affected by sun glare also vary with season and time of day; the roads affected next
month will not be the same as those affected today, so the affected roads can be predicted and presented in advance.

Table 4: sun position analysis results
The results of this analysis are also summarized in Table 5, a statistical summary of Table 4 showing the intervals of sun elevation
angles and the corresponding azimuth intervals for the months, around whole-hour clock times.







Table 5. The changing of sun position in different times of a year

Date | Sun elevation at clock (hrs): 6, 7, 8, 17, 18, 19 | Sun azimuth at clock (hrs): 6, 7, 8, 17, 18, 19
01/01/2014 -19.90 -7.83 3.75 7.12 -4.08 -16.01 105.30 112.26 120.02 237.29 245.42 252.59
15/01/2014 -20.25 -8.03 3.77 9.35 -1.81 -13.89 102.87 109.94 117.71 237.91 246.27 253.65
01/02/2014 -19.25 -6.79 5.28 12.67 1.45 -11.02 98.61 105.95 113.86 240.45 249.09 256.76
15/02/2014 -17.22 -4.58 7.68 15.56 3.89 -8.63 94.49 102.10 110.16 243.91 252.70 260.57
01/03/2014 -14.26 -1.39 10.92 18.34 6.29 -6.35 90.16 98.00 106.19 248.40 257.21 265.20
15/03/2014 -10.68 2.30 14.68 20.88 8.52 -4.18 85.87 93.84 102.07 253.60 262.29 270.32
01/04/2014 -5.99 6.75 19.37 23.59 10.99 -1.59 80.87 88.83 96.95 260.32 268.70 276.65
15/04/2014 -2.26 10.25 22.91 25.55 12.88 0.67 76.99 84.82 92.71 265.74 273.76 281.56
01/05/2014 1.46 13.55 26.18 27.55 14.92 2.72 72.92 80.52 88.03 271.27 278.83 286.40
15/05/2014 3.54 15.58 28.11 29.14 16.61 4.51 69.83 77.20 84.36 275.16 282.33 289.68
01/06/2014 4.94 16.82 29.21 30.83 18.41 6.44 66.95 74.11 80.92 278.26 285.07 292.16
15/06/2014 5.12 16.89 29.20 31.90 19.51 7.59 65.55 72.65 79.32 279.27 285.91 292.88
01/07/2014 4.39 16.12 28.41 32.51 20.09 8.12 65.28 72.44 79.14 278.64 285.31 292.26
15/07/2014 3.16 14.93 27.29 32.28 19.78 7.69 66.32 73.62 80.51 276.67 283.55 290.61
01/08/2014 1.29 13.14 25.63 30.80 18.19 5.93 69.18 76.71 83.95 272.90 280.21 287.53
15/08/2014 -0.54 11.53 24.14 28.53 15.86 3.51 72.72 80.44 88.02 269.10 276.80 284.38
01/09/2014 -3.05 9.53 22.18 24.67 11.99 -0.28 78.13 86.03 94.03 264.14 272.24 280.08
15/09/2014 -4.93 7.81 20.39 20.85 8.26 -4.49 83.20 91.18 99.48 260.02 268.32 276.28
01/10/2014 -7.08 5.71 18.06 16.24 3.87 -9.01 89.23 97.22 105.75 255.41 263.78 271.73
15/10/2014 -9.04 3.70 15.73 12.39 0.48 -12.62 94.34 102.24 110.85 251.49 259.81 267.62
01/11/2014 -11.60 1.11 12.59 8.52 -3.52 -16.07 99.80 107.51 116.05 246.97 255.11 262.65
15/11/2014 -13.83 -1.51 9.91 6.35 -5.48 -17.82 103.29 110.77 119.15 243.55 251.55 258.85
01/12/2014 -16.35 -4.31 7.06 5.23 -6.34 -18.45 105.80 113.01 121.12 240.28 248.19 255.28
15/12/2014 -18.29 -6.31 5.09 5.45 -5.90 -17.86 106.48 113.50 121.41 238.27 246.21 253.26
01/01/2015 -19.88 -7.82 3.76 7.09 -4.11 -16.04 105.33 112.29 120.06 237.29 245.42 252.58

Clock (hrs) | 6 | 7 | 8 | 17 | 18 | 19
Elevation angles | 1-5 | 2-17 | 4-30 | 7-33 | 0-20 | 1-8
Azimuth angles | 65-73 | 72-107 | 79-102 | 238-270 | 257-285 | 281-292
Months | May-Aug | March-Nov | all year | all year | Feb-Nov | Apr-Sep
The results of this analysis are shown in Table 6, a statistical summary of Table 4 restricted to sun elevations of less than
10 degrees, which may affect the transportation network.


Clock (hrs)
6 7 8 17 18 19
Elevations angles Less than 10
Azimuths angles 65-120 237-292
Months Jan-Apr, Sep-Dec

Table 6: The changing of sun position for elevations less than 10° during a year

5.2 Transportation layer analysis

Table 7 shows the result of the ArcGIS spatial analysis of the transportation layer: many road segments fall within the
ranges of sun elevation and azimuth analyzed in the previous section on sun position.

Clock (hrs)
6 7 8 17 18 19
Elevations angles 1-5 2-17 4-30 7-33 0-20 1-8
Azimuths angles 180(65-73) 180(72-107) 180(79-102) 180(238-270) 180(257-285) 180(281-292)
months May-Aug March-Nov All year All year Feb-Nov Apr-Sep
roads segments 918 2953 1726 991 2559 1549


Table 7: Road segment counts affected by sun glare at different times

Table 8 shows the result of the ArcGIS spatial analysis of the transportation layer for sun and road elevations of less than 10°,
with the same ranges of time, azimuth and months.



Clock (hrs): 6, 7, 8, 17, 18, 19
Elevation angles: less than 10
Azimuth angles: 65-120 and 237-292
Months: Jan-Apr, Sep-Dec
Road segments: 4370

Table 8: Road segment counts for sun elevations less than 10°

6. Final results
6.1 Determining the angle of view
The front windshield of the vehicle is the most important factor in determining the angle of view of the driver's eyes where the driver
faces sun glare.
Figure 7 shows the vertical angle of view, approximately equal to 12.5°, calculated by the sine rule; it equals half of the horizontal
angle of view, which is measured from the driver's straight sightline parallel to the road segment to the left side of the front windshield,
as shown in Figure 8.




Figure 7 angle of vertical view near
the front windshields.




Figure 8 the horizontal angle of view

6.2 Calculating and Symbolizing the Roads Affected by Sun Glare

In this study the precision of the vertical angle of view is taken as ±12.5°, and the range of the horizontal angle of view as two times
that (25.0°); these values are used in the calculation of the roads affected by sun glare. A road segment is considered affected only when:

Road elevation equals the sun elevation ±12.5°, and
Road azimuth equals the sun azimuth ±12.5°
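This per-segment test is easy to express in code. The sketch below is illustrative (the study performs it inside ArcGIS); note the wrap-around handling when comparing azimuths near 0°/360°:

```python
def glare_affected(road_azimuth, road_elevation,
                   sun_azimuth, sun_elevation, half_angle=12.5):
    """True when both the azimuth and the vertical angle of the segment
    lie within +/-12.5 degrees of the sun position (the study's condition)."""
    az_diff = abs((road_azimuth - sun_azimuth + 180) % 360 - 180)
    el_diff = abs(road_elevation - sun_elevation)
    return az_diff <= half_angle and el_diff <= half_angle

# Driving almost straight into a low morning sun
print(glare_affected(105, 3, sun_azimuth=101, sun_elevation=5))  # True
print(glare_affected(290, 2, sun_azimuth=101, sun_elevation=5))  # False
```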
The sun position is calculated in the attribute table of the transportation layer using ESRI's Model Builder [14], an application with which
a model, i.e. a geospatial data processing workflow, can be created to streamline or automate geospatial processing tasks. Model
Builder uses a graphical user interface (GUI) that looks like a workflow diagram with which many people are
familiar (Figure 9). One advantage of Model Builder is that computer programming is not required to create a fully functional model,
which opens up the application to more users [15].
















Figure 9: Interface of the sun position application in Model Builder


Figure 10 shows the procedure of the sun position application built in ESRI Model Builder: the calculation of the sun elevation and sun
azimuth at the moment the model is executed.





















Figure 10 model builder procedures
Table 9 shows the actual sun position values for all road segment records as calculated by Model Builder, the values identifying the
roads affected by sun glare, and the azimuth and zenith (elevation) values of the road segments.


Table 9: Final resulting attribute values of the sun position
Figure 11 shows the 2D GIS presentation of the transportation network with the roads affected by sun glare at the specific time
of the Model Builder execution, symbolized accordingly.



Figure 11: 2D road map of the roads affected by sun glare


Figure 12 shows the 3D GIS presentation of the transportation network for a sample best route to a destination using a navigation
application, with the roads affected by sun glare marked with a special symbol.











Figure 12: 3D view of the roads affected by sun glare
7. Conclusions
The analyses show the roads affected by sun glare, and a special symbol is added in real time to the digital maps of the transportation
network or to navigation map systems. To reach this result, each transportation layer must be prepared in a 3D model, i.e. it must have
elevation and azimuth attributes and values for all features of the GIS data.
This study was applied to a large area (15 km × 15 km), but it may also be used for a single specific and important road, to analyze the
effect of sun glare on each segment of the road at different times and dates, in advance.
The sun position changes with the seasons; in hilly areas the transportation network is affected most when the sun elevation is lowest,
in winter, so in the northern part of the Earth more roads are affected in winter than on summer days.
The uses of this study include showing messages on intelligent traffic signs indicating that the road ahead is affected by sun glare,
controlling the speed on affected roads, reducing eye harm, reducing car accidents, letting drivers determine the time of travel, and
letting drivers change the route of travel.

REFERENCES:
[1] Hagita, K., Mori, K. (2013) The Effect of Sun Glare on Traffic Accidents in Chiba Prefecture, Japan. Proceedings of the Eastern Asia Society for Transportation Studies, Vol. 9, 2013.
[2] Hagita, K., Mori, K. (2011) Analysis of the Influence of Sun Glare on Traffic Accidents in Japan. Journal of the Eastern Asia Society for Transportation Studies, Vol. 9, 1775-1785.
[3] Meeus, Jean (1991). "Chapter 12, Transformation of Coordinates". Astronomical Algorithms. Willmann-Bell, Inc., Richmond, VA. ISBN 0-943396-35-2.
[4] Jenkins, A. (2013). "The Sun's position in the sky". European Journal of Physics 34.
[5] Murakami, T. (2010): Methods for Calculating Solar Zenith Angle and Solar Azimuthal Angle, http://www.agr.niigata-u.ac.jp/~muratac/ (in Japanese).
[6] U.S. Department of Commerce, National Oceanic and Atmospheric Administration, Earth System Research Laboratory, Global Monitoring Division, http://www.esrl.noaa.gov/gmd/grad/solcalc/calcdetails.html
[7] Jean Meeus (1991): Astronomical Algorithms, ISBN-13: 978-0943396354.
[8] Michael A. Seeds; Dana E. Backman (2010). Stars and Galaxies (7 ed.). Brooks Cole. p. 39. ISBN 978-0-538-73317-5.
[9] Anderson & Mikhail (1998).
[10] Esri ArcGIS Resources Center, Editing Data, COGO.
[11] Matthew Oliver Ralp L. Dimal and Louie P. Balicanta, Philippines. Comparative Analysis of GPS Azimuth and Derived Azimuth for the Establishment of Project Controls.
[12] 7th FIG Regional Conference, Spatial Data Serving People: Land Governance and the Environment - Building the Capacity, Hanoi, Vietnam, 19-22 October 2009.
[13] Esri ArcGIS Resources Center, Functional Surface toolset, Interpolate Shape.
[14] Esri ArcGIS Resources Center, Desktop Geoprocessing, ModelBuilder.
[15] Susan Hunter Norman (2011), Development and Assessment of Eight Advanced GIS Laboratory Exercises using ArcGIS ModelBuilder.

Data Collector by Using Wireless Technologies IEEE 802.15.4 and Bluetooth
KSV Pavan Kumar¹, R. Srinivasa Rao²
¹Scholar, KITS, Khammam, India
²Associate Professor, KITS, Khammam, India
E-Mail- pavan101088@gmail.com
Abstract: In this paper we present a system based on Bluetooth and ZigBee for hospital environment applications. The system
provides an easy way to monitor a patient's health condition. We have used advanced technologies to design the system, namely
ZigBee, Bluetooth and the ARM9 processor. The ARM9 processor requires significantly fewer transistors than other microcontrollers,
which reduces the cost of, and the heat produced by, the circuit. We have used a pulse sensor, a temperature sensor and the ECG signal
for health monitoring of the patient. The result is a system that uses wireless technologies to monitor an individual's health.
Key Words: ZigBee, Bluetooth, Pulse sensor, ECG signal, Temperature sensor, ARM9, Microcontroller
INTRODUCTION
Mobile communication devices can be essential in the healthcare environment for good patient management. However, the
electromagnetic interference produced by such equipment may have the potential to affect medical devices. These mobile
communication devices are designed to achieve multiple purposes but are mostly focused on voice and short messaging services [1]
[2]. Wireless technology has the benefit of improving data mobility, using different protocols such as WiFi, Bluetooth and ZigBee. In the
medical field, many studies have introduced body sensor networks for health care applications [3] [4].
The main disadvantage in hospital environments is the time it takes for doctors to check each patient individually. Our proposed
design offers a solution to this problem: using various sensors and modern wireless technologies, we have designed a system for
patient monitoring. Comparing all the wireless technologies, ZigBee is a low data rate, low power consumption,
low cost wireless networking protocol targeted towards automation and remote control applications.

Table 1 Comparison of wireless network standards

Table 1 shows the advantages of ZigBee over other wireless networks [5].
In this paper we introduce a new technique to monitor the patient's condition, based on the ARM9 microcontroller. The paper is
organized as follows: first it discusses the proposed design and the techniques involved; next it describes the ARM9 microcontroller;
Section III illustrates the ZigBee module; the next section discusses the results; and Section V concludes the paper.

PROPOSED DESIGN
The proposed data collector system contains three sections. The first section contains all the body sensor networks, whose values are
given to the microcontroller. The microcontroller sends the data, using the Bluetooth module, to the second section, the display
section. The display section displays the values and sends the data to the central server, using the ZigBee wireless
modules, to store the data. In the third section, the central server receives the data and stores the values.
A. Data collection section
This section contains all the sensors from which we collect the data; Figure 2 shows the data collector section. In our
proposed design we collect body temperature, pulse calculation and ECG signal values. These three sensor values are given
to the microcontroller, which reads them and sends the data to the display section.
Body sensor networks improve the patient monitoring system with the help of modern technology. This can be done with various
wearable sensors equipped with wireless capabilities [3] [4]. Figure 1 shows the various body sensor networks used: Figure 1(a)
shows the temperature sensor, Figure 1(b) the pulse sensor, and Figure 1(c) the ECG signal transmitted to the microcontroller.
Figure 2 shows the block diagram for section 1, which collects the whole data from the sensors.

(a)

(b)


(c)
Figure 1 Various body sensor networks

Figure 2 Block diagram for section 1
B. Display section
The display section receives the various values from section 1, transmitted using the Bluetooth module. Figure 3
represents the block diagram for the display section. It contains a Bluetooth module to receive the signals and ZigBee modules for
transmitting the values.


Figure 3 Block diagram for the display section
C. Central server section
The central server section receives the values and stores them for the particular patient. Figure 4 shows the block diagram for the
central server section.
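As an illustration of the central server's role, the sketch below parses readings of an assumed text format 'patient_id,temp_C,pulse_bpm' (the paper does not specify the packet format) and stores them with a timestamp; it is a Python stand-in for the described embedded implementation.

```python
import sqlite3
import time

def store_reading(db, line):
    """Parse one reading of the assumed form 'patient_id,temp_C,pulse_bpm'
    (as forwarded by the display section) and store it with a timestamp."""
    patient_id, temp, pulse = line.strip().split(",")
    db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
               (patient_id, float(temp), int(pulse), time.time()))
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (patient_id TEXT, temp_c REAL, "
           "pulse_bpm INTEGER, ts REAL)")
store_reading(db, "P-102,37.2,78\n")
print(db.execute("SELECT * FROM readings").fetchall())
```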


Figure 4 Block diagram for the Central server section.
ARM9 MICRO CONTROLLER

The ARM9 microcontroller has gained interest in the processor field due to its high performance and low power consumption. ARM is
one of the most licensed, and thus most widespread, processor cores in the world, used especially in portable devices due to its low
power consumption and reasonable performance [6]. Figure 5 shows the hardware architecture of the ARM9 microcontroller.

Figure 5 ARM9 hardware structure

ZIGBEE and IEEE802.15.4

ZigBee technology is a low data rate, low power consumption, low cost, wireless networking protocol targeted towards automation
and remote control applications. IEEE 802.15.4 committee started working on a low data rate standard a short while later. Then the
ZigBee Alliance and the IEEE decided to join forces and ZigBee is the commercial name for this technology. ZigBee is expected to
provide low cost and low power connectivity for equipment that needs battery life as long as several months to several years but does
not require data transfer rates as high as those enabled by Bluetooth. In addition, ZigBee can be implemented in mesh networks larger
than is possible with Bluetooth. ZigBee compliant wireless devices are expected to transmit 10-100 meters, depending on the RF
environment and the power output consumption required for a given application, and will operate in the RF worldwide (2.4GHz
global, 915MHz Americas or 868 MHz Europe). The data rate is 250kbps at 2.4GHz, 40kbps at 915MHz and 20kbps at 868MHz.
IEEE and the ZigBee Alliance have been working closely to specify the entire protocol stack. IEEE 802.15.4 focuses on the specification
of the lower two layers of the protocol (physical and data link layer).
On the other hand, the ZigBee Alliance aims to provide the upper layers of the protocol stack (from the network to the application layer) for
interoperable data networking, security services and a range of wireless home and building control solutions, provide interoperability
compliance testing, marketing of the standard, advanced engineering for the evolution of the standard. This will assure consumers to
buy products from different manufacturers with confidence that the products will work together. IEEE 802.15.4 is now detailing the
specification of PHY and MAC by offering building blocks for different types of networking known as star, mesh, and cluster tree.
Network routing schemes are designed to ensure power conservation, and low latency through guaranteed timeslots. A unique feature
of ZigBee network layer is communication redundancy eliminating single point of failure in mesh networks. Key features of PHY
include energy and link quality detection, clear channel assessment for improved coexistence with other wireless networks [7].


Figure 6 ZigBee module

RESULTS
The design is implemented in embedded C and executed with Keil µVision; Figure 7 shows the Keil C51 compiler. The designed
system is very useful for hospital environments. The hardware implementation uses the advanced ARM9 microcontroller, Bluetooth
modules in the first section and ZigBee modules in the second section.

Figure 7 Keil C51 compiler
CONCLUSION
We have designed a system useful in hospitals using Bluetooth and ZigBee modules. In this proposed system we have used a pulse
sensor, a temperature sensor and ECG signals. The system is an advancement over ordinary wireless networks and also stores the
data for further usage. The system could also be implemented with WiFi in the future.

REFERENCES:
1. Guidance on the use of Mobile Communication Devices in Healthcare Premises, National Services Scotland, 2008.
2. Won-Jae Yi, Weidi Jia, and Jafar Saniie, Mobile Sensor Data Collector using Android Smartphone, IEEE, 2012.
3. M-H Cheng, L-C Chen, Y-C Hung, C-N Chen, C. M. Yang, and T. L. Yang, A vital wearing system with wireless capability, Pervasive Health 2008, Second International Conference on, pp. 268-271, 2008.
4. H. Ghasemzadeh, V. Loseu, and R. Jafari, Structural action recognition in body sensor networks: Distributed classification based on string matching, Information Technology in Biomedicine, IEEE Transactions on, Vol. 14, No. 2, pp. 425-435, March 2010.
5. Dynamic C, An Introduction to ZigBee, Digi International Inc.
6. Sinem Coleri Ergen, ZigBee/802.15.4 Summary, 2004.
7. Steer Jobs, Modulation, Transmitters and Receivers, a text book.

CDMA Power Control Using Optimization Techniques- A Review
Arti Singh¹, Kuldeep Bhardwaj²
¹Research Scholar (M.Tech), ECE Dept, OITM, Hisar, India
²PG Coordinator, ECE Dept, OITM, Hisar, India
E-Mail- artishehrawat01@gmail.com
Abstract In this paper, power control techniques for code division multiple access (CDMA) systems are presented. For a CDMA
system, power control is the most important system requirement: to function effectively, the power must be controlled. If power
control is not implemented, problems such as the near-far effect start to dominate and consequently lower the capacity
of the CDMA system. When power control is applied in CDMA systems, however, it allows multiple users to share the resources of
the system equally among themselves, leading to increased capacity. With appropriate power control, the capacity of a CDMA system is
high in comparison with frequency division multiple access (FDMA) and time division multiple access (TDMA). For power control in
CDMA systems, optimization algorithms such as the genetic algorithm and the particle swarm algorithm can be used to determine a suitable
power vector [1], [2]. These power vectors, or power levels, are determined at the base station and communicated to the mobile units, which
adjust their transmitting power in accordance with these levels. A detailed discussion of CDMA system power control is given here.
Keywords CDMA, TDMA, FDMA, GAME, PSO, TPC, QOS etc.
1. INTRODUCTION
In a CDMA system, users access the complete available bandwidth [3]. In Frequency Division Multiple Access (FDMA) strategies, the
focus is on the frequency dimension: the total bandwidth (B) is divided into N narrowband frequency bands, and several users are
allowed to communicate simultaneously by assigning the narrowband frequency bands to the different users, where each narrowband
frequency is assigned to a designated user at all times. Since the total bandwidth (B) is subdivided into N frequency bands or
channels, only N users can be supported simultaneously. In TDMA, all users use the whole bandwidth but in different time slots.
Unlike FDMA/TDMA, the users in CDMA are isolated by codes rather than frequency slots or time slots. Each user is identified via
orthogonal codes: sixty-four Walsh functions are used to identify forward link channels, and 64 long PN codes are used for
identification of reverse link channel users (see the code sketch at the end of this section). Because of this, frequency reuse in a CDMA
system is very high, which enhances spectral efficiency. There is no hard limit on the number of users in a CDMA system; each time a
user is added, the noise level for the other mobile units increases, so a CDMA system has a soft capacity that is greater than that of any
other multiple access scheme. In reality it is hard to maintain the orthogonal nature of the codes, and this, together with multipath
propagation and synchronization problems, results in interference. In FDMA and TDMA access schemes the number of available
frequencies and time slots limits the number of users: when the number of users exceeds the available frequencies and time slots,
blocking occurs. In CDMA, blocking occurs when the interference tolerance limit is exceeded; therefore in CDMA the level of
interference is the limiting factor.
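To illustrate the orthogonal codes mentioned above, the sketch below generates Walsh codes via the standard Sylvester-Hadamard construction (an illustration, not taken from the paper); rows of the matrix are mutually orthogonal spreading sequences.

```python
def walsh_matrix(n):
    """Build an n x n Walsh-Hadamard matrix (n a power of two) by the
    Sylvester construction; each row is one spreading code over {+1, -1}."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h]
             + [row + [-x for x in row] for row in h])
    return h

w = walsh_matrix(64)
# Orthogonality: the dot product of two distinct rows is zero
dot = sum(a * b for a, b in zip(w[3], w[7]))
print(len(w), dot)  # 64 0
```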
2. Power control
With appropriate power control, CDMA offers high capacity in comparison with FDMA and TDMA. Since in CDMA systems there is
no need to divide time or frequency slots among users, the central mechanism for resource allocation and interference
management is power control [5], which makes it a significant design problem in CDMA systems. Each user changes its access to
the resources by adapting its transmitting power to the changing channel and interference conditions. Power control, also
known as Transmit Power Control (TPC), encompasses the techniques and algorithms used to manage the transmitted power of base
stations and mobiles. Power control helps in reducing co-channel interference, increasing the cell capacity by decreasing interference,
and prolonging battery life by using a minimum transmitter power. In CDMA systems, power control ensures the distribution of
resources among users. If power control is not implemented, all mobiles transmit with the same power without taking into
consideration the fading and the distance from the base station, so mobiles close to the base station cause a high level of interference
to the mobiles that are far away from the base station. This problem is known as the near-far effect [4]. The near-far problem is shown
in Fig. 1:



Fig.1 Near-Far problem when power control is not used [1].

In the reverse link (mobile to base station) it is necessary to use power control to solve the near-far problem; it can be avoided
using a power control mechanism, as shown in Fig. 2.



Fig.2 Power control overcomes near-far problem [1].
2.1 Main objectives of power control:
1. To minimize the transmitting power of the mobile units in order to increase battery lifetime.

2. To ensure that a certain quality of service (QoS) parameter is satisfied. This is done by making the value of $E_b/N_0$ of the received signals from all mobile units exceed a certain threshold $(E_b/N_0)^*$. If the received $E_b/N_0$ from a certain unit is lower than this threshold, that unit is out of service. Here $E_b$ is the energy per bit and $N_0$ is the noise power spectral density.



3. To minimize the near-far effect. This is done by trying to make the received signal levels from the different mobile units very close to each other.
The objective of power control is to limit the transmitting power on both the forward and reverse links. Due to non-coherent detection at the base station, reverse link power control is more important than forward link power control. Reverse link power control is essential for a CDMA system and it is enforced by the IS-95 standard.

2.2 Reverse Link Power Control
The reverse link power control mechanism controls the power on the access and reverse traffic channels and is used for establishing a link while originating a call. It includes open-loop power control (also called autonomous power control) and closed-loop power control; the closed loop in turn includes inner-loop and outer-loop power control.
2.2.1 Reverse Link Open Loop Power Control
In this method the base station is not involved. The method is based on the principle that a mobile station close to the base station needs to transmit less power than a mobile which is far away from the base station or in a deep fading condition. The mobile adjusts its power based on the power received in the 1.23 MHz band, i.e. the power in the pilot, paging, sync and traffic channels. The key rule is that a mobile transmits in inverse proportion to what it receives: if the received power is low the mobile transmits at high power, and on receiving high power it transmits at a low power value.


Fig. 3. Reverse open loop power control [13]

2.2.2 Reverse Link Closed Loop Power Control
In this method the base station is involved: it sends power control bits to the mobile station for power adjustment. A power control sub-channel transmits continuously on the forward traffic channel at 800 power control bits per second, so a power control bit (0 or 1) is transmitted every 1.25 ms. A 0 indicates that the mobile should increase its mean output power level, while a 1 indicates that it should decrease it. The response time of this method is very low (1.25 ms) compared to open loop power control (30 ms), which lets it counter deep fading conditions.
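As a rough illustration of this mechanism, the following Python sketch (not from the cited papers) simulates one second of 800 Hz closed-loop control; the 1 dB step size, the toy channel model and the link budget constants are assumptions made here for illustration only.

    import random

    TARGET_EB_NO_DB = 5.0    # assumed QoS target (the 5 dB value used later in the text)
    STEP_DB = 1.0            # assumed power step per control bit; not specified in the text

    tx_power_dbm = 0.0
    for slot in range(800):                                # one control bit every 1.25 ms
        loss_db = 100.0 + random.gauss(0.0, 2.0)           # toy fading channel (assumption)
        rx_eb_no_db = tx_power_dbm - loss_db + 110.0       # toy link budget (assumption)
        # Base station compares the measured Eb/No with the target and sends one bit:
        # 0 -> mobile increases its mean output power, 1 -> mobile decreases it.
        bit = 0 if rx_eb_no_db < TARGET_EB_NO_DB else 1
        tx_power_dbm += STEP_DB if bit == 0 else -STEP_DB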
The reverse link closed loop power control has two parts: inner-loop power control and outer-loop power control. The inner loop keeps the mobile as close as possible to its target $E_b/I_t$, while the outer loop adjusts the base station's target $E_b/I_t$ for a given mobile. Here $E_b$ is the energy per bit and $I_t$ is the total interference.





Fig. 4. Reverse closed loop power control [13]

3. Power control using optimization algorithms
Most practical power control algorithms available today require a high number of iterations to reach zero outage probability. The main aim of this paper is to present the idea of using a genetic algorithm and a particle swarm algorithm for power control. A power control algorithm based on a genetic algorithm was proposed by M. Moustafa et al. (Oct. 2000) [1], and one based on a particle swarm algorithm was proposed by Hassan M. Elkamchouchi et al. in the 24th National Radio Science Conference (NRSC 2007) [2].
3.1 Genetic Algorithm for Mobiles Equilibrium (GAME)
In a CDMA network, resource allocation is critical in order to provide a suitable QoS for each user and achieve channel efficiency. Many QoS measures, including the bit error rate, depend on the $(E_b/N_0)_i$ given by [1]:

$(E_b/N_0)_i = \frac{W}{R_i} \cdot \frac{g_{ib}\, p_i}{\sum_{j \ne i} g_{jb}\, p_j + \eta}$

where W is the total spread spectrum bandwidth occupied by the CDMA signals, $g_{ib}$ denotes the link gain between the base station b and mobile user i, $\eta$ denotes the thermal noise contained in W, and M is the number of mobile users. $p_i$ is the power transmitted by the ith mobile, limited by a maximum power level: $0 \le p_i \le p_{max}$ for $1 \le i \le M$. $R_i$ is the information bit rate transmitted by the ith mobile user; GAME uses a fixed value for $R_i$ (transmission rate of the ith user) = 15.5 kbps.
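A minimal sketch of evaluating the per-user $(E_b/N_0)_i$ above for a candidate power vector; the numerical values below are illustrative placeholders, with W = 1.23 MHz and R = 15.5 kbps taken from the text.

    import numpy as np

    def eb_no(p, G, W=1.23e6, R=15.5e3, noise=1e-15):
        # p[i]: transmit power of user i; G[i]: link gain of user i to the base station
        received = G * p                          # received power of each user
        interference = received.sum() - received  # sum over all other users j != i
        return (W / R) * received / (interference + noise)

    p = np.array([0.5, 0.8, 0.3])                 # example powers in watts (placeholders)
    G = np.array([1e-10, 5e-11, 2e-10])           # example link gains (placeholders)
    print(10 * np.log10(eb_no(p, G)))             # per-user Eb/No in dB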


When any user increases its transmission power, its own $(E_b/N_0)$ value increases, but the interference to the other CDMA users also increases, i.e. the $(E_b/N_0)$ of the other mobile users decreases. So power control means directly controlling the QoS, which is specified as a pre-specified target $(E_b/N_0)^*$. It can also be stated in terms of the probability that $(E_b/N_0)_i$ falls below $(E_b/N_0)^*$ (the outage probability). Thus, the objective here is to find a non-negative power vector $P = [p_1, p_2, \ldots, p_M]$ which maximizes a function F, whose first part is

$f_1 = \frac{1}{M} \sum_{i=1}^{M} T_i$

where $T_i$ is a threshold function defined for the ith user depending on its $(E_b/N_0)_i$ value:

$T_i = \begin{cases} 1, & (E_b/N_0)_i \ge (E_b/N_0)^* \\ 0, & \text{otherwise} \end{cases}$



Here $f_1$ maximizes the average QoS. Minimization of mobile power is also an essential objective, since low transmitting power means long battery life and less interference to other users. $f_2$ is the part which rewards minimum power and punishes mobiles transmitting at high power:

$f_2 = 1 - \frac{1}{M} \sum_{i=1}^{M} \frac{p_i}{p_{max}}$

The author also has to ensure that the received powers from all mobiles lie in a narrow range so that the near-far problem is reduced. The received power $r_i$ is the product of the link gain $g_{ib}$ and the transmitted power $p_i$. The term $f_3$, proposed in the fitness function, penalizes solutions whose received power components diverge far from their mean value:

$f_3 = \begin{cases} 1 - 5\,\sigma_r/\bar{r}, & \sigma_r \le 0.2\,\bar{r} \\ 0, & \text{otherwise} \end{cases}$

Here $\bar{r}$ is the average received power level and $\sigma_r$ is the standard deviation. $\alpha$ and $\beta$ are non-negative weights indicating the relative importance of one objective over another, the overall fitness being $F = f_1 + \alpha f_2 + \beta f_3$. For example, if the objective is to minimize the transmitting power, then $\alpha$ would be given the highest value.


GAME is a steady-state GA which stops evolution after a timeout period [1]. The inputs are the current power levels of the different users; additional information such as the target $(E_b/N_0)^*$, the maximum power level $p_{max}$ and the link gains G is also required. In the GAME method an initial population of chromosomes is formed by encoding the power levels of the mobiles. A chromosome is a string of N bits encoding the power levels of the M mobile users; if each mobile's power is encoded using q bits, then N = qM. The fitness function is used to evaluate these chromosomes, and the cycle of evolution and reproduction runs until a stopping criterion is met. The base station then transmits the new power vector to the users. In the meantime, the new solution is used to initialize the input vectors at the next control period.
The assumption for the GAME method was that the base station is situated at the center of the cell, considering only a single cell with a radius of one unit distance and users distributed uniformly over the cell area. The loss model used is a distance loss model: the link gain is $g_{ij} = A_{ij} L_{ij}$, where $A_{ij}$ is the variation in the received signal due to shadow fading, assumed to be independent and log-normally distributed with a mean of 0 dB and a standard deviation of 8 dB, and $L_{ij}$ is the large scale propagation loss. If $d_{ij}$ is the distance between transmitter j and receiver i, then $L_{ij}$ in decibels is assumed to be

$10 \log L_{ij} = \begin{cases} -127.0 - 25 \log d_{ij}, & d_{ij} < 1 \\ -127.0 - 35 \log d_{ij}, & 1 \le d_{ij} < 3 \\ -135.5 - 80 \log d_{ij}, & d_{ij} \ge 3 \end{cases}$

This model uses three different path loss slopes, and for the interception at one unit distance it assumes -127.0 dB. The required QoS target $(E_b/N_0)^*$ is 5 dB and $p_{max}$ is 1 watt. The transmission rate is fixed at 15.5 kbps and the thermal noise density is -174 dBm/Hz.
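A sketch of this simulation model, combining the three-slope path loss with the 8 dB log-normal shadow fading; note that the breakpoint reading of the piecewise formula above is itself a reconstruction.

    import numpy as np

    def path_loss_db(d):
        # Three-slope large-scale loss with a -127.0 dB interception at unit distance
        if d < 1.0:
            return -127.0 - 25.0 * np.log10(d)
        if d < 3.0:
            return -127.0 - 35.0 * np.log10(d)
        return -135.5 - 80.0 * np.log10(d)

    def link_gain(d, rng=np.random.default_rng()):
        shadow_db = rng.normal(0.0, 8.0)   # log-normal shadowing: 0 dB mean, 8 dB std
        return 10.0 ** ((path_loss_db(d) + shadow_db) / 10.0)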

4. Particle Swarm Optimization (PSO) Algorithm
Particle Swarm Optimization (PSO) is a method for global optimization [5] and it is different from other well-known Evolutionary
Algorithms. As in Evolutionary Algorithms, a population of potential solutions is used to probe the search space, but no operators are
applied on the population to generate new solutions. In PSO, each individual particle, of the population, called swarm, adjusts its
trajectory toward its own previous best position, and toward the previous best position attained by any member of its topological
neighborhood. In the global variant of PSO, the whole swarm is considered as the neighborhood. Thus, global sharing of information
takes place and the particles profit from the discoveries and previous experience of all other companions during the search for
promising regions of the landscape. For example, in the single-objective minimization case, such regions possess lower function
values than others, visited previously. In the local variant of PSO, the neighborhood of each particle in the swarm is restricted to a
certain number of other particles but the movement rules for each particle are the same in the two variants. The basic algorithm of
PSO has two different versions. The first version is the one where the particles are represented by binary strings and the other is the
one where the particles are represented by real numbers in n dimensional space where n is the dimension of the optimization problem
under consideration. First, the author describes the basic algorithm of PSO in real number problems. The Pseudo-code for this
algorithm is:
For i = 1 to number of individuals
   If G(x_i) <= G(p_i) then do          // G() evaluates fitness (minimization)
      For d = 1 to dimensions
         p_id = x_id                    // p_i is best so far
      Next d
   End do
   g = i                                // arbitrary
   For j = indices of neighbors
      If G(p_j) <= G(p_g) then g = j    // g is index of best performer
   Next j
   For d = 1 to dimensions
      v_id(t) = v_id(t-1) + phi_1 (p_id - x_id(t-1)) + phi_2 (p_gd - x_id(t-1))
      v_id in (-V_max, +V_max)
      x_id(t) = x_id(t-1) + v_id(t)
   Next d
Next i
Here $x_i$ is the position of particle i, $v_i$ is the velocity of particle i, $\varphi_1$ is a random number that gives the size of the step towards the personal best, $\varphi_2$ is a random number that gives the size of the step towards the global best (the best particle in the neighborhood), and G is the fitness function which we are trying to minimize.
For the binary PSO, the component values of $x_i$, $p_i$ and $p_g$ are restricted to the set {0, 1}. The velocity $v_i$ is interpreted as the probability of a bit changing from 0 to 1, or from 1 to 0, when updating the position of the particles.

Therefore, the velocity vector remains continuous-valued, but the value of the velocity needs to be mapped from any real value to a probability in the range [0, 1]. This is done by using a sigmoid function, $s(v_{id}) = 1/(1 + e^{-v_{id}})$, to squash velocities into the [0, 1] range. Finally, the equation for updating positions is replaced by the probabilistic update equation:

$x_{id}(t) = \begin{cases} 0, & \rho_{id}(t) \ge s(v_{id}(t)) \\ 1, & \rho_{id}(t) < s(v_{id}(t)) \end{cases}$

where $\rho_{id}(t)$ is a random number drawn from a uniform distribution between 0 and 1.
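A compact sketch of one binary PSO update step as described above; the Vmax value used here is illustrative.

    import numpy as np

    rng = np.random.default_rng()

    def binary_pso_step(x, v, p_best, g_best, v_max=4.0):
        # x, v, p_best, g_best: arrays of shape [particles, bits]
        phi1 = rng.random(x.shape)                  # random step toward the personal best
        phi2 = rng.random(x.shape)                  # random step toward the neighborhood best
        v = v + phi1 * (p_best - x) + phi2 * (g_best - x)
        v = np.clip(v, -v_max, v_max)               # keep velocities in (-Vmax, +Vmax)
        s = 1.0 / (1.0 + np.exp(-v))                # sigmoid maps velocity to a probability
        x = (rng.random(x.shape) < s).astype(int)   # probabilistic bit update
        return x, v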

4.1 PSO-based Algorithm for the Power Control Problem
In the PSO-based method the author uses the basic binary PSO technique. The PSO algorithm is used to maximize a fitness function that takes into consideration all the objectives of the power control problem.
In the PSO-based algorithm, every particle in the swarm represents a power vector containing the power values to be transmitted by all mobile units, to be evaluated and improved by the algorithm. The particle representation is similar to the chromosome representation of the power vector in the GAME method, with q = 15 bits, which gives a good resolution for tuning the power of the mobile units. If $p_{max}$ = 1 watt, then the resolution by which the algorithm can tune the power of a unit is $1/2^{15} = 3.051758 \times 10^{-5}$ watts (approx. 30.5 µW). This method of representing the power vector inherently satisfies the maximum power constraint, as the value $p_{max}$ is always assigned to the string of 15 ones.
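A sketch of decoding one particle into a power vector under this q = 15 bit representation; the function and variable names are chosen here for illustration.

    import numpy as np

    Q, P_MAX = 15, 1.0   # 15 bits per user and 1 watt maximum power, as in the text

    def decode_powers(bits, n_users):
        # Map a bit string of length Q*n_users to per-user powers in [0, P_MAX];
        # the all-ones string maps exactly to P_MAX, satisfying the power constraint.
        b = np.asarray(bits).reshape(n_users, Q)
        weights = 2 ** np.arange(Q - 1, -1, -1)
        return P_MAX * (b @ weights) / (2 ** Q - 1)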


In this method the author first tried to use the same fitness function as the GAME method. That fitness function gave good results in terms of minimizing the power transmitted by the mobile units and making the value of $(E_b/N_0)$ of the received signals from all mobile units exceed $(E_b/N_0)^*$, but it failed to fulfill the objective of minimizing the near-far effect. This is because of the method it uses to handle this objective: the parameter $f_3$ gives credit to solutions whose received power values are close to each other, but it is set to zero for all solutions whose received power components diverge far from their mean value. Assume two solutions, the first with $\sigma_r = 0.3\,\bar{r}$ and the second with $\sigma_r = 3\,\bar{r}$. The first solution is better than the second one, but the $f_3$ parameter is set to zero in both cases. As a result, the second solution is not encouraged to update itself towards the first solution in order to minimize the near-far effect. To solve this, the PSO method uses a new fitness function, to be maximized, described below:
$F = \frac{1}{M} \sum_{i=1}^{M} T_i + \delta\, f(\sigma_r)$

where the second term $f(\sigma_r)$ is a decreasing function of the standard deviation $\sigma_r$ of the received power distribution.

The first term of the fitness function is the same as in the GAME method, but the second term gives credit to solutions with a small standard deviation of the received power distribution. This term is not zero for all particles and gives good results in minimizing the near-far problem. $\delta$ is a priority weight which indicates the relative importance of the near-far objective over the other objectives. To check the effectiveness of the PSO algorithm, the author first initialized the particles of the swarm using random bits. This is not the real case, because in a real system the mobile powers are updated from the power values of the last frame. The author then applied the PSO algorithm with the proposed fitness function to these randomly initialized particles [2]. The procedure of the proposed algorithm is shown in Fig.5.
4.2 Treating Updated Distances and Fading Conditions

In order to work in a real system, the algorithm must react to updates in the mobile users' positions and in the fading conditions of the environment. In a real system the power control information changes every frame period of 10 ms, and thus takes the updates in the system into consideration. To simulate these updates, it is assumed that users move with a maximum velocity of 5 km/hr and that the fading on each link may change from one frame to another as long as it follows the same probability distribution. Using these assumptions, the new positions of the users and the new fading values can be calculated every frame period of 10 ms. The author calculates the new link gains for all mobile users and then applies the PSO algorithm. The best solution found in the previous run is used to initialize the input vectors at the next control period, because the new solution after the updates is expected to lie close to the old solution in the search space. This procedure is described in Fig. 6.
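A self-contained sketch of this per-frame procedure; the mobility scale, the gain model and the stand-in optimizer (a one-step random search in place of the full binary PSO) are all toy assumptions used only to show the warm-start idea.

    import numpy as np

    rng = np.random.default_rng(0)

    def optimize(seed_powers, G):
        # Stand-in for the PSO run: perturb the warm-start solution and keep the best
        cand = np.clip(seed_powers + rng.normal(0.0, 0.05, (20, seed_powers.size)), 0.0, 1.0)
        scores = (cand * G).std(axis=1)             # toy score: spread of received powers
        return cand[scores.argmin()]

    def simulate_frames(n_frames=10, n_users=5, v_kmh=5.0, frame_s=0.01):
        d = rng.uniform(0.05, 1.0, n_users)         # distances in a unit-radius cell
        powers = np.full(n_users, 0.5)              # initial power vector
        step = v_kmh * frame_s / 10.0               # toy movement per 10 ms frame
        for _ in range(n_frames):
            d = np.clip(d + rng.uniform(-step, step, n_users), 0.01, 1.0)
            G = 10.0 ** ((-127.0 - 35.0 * np.log10(d) + rng.normal(0.0, 8.0, n_users)) / 10.0)
            powers = optimize(powers, G)            # warm start from the previous frame's best
        return powers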

5. CONCLUSION

Comparison of the GAME and PSO methods:

- Regarding the average transmitting power of the mobile units, the PSO algorithm reached solutions with much smaller transmitting power values to serve the same number of users. The author reports that the resulting values of the PSO algorithm are about 65% on average of those of the GA algorithm [1]. This means that the PSO algorithm searched the search space much better when the maximum number of iterations was fixed.
- Regarding the average received $(E_b/N_0)$, the results obtained by the GA algorithm are slightly better than those obtained by the PSO algorithm [1]. This was an expected result, because the solutions of the GA algorithm used greater transmitting power values and thus will probably achieve a better average received $(E_b/N_0)$. In spite of the GA results being better than the PSO results, the results of both algorithms are very close to each other.
- Regarding the outage probability, both algorithms achieved zero outage for a small number of users. As the number of users increases, outage appears; in general, the solutions of the GA algorithm had a greater outage probability than the solutions of the PSO algorithm.

REFERENCES:
[1] M. Moustafa, I. Habib and M. Naghshineh, "Genetic algorithm for mobiles equilibrium," in Proceedings of MILCOM 2000: 21st Century Military Communications Conference, vol. 1, pp. 70-74, Oct. 2000.
[2] Hassan M. Elkamchouchi, Hassan M. Elragal and Mina A. Makar, "Power control in CDMA system using particle swarm optimization," 24th National Radio Science Conference (NRSC 2007), 2007.
[3] R. Esmailzadeh and M. Nakagawa, TDD-CDMA for Wireless Communications, Artech House, 2003.
[4] Juha Korhonen, Introduction to 3G Mobile Communications, Artech House, 2003.
[5] Abdurazak Mudesir, "Power Control Algorithm in CDMA Systems," International University Bremen.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of IEEE International Conference on Neural Networks, Piscataway, USA, pp. 1942-1948, 1995.
[7] J. Kennedy and R. C. Eberhart, Swarm Intelligence, San Francisco: Morgan Kaufmann Publishers, 2001.
[8] M. G. Omran, "Particle swarm optimization methods for pattern recognition and image processing," Ph.D. dissertation, University of Pretoria, Pretoria, South Africa, Nov. 2004.
[9] C. U. Saraydar, N. Mandayam and D. Goodman, "Pareto efficiency of pricing-based power control in wireless data networks," in WCNC, 1999.
[10] C. W. Sung and W. S. Wong, "Power control for multirate multimedia CDMA systems," in Proc. of IEEE INFOCOM, vol. 2, 1999.
[11] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Addison-Wesley, Reading, MA, 1995.
[12] R. D. Yates, "A framework for uplink power control in cellular radio systems," IEEE Journal on Selected Areas in Communications.
[13] Abdurazak Mudesir, "Power Control Algorithm in CDMA Systems," submitted as a guided research at International University Bremen.






















Analysis of the Factors Influencing the Motivation to Learn Automotive Electrical Material for Students of Class XI SMK YP Delanggu Klaten, Central Java, Indonesia (An Empirical Study)
Joko Rochmadi
Lecturer, Akademi Teknologi AUB Surakarta, Central Java, Indonesia
Abstract - The purpose of this study is to determine the influence of interest and knowledge, school facilities and infrastructure, and the way teachers teach on the motivation to study vocational automotive electrical material at SMK YP Delanggu Klaten, both partially and simultaneously. The study was conducted at SMK YP Delanggu Klaten. The study population is all class XI students majoring in automotive, amounting to approximately 128 students; a sample of 40 students was taken using purposive sampling techniques. The data analysis techniques use multiple linear regression analysis, tests of the accuracy of the model and classical assumption tests. The results of this study are: (1) there is a partial influence of students' interest and knowledge on students' motivation (0.036 < 0.05), (2) there is a partial influence of school facilities and infrastructure on students' motivation (0.015 < 0.05), (3) there is no partial influence of the way teachers teach on students' motivation (0.938 > 0.05), and (4) there is a simultaneous influence of the independent variables (interest and knowledge of students, school facilities and infrastructure, and the way teachers teach) on the dependent variable (F = 17.253). R Square = 59%, meaning that the change in students' motivation is influenced by the variables in the study by 59% in relative terms, and the remaining 41% is influenced by variables outside the research. Effectively, the research variables affect the change in students' motivation as follows: interest and knowledge by 28.26%, school facilities and infrastructure by 29.88%, and the way teachers teach by 0.82%.
Keywords: interest and knowledge, facilities and infrastructure, the way teachers teach, motivation to learn, automotive electrical

INTRODUCTION
Motivation encourages somebody to do something in order to attain the goals they want to reach. Motivation determines the success or failure of a student's learning activity; learning without motivation makes it difficult to achieve success optimally (Hamalik, 2005).
Automotive electrical material needs to be studied so that students can follow the development of the electrical technology of motorbikes and cars as important means of transport, because electricity is one of the main supporting systems in the vehicle, in addition to the operational system. In automotive electricity two things must be understood: electricity is something abstract, so students should know the nature and the laws of electricity; and for the electrical system to function there must be control of the electrical components, so students have to understand the complete set of automotive electrical components.
Automotive electricity is a subject that must be taken by vocational high school students in the automotive engineering program. The subject has the purpose that students know and comprehend automotive electrical technology, which is progressing very rapidly along with the development of EFI (Electronic Fuel Injection) technology. In fact, however, students tend to be less motivated in automotive electricity at school, because they regard automotive electricity as abstract and hard, which can be fatal for their learning.
A motivated person is one who makes a substantial effort to support the output of his work unit, co-workers and the organization in which he works. Motivation means the granting of a motive, the evocation of a motive, or a condition that gives rise to an impulse. Motivation is a factor that encourages people to act in certain ways (Martoyo, 2000).
According to Sabri (1996), a student who has a strong and distinct motivation in the learning process will surely be assiduous and succeed in learning. Motivation has three functions: (1) it pushes people to act in order to achieve their goals, (2) it determines the direction of a deed towards the goal to be accomplished, and (3) it selects deeds, so that the work of a person who has motivation is always selective and directed toward the goal to be achieved. It can be concluded that motivation pushes people to carry out or perform something and determines the result of their work.

REVIEW OF LITERATURE
Interest and Knowledge
Interest is a desire settled in the student's self that is directed at a certain object as a need, and is then embodied in a real action through attention to the object, about which the student searches for information as insight for himself. According to Dimyati and Mudjiono (2000), several things can affect students' motivation to learn, including: (1) the ideals and aspirations of students, (2) the ability of students, (3) the condition of students, (4) environmental conditions and (5) the efforts made in teaching students.

Facilities and Infrastructure
School facilities are everything that directly supports the smooth process of teaching and learning. Infrastructure is everything that indirectly supports the success of teaching and learning. Complete infrastructure raises students' motivation in learning and can help teachers in the implementation of the learning process (Suryosubroto, 2004).
The Way Teachers Teach
The ability to teach does not only serve the teacher as a professional educator; the teacher also has humanitarian and sociological duties. However, the essential ability, which deals with the main task of the teacher, must be possessed by every teacher as lecturer and pedagogue. According to Bafadal (2004) the abilities of a professional teacher include: (1) the ability to plan teaching, (2) the ability to teach, including assessment, and (3) the ability to maintain personal relationships with students.
Previous Research
Otoshi and Heffernan (2011) researched the motivation of school students with respect to their desire for a chosen lesson, English, against the variables of autonomy, relatedness and competence, finding a cause-and-effect relation with intrinsic motivation factors, with only the competence variable influencing the dependent variable for English.
Zadok, Leiba and Nachmias (2011) researched children's motivation in playing online games: practicing or testing? They studied the differences using the log files of online games; online computers and cyber cafes usually keep billing or login files, and the researchers explored the use of these login records. The aim of that research was to find out the motivation of school children in their learning behavior: whether playing the game is a form of motivation to practice, or only to play, or to measure their own aptitude (self-test).
Bernaus and Gardner (2008) researched the influence of environmental factors, both teachers and students, on the motivation to improve performance in the English language. The variables or factors that affect motivation include integrativeness, attitudes toward the learning situation, instrumental orientation, parental encouragement and language anxiety. All of these factors impact the scholastic achievement motivation of students learning English.
Nilsen (2009) researched the relationship of student behavior in learning as influenced by the factors of motivation, self-efficacy and value expectation. The model included, among others, the motivation factors of university students' academic behavior in learning and instruction. The research results showed that a student's motivation to learn is influenced by internal factors: self-efficacy and the values of hope toward the major or course, which increase learning.
Lee and Yuan (2010) explored learning motivation, concerning the effect of the lecturer and the quality of learning assistants. The factors influencing the increased capacity of student assistants and lecturers in learning are affected by the students' motivation to study. The conclusion of that research is that motivation is affected by the field of study, the ability in teaching and the quality of peer-assisted learning.

STATEMENT OF THE PROBLEM
1. Does the factor of students' interest and knowledge have an influence on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten?
2. Does the factor of school facilities and infrastructure have an influence on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten?
3. Does the factor of the way teachers teach have an influence on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten?
4. Do the factors of interest and knowledge, school facilities and infrastructure, and the way teachers teach simultaneously have an influence on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten?

OBJECTIVES OF THE RESEARCH
The objectives to be achieved in this research are to find out the influence of:
1. The factor of students' interest and knowledge on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten.
2. The factor of facilities and infrastructure on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten.
3. The factor of the way teachers teach on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten.

4. The factors of interest and knowledge, school facilities and infrastructure, and the way teachers teach simultaneously on the motivation of class XI students in following automotive electricity lessons at SMK YP Delanggu Klaten.

CONTRIBUTION
1. For researchers, this study can be used to know and comprehend the factors that affect the motivation of class XI students in attending automotive electricity lessons at SMK YP Delanggu Klaten.
2. For teachers, it can be used as a reference for increasing the quality of students' willingness to learn automotive electricity in their schools and as material to add to their intellectual wealth.
3. For schools, it is input for SMK YP Delanggu Klaten in the implementation of the automotive electricity learning process: to make changes, improve and maintain the learning strategy so that the knowledge and skills of students in automotive electrical material increase.

RESEARCH METHODOLOGY
Type and Design of Research
This research is descriptive quantitative research into the factors of learning through a population study. The design used is associative or correlational research, which is used to seek the relation or influence between independent variables and a dependent variable.
Population and Sampling
The population in this research is all the students of automotive engineering at SMK YP Delanggu Klaten, 128 students. To determine the size of the sample, based on Arikunto (2005) the sample can be up to 30% of the population; this research set the sample at 30% x 128 = 38.4, rounded up to 40 students.
Techniques of Data Collection
The technique of data collection uses a closed questionnaire as the research instrument. A questionnaire is a way of collecting data using a checklist or list of questions prepared and arranged in such a way that respondents fill out or mark the answer sheet provided. This questionnaire method was used to obtain primary data, i.e. data about the intrinsic factor, consisting of students' interest in and knowledge of automotive electricity, and data on the extrinsic factors, consisting of the school means and the way teachers teach.
Technique of Data Analysis
The analysis uses correlation and multiple linear regression testing, because the relationships between the variables are linear: between the influencing variables (interest and knowledge, school facilities and infrastructure, and the way teachers teach) and the influenced variable (the motivation of students in improving automotive learning). The multiple linear regression model used is:

Y = c + a1·X1 + a2·X2 + a3·X3 + e
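A minimal sketch of fitting this model; the study itself used SPSS 16, so the statsmodels call and the randomly generated stand-in scores below are illustrative only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 40                                   # sample size used in the study
    x1 = rng.integers(12, 61, n)             # interest and knowledge (12-item scale, placeholder data)
    x2 = rng.integers(10, 51, n)             # facilities and infrastructure (10 items, placeholder)
    x3 = rng.integers(10, 51, n)             # the way teachers teach (10 items, placeholder)
    y = 6.8 + 0.33 * x1 + 0.42 * x2 + 0.01 * x3 + rng.normal(0, 3, n)  # toy motivation scores

    X = sm.add_constant(np.column_stack([x1, x2, x3]))
    model = sm.OLS(y, X).fit()               # Y = c + a1*X1 + a2*X2 + a3*X3 + e
    print(model.params, model.rsquared, model.fvalue)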
Test of Normality
This testing aims to test whether, in the regression model, the independent variable, the dependent variable, or both have a normal distribution. A good regression model has a data distribution that is normal or close to normal. To test normality this study uses the One-Sample Kolmogorov-Smirnov test. The basis for the decision is that if the significance value is greater than 0.05, the regression model meets the assumption of normality, and vice versa (Gujarati, 2003).
Linearity
This test is important because it can be used to see whether the specification of the model used is correct or not. This research uses the Ramsey test, which compares the value of F_count with F_table. If F_count < F_table, then the null hypothesis, which states that the specification of the linear form is true, cannot be rejected (Gujarati, 2003).
Multicollinearity
To analyze whether there is a multicollinearity problem, the Variance Inflation Factor (VIF), tolerance and correlation between independent variables are quantified. The guideline for the regression model is that multicollinearity is indicated by a low tolerance value, equivalent to a VIF above 10; a low tolerance value corresponds to a high VIF (VIF = 1/tolerance) and shows high collinearity (Gujarati, 2006).
Heteroskedasticity
Testing for heteroskedasticity in this research uses the Glejser test. The Glejser test regresses the absolute residual values on the independent variables; if an independent variable significantly affects the absolute residual values, then the symptoms of heteroskedasticity are present (Gujarati, 2006).
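Sketches of these assumption tests using scipy/statsmodels equivalents of the SPSS procedures, continuing from the fitted model in the previous sketch (the Ramsey test is omitted here):

    from scipy import stats
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    resid = model.resid

    # Normality: one-sample Kolmogorov-Smirnov test on the standardized residuals
    print(stats.kstest((resid - resid.mean()) / resid.std(), 'norm'))   # p > 0.05 -> normality holds

    # Multicollinearity: VIF per regressor (VIF = 1/tolerance; VIF > 10 signals a problem)
    print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])

    # Heteroskedasticity (Glejser): regress |residuals| on the regressors;
    # significant coefficients indicate heteroskedasticity
    print(sm.OLS(np.abs(resid), X).fit().pvalues)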


RESULTS
Descriptively (Table 1), the interest and knowledge of students in automotive electrical material has an average questionnaire score of 44.12 (the highest possible total being 5 x 12 statement items = 60 and the lowest 1 x 12 items = 12), showing qualitatively that students' interest in and knowledge of automotive electrical material is on average high, with a standard deviation of 7.275; scores thus typically lie between 44.12 - 7.275 = 36.845 and 44.12 + 7.275 = 51.395. The interest and knowledge of students in automotive electrical material is therefore on average quite high, with a score higher than 30 out of the highest total score of 60.

Table 1: Description of Research Data

Variable | Average | Standard Deviation
Students' interest in and knowledge of automotive electrical material | 44.12 | 7.275
Supporting school facilities and infrastructure | 36.52 | 6.114
Teachers' teaching of automotive electrical material and practice | 36.25 | 6.246
Students' motivation in studying automotive electricity | 37.25 | 6.105
(Source: Primary data, 2013)

Students' perception of the facilities and infrastructure, obtained from the questionnaire, has an average score of 36.52. There were 10 statement items on a 1-5 scale, so the highest possible total score is 50 and the lowest is 10. Students' perception of the facilities and infrastructure, 36.52, is thus higher than 25 (50% of the total score of 50). The score of 36.52 can therefore be interpreted qualitatively as more than sufficient, approaching high or moderately high, with a standard deviation of 6.114; scores typically lie between 36.52 - 6.114 = 30.406 and 36.52 + 6.114 = 42.634. The students' perception of the facilities and infrastructure is thus quite high: students feel that the facilities and infrastructure at the school support their needs, both as means of learning and as means of practice.
Descriptively (Table 1), the way teachers teach automotive electrical material has an average questionnaire score of 36.25 (highest possible total 5 x 10 items = 50, lowest 1 x 10 items = 10), showing qualitatively that the way teachers teach automotive electricity is rated on average high, with a standard deviation of 6.246; scores typically lie between 36.25 - 6.246 = 30.004 and 36.25 + 6.246 = 42.496. Thus the way teachers teach automotive electrical material is on average rated high, with a score higher than 25 out of the highest total score of 50.
Descriptively (Table 1), motivation toward automotive electrical material has an average questionnaire score of 37.25 (highest possible total 5 x 10 items = 50, lowest 1 x 10 items = 10), indicating qualitatively that students' motivation to learn automotive electricity is on average high, with a standard deviation of 6.105; scores typically lie between 37.25 - 6.105 = 31.145 and 37.25 + 6.105 = 43.355. Thus students' motivation to learn automotive electrical material is on average quite high, with a score considerably higher than 25 out of the highest total score of 50.
Multiple Linear Regression

Table 2: Results of Multiple Linear Regression Testing

Variable | Coefficient | t count | Sig.
Constant | 6.818 | - | -
Interest in and knowledge of automotive electrical material (X1) | 0.334 | 2.183 | 0.036
Supporting school facilities and infrastructure (X2) | 0.417 | 2.559 | 0.015
The way teachers teach automotive electrical material or practice (X3) | 0.013 | 0.079 | 0.938
F count = 17.253, Sig. = 0.001; R Square = 0.590; Adjusted R Square = 0.556
(Source: Primary data, 2013)


In accordance with the results of testing in SPSS 16 for Windows, as seen in Table 2: Y = 6.818 + 0.334X1 + 0.417X2 + 0.013X3 + e. The constant of 6.818 means that if, within the research model, the three variables above (X1, X2 and X3) had no influence, students' learning motivation would have a fixed value of 6.818. The regression coefficient of X1 = 0.334 means that if motivation were affected only by students' interest in and knowledge of automotive electricity (the other variables considered fixed or without effect), a change of 1 in interest and knowledge would change students' learning motivation by 0.334.
The regression coefficient of X2 = 0.417 means that if motivation were affected only by the facilities and infrastructure (the other variables considered fixed or without effect), a change of 1 in facilities and infrastructure would change students' learning motivation by 0.417.
The regression coefficient of X3 = 0.013 means that if motivation were affected only by the way teachers teach (the other variables considered fixed or without effect), a change of 1 in the way of teaching would change students' learning motivation by 0.013.
The t-test of the interest and knowledge variable (X1) gives t_count (2.183) > t_table (2.021), so H0 is rejected, meaning students' interest and knowledge affect students' learning motivation. The t-test of the facilities and infrastructure variable (X2) gives t_count (2.559) > t_table (2.021), so H0 is rejected, meaning school facilities and infrastructure affect students' learning motivation. The t-test of the way-of-teaching variable (X3) gives t_count (0.079) < t_table (2.021), so H0 is accepted, meaning the way the teacher teaches does not affect students' learning motivation. Based on the F-test, F_count (17.253) > F_table (2.84), so H0 is rejected, meaning there is a significant simultaneous influence of students' interest and knowledge (X1), facilities and infrastructure (X2) and the teachers' way of teaching (X3) on students' learning motivation (Y).
Based on the analysis, the R-square value obtained is 0.590 (Table 2), meaning the variables of students' interest and knowledge, school facilities and infrastructure, and the way teachers teach affect students' learning motivation by 59.0%, while the rest (41.0%) is influenced by other factors not incorporated in this research, for example cost or the quality of teachers, or other factors highly correlated with students' learning motivation.

DISCUSSION
The research was conducted at SMK YP Delanggu Klaten, a vocational school whose automotive engineering practice is supported by sufficient equipment. The variables of interest and knowledge, school facilities, and the teachers' way of teaching contribute to students' motivation to learn in the automotive electricity subject. The overall contribution of the influencing variables to students' learning motivation in automotive electricity in this research is indicated by the simultaneous influence, F_count = 17.253, significant at the α = 0.05 standard (p = 0.001).
The simultaneous influence of interest and knowledge, school infrastructure and facilities, and the way teachers teach supports the previous research by Zadok, Leiba and Nachmias (2011), whose results concerned practice, game or test: exercises, games and tests all affect the interest of online game users in playing. The similarity with Zadok, Leiba and Nachmias (2011) lies in the dependent variable; the variable they examined is the motivation of online game users, the majority of whom are school children who seek dexterity in game play. The difference lies in what affects motivation: games and tests there, whereas in this study it is interest and knowledge, school facilities and infrastructure, and the way teachers teach.
Contribution of interest and knowledge to students' learning motivation in automotive electricity: interest and knowledge have a positive and significant effect on students' learning motivation, known from t_count = 2.183 (p = 0.036), significant at α = 0.05. The regression coefficient of 0.334 shows that increased knowledge and interest of students will increase their learning motivation by 0.334 units, provided no other variables affect it besides students' interest in and knowledge of automotive electricity. This supports the research of Nilsen (2009), in which self-efficacy affects the academic motivation of students in improving their learning activities; self-efficacy is a condition of motivating oneself, a desire or interest to be able or willing to do something on the basis of such knowledge.
Contribution of school facilities and infrastructure to students' learning motivation in automotive electricity: the influence of school facilities and infrastructure on motivation is significant, with t_count = 2.559 (p = 0.015) at the α = 0.05 significance level. The regression coefficient of 0.417 points out that when the school's facilities and means increase, students' learning motivation will improve by 0.417 units, showing that the school's facilities and infrastructure have a positive and significant impact on students' learning motivation. This supports the research of Lee and Yuan (2010), in which learning motivation is affected by the total quality of teaching, one of whose variables is the school's learning equipment or means.

Contribution of the way teachers teach to students' learning motivation in automotive electricity: the way teachers teach has no effect on students' learning motivation, or can be said to have an influence that is not significant. This is based on t_count = 0.079, insignificant (p = 0.938) at the α = 0.05 level of significance. Research of a similar kind on how teachers teach was conducted by Bernaus and Gardner (2008), on the influence of environmental factors, both teachers and students, on achievement motivation in English; in Bernaus and Gardner (2008) the influence of the teacher in teaching is an indirect influence through instrumental orientation. The result here is that the way teachers teach, from the viewpoint of students' perception, turns out to have no significant effect on their learning motivation. However, the three variables simultaneously affect students' learning motivation: the variation in students' learning motivation that can be explained by variation in these three variables is (R Square) = 0.590, or 59%. The three variables can affect 59% of students' learning motivation, while the remaining 41% is affected by other factors not included in this research.

CONCLUSION
1. There is a positive and significant influence of the factor of students' interest and knowledge on their learning motivation in attending automotive electricity lessons. From the hypothesis test, the significance value (0.036 < 0.05) and the partial value t_count (2.183) > t_table (2.021) mean H0 is rejected: students' interest and knowledge affect their learning motivation in attending automotive electricity lessons.
2. There is a positive and significant influence of the factor of facilities and infrastructure on students' learning motivation in attending automotive lessons. From the hypothesis test, the significance value (0.015 < 0.05) and the partial value t_count (2.559) > t_table (2.021) mean H0 is rejected: the facilities and infrastructure affect students' learning motivation in attending automotive electricity lessons.
3. There is no significant influence of the factor of the way teachers teach on students' learning motivation in following automotive electricity lessons. This can be seen from the hypothesis test, where the significance value (0.938 > 0.05) and the partial value t_count (0.079) < t_table (2.021) mean H0 is accepted: students' perception of the way the teacher teaches does not affect their motivation to learn from automotive electricity lessons.
4. There is a positive and significant influence of the factors of students' interest and knowledge, school facilities and infrastructure, and the way teachers teach together on learning motivation in following automotive electricity lessons. This can be seen from the hypothesis test, where the significance value (0.001 < 0.05) and the simultaneous value F_count (17.253) > F_table (2.84) mean H0 is rejected: interest and knowledge, school facilities and infrastructure, and the way teachers teach simultaneously affect the motivation to learn in following automotive electricity lessons.

SUGGESTIONS
1. By understanding the interest, ability, prior knowledge and background of students, the teacher can devise a strategy, choosing proper concepts and learning methods, so that students will be motivated to follow the subject matter delivered by the teacher.
2. Facilities and infrastructure are educational media that help teachers in teaching. The teacher delivers the subject matter assisted by lesson aids and props so the lesson is better; in other words, the teacher in the classroom cannot simply be superseded by the media. According to the perceptions in this research the facilities and infrastructure are already sufficient; however, the facilities should consider function and conformity with the material taught, in particular conformity with technological development.
3. Since, according to students' perception in this research, the way the teacher teaches does not affect motivation, teachers of electrical material need to increase their competence and skill so that students are always motivated in learning automotive electricity.
4. In general, the results of this research show that students' motivation to learn automotive electrical material is good. But there is a factor that does not affect motivation, particularly the way teachers teach, which according to students' perception has no influence; other researchers should study additional factors concerning teacher performance and means of teaching.

REFERENCES:
1. Alisuf, Sabri. 1996. Psikologi Pendidikan Berdasarkan Kurikulum Nasional, Jakarta: CV. Pedoman Ilmu Jaya.
2. A.M, Sardiman. 2006. Interaksi dan Motivasi Belajar Mengajar. Jakarta: Rajawali Pers.
3. Arikunto, Suharsimi. 2006. Prosedur Penelitian Suatu Pendekatan Praktik. Jakarta: Rineka Cipta.
4. Asep, Sudjana. 2004. Paradigma Baru Manajemen Ritel Modern. Yogyakarta: Graha.Ilmu.

5. Bafadal, Ibrahim. 2006. Peningkatan Profesionalisme Guru Sekolah Dasar (SD), Jakarta: Bumi Aksara.
6. Darajat, Zakiah. 1995. Metodik Khusus Pengajaran Agama Islam Cet. ke-1. Jakarta: Bumi Aksara.
7. Dharma, Surya. 2009. Manajemen Kinerja Falsafah Teori dan Penerapannya. Yogyakarta: Pustaka Pelajar
8. Dimyati & Mudjiono. 2006. Belajar dan Pembelajaran. Jakarta : PT Rineka Cipta.
9. Djojonegoro, Wardiman. 1999. Pengembangan Sumber Daya Manusia Melalui SMK. B. Jakarta: Balai Pustaka.
10. Gujarati. 2006. Dasar-dasar Ekonometrika Jilid 2. Jakarta: Erlangga
11. Hamalik, Oemar. 2005. Kurikulum dan Pembelajaran. Jakarta: Bumi Antariksa.
12. Hasibuan, H. Malayu SP, 2006. Manajemen Sumber Daya Manusia. Jakarta: Bumi Aksara.
13. Makmun S., Abin & Saud S. Udin. 2006. Perencanaan Pendidikan Suatu Pendekatan Komprehensif. Bandung. PT. Remaja
Rosdakrya.
14. Martoyo, S. 2000. Manajemen Sumber Daya Manusia. Edisi 4, Yogyakarta: BPFE.
15. Nawawi, H. Hadari, 2001. Manajemen Sumber Daya Manusia untuk Bisnis yang Kompetitif. Yogyakarta.: Gajah Mada University
Press.
16. Rismayani, 2007. Usaha Tani dan Pemasaran Hasil Pertanian, Cetakan I. Medan: USU Press.
17. Setyaningsih, Sri Oktafia. 2008. Kajian Tentang Problem Based Learning (PBL) Terhadap Hasil Belajar dan Keterampilan
Pemecahan Masalah Siswa Kelas XI SMAN 7 Malang Pada Materi Pokok Larutan Penyangga. Program Studi Pendidikan Kimia
FMIPA Universitas Negeri Malang.
18. Sirojuzilam. 2008. Disparitas Ekonomi dan Perencanaan Regional. Medan: Pustaka Bangsa Press.
19. Slameto. 2003. Belajar dan Faktor-faktor yang Mempengaruhinya. Jakarta: Rineka Cipta
20. Sugiyono. 2002. Metode Penelitian Administrasi. Bandung : Alfabeta.
21. Sirodjuddin, Ardan. 2008. SMK Lebih Menjanjikan Masa Depan Dibandingkan Dengan SMA. (online). http://ardansirodjuddin.wordpress.com/2008/06/03/smk-lebih-menjanjikan-masa-depan-di-banding-sma/. Accessed 25 June 2011, 22:10-22:30.
22. Suryosubroto, 2004. Manajemen Pendidikan di Sekolah. Jakarta: Rineka Cipta.
23. Tuu. 2004. Peran Disiplin Pada Perilaku dan Prestasi Siswa. Jakarta: PT. Gramedia Pustaka Utama.
24. Wahyosumidjo, 1997. Kepemimpinan dan Motivasi, Cetakan III, Ciawi-Bogor: Ghalia Indonesia














Coherence among Electrocardiogram and Electroencephalogram Signals as a Non-Invasive Tool of Diagnosis
Abhishek Chaudhary, Bhavana Chauhan, Gavendra Singh, Monika Jain
Dept. of Electronics and Instrumentation Engineering, Galgotias College of Engineering and Technology, Greater Noida
E-mail: abhiguru.005@gmail.com
Abstract - The degree of association or coupling of the frequency spectra between ECG (electrocardiogram) and EEG (electroencephalogram) signals at particular frequencies is presented in this paper. The degree of association or coupling of frequency spectra between two signals is called coherence. The ECG and the EEG are very important parameters when it comes to the diagnosis and treatment of problems related to the human heart and brain, and for this reason the processing of such signals is most important. Continuous, non-invasive, low-cost and accurate monitoring of the functioning of the heart and brain has proven to be invaluable in various diagnostic and clinical applications. In this paper the coherence between simultaneously recorded ECG and EEG signals of four different subjects is presented. The EEG signals are acquired from four different positions: the Frontal (Fp1, Fp2), Central (C3, C4), Parietal (P3, P4) and Occipital (O1, O2) brain regions. Coherence is analysed by obtaining the magnitude squared coherence at certain frequency bands (very low, low and high) using the Welch method of power spectrum estimation.
Keywords - Auto-Power Spectral Density, Magnitude Squared Coherence (MSC), Welch Method, ECG, EEG, Coherence, Parietal, Occipital, Frontal, Cerebellum.
1. Introduction
An electrocardiogram or ECG is used worldwide today as a relatively simple tool for diagnosing conditions of the heart. An ECG is a recording of the small electric waves generated during the heartbeat. Specialized cells that produce electricity are called natural pacemaker cells; these cells produce electricity by quickly changing their electrical charge from positive to negative and back again. The first electric wave in a heartbeat is initiated by the sinoatrial node, situated at the top of the heart. Heart muscle cells have the ability to spread their electric charge to adjacent heart muscle cells, so this initial wave is enough to start a chain reaction. An electroencephalogram or EEG signal reflects the electrical activity of the human brain. Neurons or nerve cells transmit information throughout the body electrically; they create electrical impulses by the diffusion of sodium, calcium and potassium ions across the cell membranes. When a person is thinking, watching television or reading, different parts of the brain are stimulated, creating different electrical signals that can be monitored by an EEG. There are five major brain waves distinguished by their different frequency ranges and amplitudes. These frequency bands, from low to high frequencies respectively, are called delta (δ), theta (θ), alpha (α), beta (β) and gamma (γ), and are seen in different states of mind.
Coherence is the degree of association of the frequency spectra between the ECG and EEG signals at a particular frequency. The magnitude squared coherence (MSC) estimate between two signals x (ECG signal) and y (EEG signal) is given by:

$C_{xy}(f) = \frac{|P_{xy}(f)|^2}{P_{xx}(f)\, P_{yy}(f)}$   (1)

Here $C_{xy}(f)$ is the magnitude squared coherence estimate between the two signals x (ECG signal) and y (EEG signal). The MSC lies between 0 and 1: a value close to 1 at a given frequency means the two signals are strongly coupled at that frequency, while an MSC of zero means there is no linear relationship between the two signals at that frequency. The coherence phase is given as

$\theta_{xy}(f) = \tan^{-1}\left( \frac{\mathrm{Im}\, P_{xy}(f)}{\mathrm{Re}\, P_{xy}(f)} \right)$   (2)

where $P_{xx}(f)$ is the power spectral estimate of the x (ECG) signal, $P_{yy}(f)$ is the power spectral estimate of the y (EEG) signal, and $P_{xy}(f)$ is the cross power spectral estimate of the ECG and EEG signals. Estimation of coherence among physiological signals is used as a low-cost and accurate non-invasive tool for diagnosis of the brain [1-5]. In this research work, ECG and EEG signals are acquired from four different subjects. The length of these signals is five seconds and they are sampled at 1000 samples per second. The coherence between the ECG and EEG is analysed using the Welch method. The main objective of this work is to find the region of the brain which has the maximum association with the heart. Coherence between ECG and EEG is used for differentiating between normal and abnormal brain activity, serving as a cost-effective tool of non-invasive diagnosis.

2. Method

The technique used in this research work is based on the classical Welch method of power spectrum estimation, which utilizes 50% overlap of the data segments along with the use of a Hamming window [6,7].
Using the Welch method, the signals are divided into K overlapping segments; if $X_k(f)$ and $Y_k(f)$ denote the windowed Fourier transforms of the kth segments of x and y, the spectral estimates are (up to a normalization constant):

PSD of x (ECG signal): $P_{xx}(f) = \frac{1}{K} \sum_{k=1}^{K} |X_k(f)|^2$   (3)

PSD of y (EEG signal): $P_{yy}(f) = \frac{1}{K} \sum_{k=1}^{K} |Y_k(f)|^2$   (4)

Cross power spectral density (CPSD) of x and y: $P_{xy}(f) = \frac{1}{K} \sum_{k=1}^{K} X_k^{*}(f)\, Y_k(f)$   (5)

or, equivalently, $P_{yx}(f) = \frac{1}{K} \sum_{k=1}^{K} X_k(f)\, Y_k^{*}(f)$   (6)

MATLAB is utilised for the implementation of the Welch method. The windowing technique in the Welch method is changed: in place of the default Hamming window, a Kaiser window is utilised. In the Welch method the signals are divided into K segments with 50% overlap of segments. For this research work a 1024-point FFT is used [8,9].
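The authors implement this in MATLAB; an equivalent Python sketch using scipy.signal.coherence with a Kaiser window, 50% overlap and a 1024-point FFT is shown below (the Kaiser beta, the segment length and the random stand-in signals are assumptions):

    import numpy as np
    from scipy.signal import coherence

    fs = 1000                                 # 1000 samples/second, 5-second records
    rng = np.random.default_rng(0)
    ecg = rng.standard_normal(5 * fs)         # placeholder for the recorded ECG signal
    eeg = rng.standard_normal(5 * fs)         # placeholder for one EEG channel

    f, Cxy = coherence(ecg, eeg, fs=fs,
                       window=('kaiser', 8.0),  # Kaiser window (beta assumed)
                       nperseg=512,             # segment length (assumed)
                       noverlap=256,            # 50% overlap, as in the paper
                       nfft=1024)               # 1024-point FFT, as in the paper
    print(f[np.argmax(Cxy)], Cxy.max())         # frequency of strongest ECG-EEG coupling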

3. Analysis and Results
The coherence between the ECG and the corresponding EEG signals acquired from the four prominent brain regions, named the Central (C3, C4), Frontal (Fp1, Fp2), Occipital (O1, O2) and Parietal (P3, P4) regions, is investigated. All data are collected from healthy subjects in the age group of 21-36 years at a sampling rate of 1000 samples/second.

Figure 1. Box plot of MSC of the first subject (coherence, 0-1, for the cerebellum, occipital, frontal and parietal regions).
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

361 www.ijergs.org

From Figure 1, the maximum mean magnitude squared coherence, 0.1501, is at the cerebellum. The numbers of coherence points
greater than 0.5 at the cerebellum, frontal, occipital and parietal regions are 14, 15, 11 and 13 respectively.

Figure 2. Box plot of MSC of the second subject.
From Figure 2, the maximum mean magnitude squared coherence, 0.1691, is at the cerebellum. The numbers of coherence points
greater than 0.5 at the cerebellum, frontal, occipital and parietal regions are 14, 8, 14 and 14 respectively.

Figure 3. Box plot of MSC of the third subject.
From Figure 3, the maximum mean magnitude squared coherence, 0.1510, is at the cerebellum. The numbers of coherence points
greater than 0.5 at the cerebellum, frontal, occipital and parietal regions are 12, 10, 16 and 15 respectively.

Figure 4. Box plot of MSC of the fourth subject.
From Figure 4, the maximum mean magnitude squared coherence, 0.1413, is at the parietal region. The numbers of coherence points
greater than 0.5 at the cerebellum, frontal, occipital and parietal regions are 6, 7, 9 and 7 respectively.

Table 1. Final estimated result

Subject | Mean of coherence (Cerebellum / Frontal / Occipital / Parietal) | Coherence points > 0.5 (Cerebellum / Frontal / Occipital / Parietal)
1st | 0.1326 / 0.1266 / 0.1484 / 0.1501 | 4 / 5 / 11 / 13
2nd | 0.1394 / 0.1446 / 0.1474 / 0.1578 | 9 / 8 / 8 / 19
3rd | 0.1510 / 0.1433 / 0.1491 / 0.1542 | 12 / 10 / 12 / 15
4th | 0.1351 / 0.1332 / 0.1413 / 0.1359 | 6 / 9 / 6 / 7


From Table 1 it is clear that the number of coherence points > 0.5 is maximum at the parietal region for the first three subjects, and
the mean coherence is also maximum at the parietal region for the first three subjects. The mean coherence at the frontal region is
minimum for three of the subjects. For the first subject the maximum mean coherence is 0.1501 at the parietal region and the
minimum is 0.1266 at the frontal region. For the second subject the maximum mean coherence is 0.1578 at the parietal region and the
minimum is 0.1394 at the cerebellum. For the third subject the maximum mean coherence is 0.1542 at the parietal region and the
minimum is 0.1433 at the frontal region. For the fourth subject the maximum mean coherence is 0.1413 at the occipital region and
the minimum is 0.1332 at the frontal region.
4. Acknowledgements

It gives us a great sense of pleasure to present this research paper on the basis of the B.Tech project undertaken during the B.Tech
final year. We owe a special debt of gratitude to our supervisors Dr. Monika Jain (Prof. and HOD, Dept. of EIE) and Mr. Gavendra
Singh (Asst. Professor, EIE), Department of Electronics and Instrumentation Engineering, Galgotias College of Engineering &
Technology, for their constant support and guidance throughout the course of our work. Their sincerity, thoroughness and
perseverance have been a constant source of inspiration for us. It is only through their cognizant efforts that our endeavours have
seen the light of day.
We are also thankful to Dr. R. Sundaresan, Director of the college, for providing support and all the facilities for completing this
project.
5. Conclusion, application and future scope

ECG and EEG signals are completely coherent if the magnitude squared coherence is equal to 1, and if the MSC is equal to zero the
two signals are independent of each other. In this research work the MSC between the ECG and EEG signals is non-zero for all four
subjects, which means there is an association between the ECG and EEG signals. It may be possible to acquire EEG signal
information from the ECG signal by PSD estimation using the Welch method. Estimation of coherence among physiological signals
is used as a tool for analysing the association between two physiological organs. Coherence among different ECG and EEG signals
is helpful in finding the difference between normal and abnormal mental activity. It is also helpful in finding a defective brain
region: a brain region with a large deviation of its mean coherence from the standard mean coherence is termed a defective region.
After identifying the defective brain region, drugs can be delivered to that particular brain region rather than to the whole brain, as a
non-invasive tool of diagnosis. The mean coherence of N subjects (N any large number) is used as the standard mean, and any
deviation from this standard is identified as an abnormality. It is a cost-effective and accurate tool for non-invasive diagnosis.

REFERENCES:
[1] David M. Simpson, Daniel A. Botero Rosas, and Antonio Fernando C. Infantosi, "Estimation of Coherence Between Blood Flow and Spontaneous EEG Activity in Neonates", IEEE Transactions on Biomedical Engineering, Vol. 52, No. 5, May 2005.
[2] Gavendra Singh, Dilbag Singh, Varun Gupta, "Coherence Analysis between ECG Signal and EEG Signal", International Journal of Electronics & Communication Technology (IJECT), Vol. 1, Issue 1, December 2010.
[3] M. Grewel, T. Ning, and J. D. Bronzino, "Coherence Analysis of EEG via Multichannel AR Modeling", Dept. of Engineering & Computer Science, Trinity College, Hartford, CT 06108, IEEE, 1988.
[4] John G. Proakis, Dimitris G. Manolakis, Digital Signal Processing, Fourth Edition, Pearson Education, 2007, Chapter 14, Power Spectrum Estimation.
[5] Gavendra Singh Jadun, Setu Garg, Neeraj Kumar, Shikhar, "Coherence Analysis between ECG Signal and EEG Signal", International Journal for Research in Applied Science & Engineering Technology (IJRASET), Vol. 1, Issue I, April 2013.
[6] Sanqing Hu, Matt Stead, Qionghai Dai, and Gregory A. Worrell, "On the Recording Reference Contribution to EEG Correlation, Phase Synchrony, and Coherence", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 5, October 2010.
[7] Alexander Klein, Tomas Sauer, Andreas Jedynak, and Wolfgang Skrandies, "Conventional and Wavelet Coherence Applied to Sensory-Evoked Electrical Brain Activity", IEEE Transactions on Biomedical Engineering, Vol. 53, No. 2, February 2006.
[8] Douglas A. Newandee, Stanley S. Reisman, "Measurement of the Electroencephalogram (EEG) Coherence in Group Meditation", IEEE, 1996.
[9] N. Saiwaki, H. Tsujimoto, S. Nishida and S. Inokuchi, Faculty of Engineering Science, Osaka University, "Direct Coherence Analysis of EEG Recorded during Music Listening", 18th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Amsterdam, 1996.
[10] A. K. Kokkinos, E. M. Michail, I. C. Chouvarda, N. M. Maglaveras, Aristotle University of Thessaloniki, Greece, "A Study of Heart Rate and Brain System Complexity and Their Interaction in Sleep-Deprived Subjects", Computers in Cardiology 2008; 35: 969-971.
[11] Richard C. Watt, Chris Sisemore, Ansel Kanemoto, J. Scott Polson, "Bicoherence of EEG Can Be Used to Differentiate Anesthetic Levels", 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam, 1996.
[12] Yan Xu, Simon Haykin and Ronald J. Racine, "Multiple Window Time-Frequency Distribution and Coherence of EEG Using Slepian Sequences and Hermite Functions", IEEE Transactions on Biomedical Engineering, Vol. 46, No. 7, July 1999.
Pulse Oximeter System
Bindu Madhavi Kalasapati¹, Lavanya Thunuguntla¹
¹Hyderabad Institute of Technology and Management, Hyderabad, A.P, India
Abstract: A person's heart forces the blood to flow through the arteries; as a result the arteries throb in sync with the beating of the
heart. This throbbing can be felt at the person's wrist and other places on the body. Electronically, this throbbing can be sensed
using an LDR and LED sensor. The LDR resistance changes with the intensity of the light falling on its surface, and the variations
in light intensity due to blood flow are exploited in this project. The counter is configured so that it counts the pulses for 1 minute:
the process is initiated by a start condition and the count terminates at the end of 60 seconds. The result is displayed on the LCD.
Although the pulse rate can be measured manually, an electronic digital heart beat counter gives the opportunity to measure it
automatically and continuously. Our heart beat monitor has the following salient features: a light dependent resistor is used as the
transducer; a blinking LED gives visual indication of heart beats; counts are automatic and are displayed on the LCD; continuous
monitoring can be done; the processed signal can be fed to a data logger for future reference; and it works on AC mains or batteries.
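As a rough illustration of the counting scheme summarised above, the sketch below counts rising edges of a digitised LDR signal over a 60-second window; the sampling rate and threshold are assumed values, not design parameters from this paper:

```python
# Count pulses (rising threshold crossings) in the first 60 s of an LDR
# signal sampled at fs samples/s. fs and the threshold are assumptions.
import numpy as np

def pulses_per_minute(signal, fs, threshold=0.5):
    window = np.asarray(signal)[: 60 * fs]           # one-minute window
    above = window > threshold
    rising = np.logical_and(above[1:], ~above[:-1])  # low-to-high transitions
    return int(rising.sum())                         # beats counted in 1 minute
```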
Description:
The model bio-telemetric system is an embedded system of a distributed nature aimed at monitoring a patient's vital functions,
among others heart rate and oxygen saturation. The model bio-telemetric system can be partitioned into two basic parts: an inner
part located in the patient's home and an outer part located in a monitoring centre. Both parts are sub-partitioned into participating
elements. The inner part of the model bio-telemetric system is located in the space where the patient spends most of his time; the
main purpose of this subsystem is to acquire bio-telemetric data and to hand them over to the outer part of the model bio-telemetric
system.
The future will see the integration of the abundance of existing specialized medical technology with pervasive, wireless networks.
They will co-exist with the installed infrastructure, augmenting data collection and real-time response. An example of an area in
which future medical systems can benefit the most from wireless sensor networks is in-home assistance. In-home pervasive
networks may assist residents by providing memory enhancement, control of home appliances, medical data lookup, and emergency
communication.

Block diagram:

When the load is removed from a switching-mode power supply with an LC low-pass output filter, the only thing the control loop
can do is stop the switching action so that no more energy is taken from the source. The energy that is stored in the output filter
inductor is dumped into the output capacitor, causing a voltage overshoot. The magnitude of the overshoot is the vector sum of two
orthogonal voltages: the output voltage before the load is removed, and the current through the inductor times the characteristic
impedance of the output filter, Zo = (L/C)^(1/2).
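As a quick numerical illustration of that relation (the component values and operating point below are assumed, not taken from this design):

```python
# Load-dump overshoot on an LC output filter, using the vector-sum rule
# above with Zo = sqrt(L/C). All numbers are illustrative assumptions.
import math

L, C = 10e-6, 100e-6                   # 10 uH, 100 uF
v_out, i_L = 5.0, 2.0                  # output voltage and inductor current at load dump
z_o = math.sqrt(L / C)                 # characteristic impedance, ~0.32 ohm
v_peak = math.hypot(v_out, i_L * z_o)  # vector sum of the two orthogonal voltages
print(f"Zo = {z_o:.2f} ohm, peak output ~ {v_peak:.2f} V")
```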



Implementation:
The principle of pulse oximetry is based on the red and infrared light absorption characteristics of oxygenated and deoxygenated
hemoglobin. Oxygenated hemoglobin absorbs more infrared light and allows more red light to pass through; deoxygenated (or
reduced) hemoglobin absorbs more red light and allows more infrared light to pass through. Red light is in the 600-750 nm
wavelength band, and infrared light is in the 850-1000 nm wavelength band. The use of pulse oximeters is limited by a number of
factors: they are set up to measure oxygenated and deoxygenated haemoglobin, but no provision is made for measurement error in
the presence of dyshemoglobin moieties such as carboxyhemoglobin (COHb) and methemoglobin. COHb absorbs red light as well
as HbO does, and saturation levels are grossly over-represented.
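The paper does not spell out how the red and infrared absorptions are converted into a saturation value; a common textbook approach, assumed here purely for illustration, is the empirical "ratio of ratios" calibration:

```python
# Hypothetical SpO2 estimate from the pulsatile (AC) and baseline (DC)
# light levels at the red and infrared wavelengths. The linear mapping
# SpO2 = 110 - 25*R is a widely quoted approximation, not a value taken
# from this paper.
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # ratio of ratios
    return 110.0 - 25.0 * r                  # empirical calibration

print(round(spo2_estimate(0.02, 1.0, 0.03, 1.0), 1))  # ~93.3 for R ~ 0.67
```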

Arterial gas analysis or use of co-oximetry is essential in this situation. Co-oximeters measure reduced haemoglobin, HbO, COHb
and methemoglobin. Abnormal movement, such as occurs with agitated patients, will cause interference with the SpO2
measurement. Low blood flow, hypotension, vasoconstriction and hypothermia will reduce the pulsatility of capillary blood, and the
pulse oximeter will under-read or not read at all.
Conversely, increased venous pulsation, such as occurs with tricuspid regurgitation, may be misread by the pulse oximeter as
arterial blood, with a low resultant reading. Finally, it is generally accepted that the percentage saturation is unreliably reported on
the steep part of the oxyhemoglobin dissociation curve. While the trend between the SaO2 (arterial saturation) and SpO2 appears
accurate, the correlation between the two numbers is not. Thus a drop in the SpO2 below 90% must be considered a significant
clinical event.
Conclusion

This results in accurate measurement of the heart beat, and it detects hypoxemia, which is caused by a deficiency of oxygen content
in the blood. The P.O. Pro is a wireless solution for every household, allowing parents to monitor their child's pulse rate and blood
oxygen content. This design provides the information wirelessly, giving flexibility to the parents. The product displays information
in a straightforward manner to ease interpretation by the users, and will be readily available to the general public at retail stores at a
competitive price.
The final product will consist of a sensor module, a monitor and an alarm. A watch-shaped sensor module, which will be placed on
the infant's ankle, will transmit data to the monitor, which can be placed within thirty feet of the sensor. The monitor will transmit
data to a beeper-like alarm that can be carried around by the caretaker, provided it is within one hundred feet of the monitor. The
alarm will sound if an abnormal level of oxygen or pulse rate is detected, or if the battery is low. Although infants and toddlers are
the primary target, the product is designed in such a way that it can easily be modified for other target age groups.
Remote Data Acquisition of Physical Parameters Using Master-Slave Utility of Microcontroller
A. P. Bhat¹, S. J. Dhoble³, K. G. Rewatkar²
¹Department of Electronics, RTM Nagpur University, Nagpur-440033, India
²Department of Physics, Dr. Ambedkar College, Nagpur-440010, India
³Department of Physics, RTM Nagpur University, Nagpur-440033, India
anup_b5@yahoo.com, kgrewatkar@gmail.com

Abstract: Improving the performance of remotely situated device control and data monitoring, with acquisition of parameters such
as temperature, pressure, vibration and humidity on a real-time basis, gives data modularity as well as low data-processing time.
The infrastructure of the existing RF network is used, which is based on supervisory control, monitoring and data acquisition, and
which demands constant development and applicability in research. A versatile and highly accurate data acquisition system is
investigated. In the present work, an embedded data acquisition system with an operating frequency of 16 MHz is designed around
the ATmega16A microcontroller, and the application of an RF and microcontroller infrastructure is proposed. Wireless data
acquisition deals with the creation of an inexpensive, adaptable and easy-to-use data acquisition setup within a network. The
wireless data acquisition system, which sets up temperature, pressure and vibration monitoring with a precision readout, is designed
using locally available electronic components. The necessary control has been embedded as software in the microcontroller to add
intelligence to the data acquisition system, so that data are recorded from the remote location and stored in personal computer
memory using the HyperTerminal utility. The designed system is compact, stand-alone, reliable, accurate and portable, with
on-board display of the data acquired from the remote place or system under observation. A properly designed data acquisition
system saves time and money by eliminating the need for service personnel to visit each site for inspection, data collection, logging
or adjustments.
Keywords: Remote monitoring system, RF, sensors, microcontroller, supervisory control, HyperTerminal.
1. INTRODUCTION
In recent years, numerous developments in VLSI have given a new era to the development of microcontroller-based systems, called
smart systems. This development is coupled with numerous applications and continues with developmental changes compared with
the traditional philosophy of data acquisition. Traditional schemes based on a simple ADC interface have been replaced in many
situations where there is a need to collect information faster than a human can, where data loggers can collect the information, and
in cases where accuracy is essential. A data logger is a device that can be used to store and retrieve data [1]. Data logging also
implies control of how the sensor collects, analyzes and stores the data. It is commonly used in scientific experiments. Data loggers
automatically make a record of the readings of instruments located at different places. The user determines the type of information
recorded. Their advantage is that they can operate independently of a computer. The range includes simple, economical
single-channel multi-sensor and function loggers up to more powerful programmable devices capable of handling hundreds of
inputs [2].
Data loggers are often confused with data acquisition devices; note that these two terms do not necessarily describe the same
device. The former refers to a device that records data over relatively long periods of time for analysis of slow-varying trends,
whereas the latter refers to a device that records data at a much higher sampling rate over a shorter time. Temperature, pressure and
vibration are ever-changing parameters because of exposure to a huge array of stimuli from the environment. All such instruments
infer temperature by sensing some change in a physical characteristic. One must be careful when measuring temperature, pressure
and vibration to ensure that the measuring instrument (thermometer, hygrometer, vibration meter, etc.) is really at the same
temperature, vibration and humidity as the material that is being measured. Under some conditions heat from the measuring
instrument can cause a temperature gradient, so the measured parameter is different from the actual temperature of the system. In
such a case the measured parameter will vary not only with the temperature of the system, but also with the heat transfer properties
of the system and associated parameters such as vibration, pressure and humidity [3].
The task of data acquisition and logging in a predefined environment lies behind a less complicated system, but if we define the
task as remote data acquisition with developing technology, the task becomes complicated. The problem is resolved using a
microcontroller interfacing method within a wirelessly communicable environment such as an RF environment. This wireless
communication helps to acquire data from a remote place; the received data are shown on a display device or, with some extra
development, interfaced with a personal computer (PC). The primary goal of this work is to design a digital system using the AVR
ATmega16 microcontroller family with its communication feature (Rx, Tx) and the RF communication module (CC2500)
communication protocol. The prototype work uses data logging for temperature, pressure, vibration and humidity measurements.
In order to meet the above requirements, a low-cost, versatile, portable data logger is designed. The temperature, pressure and
vibration acquisition is designed using the ATmega8 and ATmega16 microcontrollers. A particular value of temperature, pressure
and vibration is acquired by the ATmega8-based unit, which works as a slave, and is sent to the main controller board designed
around the ATmega16, which works as the master control and is connected to the PC at the data collection centre.
2. Relevant Theory
2.1 Introduction to data loggers
The data logger is an invaluable tool to collect and analyze experimental data, having the ability to clearly present real time analysis
with sensors and probes able to respond to parameters that are beyond the normal range available from the most traditional
equipment [4].
Definition of Data Loggers
A data logger is an electronic device that automatically scans, records, and retrieves data with high speed and great efficiency
during a test or measurement, at any place and time [4]. The type of information recorded is determined by the user, i.e. whether
temperature, relative humidity, light intensity, voltage, pressure or shock is to be recorded; it can therefore automatically measure
the electrical output from any type of transducer and log the value. These electronic signals are converted into binary data, easily
analyzed by software and stored in memory for post-process analysis.
2.2 Characteristics of Data Loggers
Data loggers possess the following characteristics [5]:
1) Modularity: Data loggers can be expanded simply and efficiently whenever required, without any interruption to the working
system.
2) Reliability and Ruggedness: They are designed to operate continuously without interruption, even in the worst industrial
environments.
3) Accuracy: The specified accuracy is maintained throughout the period of use.
4) Management Tool: They provide simple data acquisition and present the results in a handy form.
5) Easy to use: They communicate with operators in a logical manner, are simple in concept, and are therefore easy to understand,
operate and expand.
3. Experimental Development
3.1 Operation of data logger
The ability to take sensor measurements and store the data for future use is, by definition, a characteristic of a data logger.
However, a data-logging application rarely requires only data acquisition and storage. Inevitably, the ability to analyze and present
the data to determine results and make decisions based on the logged data is needed. A complete data-logging application typically
requires most of the elements illustrated in Figure 1.



Figure 1. Block diagram of the DAQ system (temperature, pressure and vibration sensors feed a processor/display unit; an RF unit links the remote module to the master control and data validation stage)
For the design and development of the system, the methodology used involves software-hardware implementation. The actual
implementation of the system involves the following steps:
1) System Definition: Broad definition of the system hardware, including the microcontroller and its interfaces with display, ADC,
memory, keypad, etc.
2) Circuit Design: Selection of the ATmega16 microcontroller and other interfacing devices, as per the system definition; design of
the hardware circuit and testing in the laboratory with microcontroller software routines.
3) PCB Design and Fabrication: Generation of schematic diagrams and production of circuit-board layout data for procurement of
the circuit board.
4) Hardware Modifications: Making any hardware changes found necessary after the initial hardware tests, to produce a revised
circuit-board schematic diagram and layout.
5) Software Design: Developing the algorithm for the system, allocating memory blocks as per functionality, coding and testing.
6) Integration and Final Testing: Integrating the entire hardware and software modules and final testing of the data-logging
operation.
3.2 Complete Design
It involves the details of the set of design specifications:
1) Hardware implementation.
2) Software implementation.
3.2.1 Hardware Implementation
The hardware design consists of the selection of system components as per the requirements, the details of the sub-systems that are
required for the complete implementation of the system, and full hardware schematics for the PCB layout. Design of the circuit and
its testing have been carried out. It involves:
1) Component selection and description.
2) Hardware details of the designed system.
3.2.2 Selection of Suitable Transducer
For measuring temperature, the choice of sensor is of utmost importance [7]. Sensors used in many fields include thermocouples,
resistive temperature devices and bimetallic devices. The factors for the selection of a sensor that we take into account include the
inherent accuracy, durability, range of operation, susceptibility to external noise influences, ease of maintenance and installation,
handling during installation (delicacy), ease of calibration, and the type of environment in which it will be used.
3.2.3 Criteria for choosing a microcontroller
1) The first and foremost criterion for choosing a microcontroller is that it must meet the task efficiently and cost-effectively [7]. In
analyzing the needs of a microcontroller-based project, it is seen whether an 8-bit, 16-bit or 32-bit microcontroller can best handle
the computing needs of the task. Among the other considerations in this category are:
(a) Speed: What is the highest speed that the microcontroller supports?
(b) Packaging: Does it come in a 40-pin DIP (dual inline package), a QFP (quad flat package), or some other packaging format?
This is important in terms of space, assembling, and prototyping the end product.
(c) Power consumption: This is especially critical for battery-powered products.
(d) The number of I/O pins on the chip.
(g) Cost per unit: This is important in terms of the final cost of the product in which the microcontroller is used.
2) The second criterion in choosing a microcontroller is how easy it is to develop products around it. Key considerations include the
availability of an assembler, a debugger, a code-efficient compiler, and technical support.
3) The third criterion in choosing a microcontroller is its ready availability in the needed quantities, both now and in the future.

Figure 2. Schematic of designed prototype
The prototype is designed using the above schematic with the temperature, vibration and pressure sensors (LM35, MPX10 and
LM393) in the system; each reading is compared with the standard, and the analogue data are then given to the ADC which is built
into the AVR ATmega16 microcontroller. The ATmega16 microcontroller performs the control action, and is programmed in
embedded C with the CodeVision AVR IDE.
a. Pressure sensor
The MPX10 series silicon piezoresistive pressure sensor provides an accurate and linear voltage output, directly proportional to the
applied pressure. These standard, low-cost, uncompensated sensors permit the manufacturer to design and add their own external
temperature compensation and signal conditioning network. In this project the compensation circuit, along with the vibration
scanning, is carried forward in ASCII format, and the logical part is converted into the fixed digital environment.

Figure 3. Snapshot of pressure sensor
b. Vibration sensor
Vibration is a main source of disturbance to the performance of the system and to the data analysis, so we observe the vibration of
the system. In prospective applications, the vibration data also give information about material characterization and the status of the
material: more vibration indicates poorer performance of the system and lower durability. Here we used a capacitive vibration
sensor, which can detect the smallest vibrations.

Figure 4. Snapshot of vibration sensor
Here we use the CC2500 RF module for communication of data between master and slave. The module is configured over SPI,
which makes it simple to set the system up in a master-slave configuration. The circuit is intended to work for industrial, scientific
and medical and short-range devices in the 2400-2483.5 MHz band. The RF transceiver is integrated with a highly configurable
baseband modem. The modem supports various modulation formats and has a configurable data rate of up to 500 kbps. The
communication range can be increased by enabling the forward error correction integrated in the modem. The main operating
element is a 64-byte transceiver FIFO, with SPI as the controlling protocol.
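On the PC side, the HyperTerminal log described earlier can equally be captured programmatically; the sketch below is a hypothetical logger, assuming the master prints one comma-separated line per scan (the port name, baud rate and frame format are assumptions, not taken from the paper):

```python
# Hypothetical PC-side logger for the master's RS232 output, e.g. lines
# like "33.35,04,00" (temperature, pressure, vibration). Requires pyserial.
import serial

with serial.Serial("COM1", 9600, timeout=2) as port, open("daq_log.csv", "a") as log:
    for _ in range(10):                          # capture ten scans
        line = port.readline().decode(errors="ignore").strip()
        if line:
            log.write(line + "\n")               # persist, like the HyperTerminal log
            print(line)
```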

Figure 5. Snapshot of the CC2500 RF module

Figure 6. Complete system prototype
4. RESULT
In this system, temperature, pressure and vibration measurements from the three sensors are taken. The performance of the three
sensors is distinguished on the basis of their accuracy. All the sensors are configured with the accuracy specified in their data
sheets. The accuracy indicates how closely a sensor can measure the actual, real-world parameter value; the more accurate a sensor
is, the better it will perform. As the system is capable of transmitting all the data from a remote place, a minimum-error
environment in terms of bits (LSB) is maintained to make the digital transformation with minimum error. The resolution of the
ADC is adjusted in such a way that the bits remain error-free; for that, the complete system operates at 4.86 V with respect to the
5 V supply voltage.
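As a rough check of that resolution claim, assuming the ATmega16's 10-bit ADC (1024 levels) referenced to the 4.86 V quoted above:

```python
# LSB size of a 10-bit ADC at the 4.86 V operating reference quoted above.
v_ref, levels = 4.86, 2 ** 10
lsb_mV = v_ref / levels * 1000
print(f"1 LSB = {lsb_mV:.2f} mV")   # ~4.75 mV per count
```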

Figure 7. Working module of the prototype (DAQ remote module, RF module, controller kit, RS232 module)
Readings are taken under different conditions over a time interval, and also at different temperatures within a time interval. By
comparing the readings obtained from the three channels under the different conditions, the most accurate channel among them is
found.
Table 1. Result at room-temperature condition

Time | Standard temp (°C) | System acquired temp (°C) | Pressure (mbar) | Vibration (ppm)
8:05 am | 33.3 | 33.35 | 04 | 00
8:15 am | 33.3 | 33.39 | 10 | 50
8:25 am | 33.3 | 33.39 | 08 | 00
8:35 am | 33.5 | 33.50 | 08 | 230
8:45 am | 33.5 | 33.53 | 03 | 100
9:00 am | 33.5 | 33.53 | 07 | 37
8:15 am | 33.8 | 33.9 | 03 | 37
8:30 am | 33.9 | 33.9 | 05 | 37
8:45 am | 34.5 | 33.9 | 3 | 37


Figure 8. Graph (temperature vs time) of the temperature error between manual and system-acquired readings.
Table 2. Result at moderate-temperature condition

Time | Standard temp (°C) | System temp (°C) | Pressure (mbar) | Vibration (ppm)
10:05 am | 35 | 34.8 | 05 | 00
10:20 am | 35.5 | 35.3 | 10 | 35
10:40 am | 35.5 | 35.5 | 15 | 25
11:00 am | 35.5 | 35.5 | 20 | 23
11:20 am | 35.8 | 35.9 | 25 | 25
11:40 am | 35.8 | 35.9 | 30 | 22
12:00 pm | 35.8 | 35.9 | 35 | 22
12:20 pm | 35.8 | 35.9 | 35 | 22
12:40 pm | 36 | 36.3 | 40 | 22
1:00 pm | 36.4 | 36.28 | 45 | 22
1:15 pm | 36.8 | 37.00 | 45 | 22
1:30 pm | 37 | 37.4 | 45 | 25
5. CONCLUSION
From the above tables of readings, obtained by comparing the standard temperature with the temperature of the channels, the
accuracy of the channels is discussed; the temperature measurements made using conventional methods have some measurement
error compared with the new system. Accuracy is the degree of conformity of a measured analog or digital quantity to its actual
(true) value.
The application was designed and developed to prove a couple of concepts about data acquisition in general and some notions
about the possibility of adding remote controlling/monitoring. This has a teaching purpose: it is being used for a series of
experiments between several laboratories at the moment. From one point of view one can process the experimental data gathered
from a real process, but one can also see the result of a remote command sent to industrial equipment in real time.
Acknowledgements
The authors thank the Department of Electronics and Computer Science, RTM Nagpur University, for providing the facility and
environment for the instrument development. The authors also express their gratitude to National Instruments (India) Ltd. for
providing the real-time data analysis methodology.

REFERENCES:
[1] A. J. Thompson, J. L. Bahr and N. R. Thomson, Low power data logger, Proceedings of Conference, Department of Physics,
University of Otago, Dunedin, 2012.
[2] Ding Sheng, Fan Zhiguo and Sun Chuang. Design of a 2D Scanning Information Acquisition System of Atmospheric
Polarization [J]. Journal of Hefei University of Technology (Natural Science), vol. 7, 2011.
[3] Li Xiuli. Design of Data Acquisition and Transmission System Based on MSP430 [J]. Mechanical Engineering and
Automation, vol. 8, 2011.
[4] Liu Xiaoqiu. Social Demand Decide Development of Monolithic Circuit Control System [J]. Industrial Control
Computer, vol. 3, 2008.
[5] Ai Yu. Research on Solar Battery Data Acquisition System Based on Microcontroller [D]. Wuhan University of Technology,
2010.
[6] Shen Qiang, Yang Denghong and Li Dongguang. Research and Implementation of Ballistic Resolving Algorithm Based
on MSP430 [J]. Journal of Beijing Institute of Technology, vol. 2, 2011.
[7] Li Jicheng, Gao Zhenjiang, Xiao Hongwei, Meng Hewei and Kan Za. Design and Experiment on Dairy Cow Precise
Feeding Equipment Based on MCU [J]. Transactions of the Chinese Society of Agricultural Machinery, vol. 1, 2011.
[8] Lian Xiangyu, Tang Liping and Zhao Zuyun. Research on Dynamic Configured Control System for MCU Application [J].
Journal of Donghua University (Natural Sciences), vol. 10, 2010.
[9] Ding Baohua, Zhang Youzhong, Chen Jun and Meng Fanxi. Experimental Teaching Reforms and Practices of MCU
Principle and Interface [J]. Experimental Technology and Management, vol. 1, 2010.
[10] Jiang Juan and Zhang Huoming. Software Design of Data Acquisition Boards Based on MCU [J]. Journal of China
Jiliang University, vol. 3,2011
A Mathematical Model for the Flexural Strength Characteristics of Concrete Made with Unwashed Local Gravel Using the
Second-Degree Polynomial
Umeonyiagu Ikechukwu Etienne¹, Adinna B.O.²
¹Dept. of Civil Engineering, Anambra State University, P.M.B. 02, Uli
²Dept. of Civil Engineering, Nnamdi Azikiwe University, Awka
E-mail: umeonyiaguikechukwu@yahoo.com

ABSTRACT: This research work set out to develop a model for the flexural strength characteristics of concrete made with
unwashed local aggregate, based on the second-degree polynomial. The unwashed local gravel was from Abagana and the river
sand from Amansea, both in Anambra State of Nigeria. These aggregates were tested for their physical and mechanical properties
based on BS 812: Part 2 & Part 3: 1975. Sixty concrete beams of dimensions 150 mm x 150 mm x 600 mm, three beams for each
experimental point, were made, cured and tested according to BS 1881:1983. The model equation developed was
Ŷ = -366.27Z1 + 249.99Z2 - 15.93Z3 - 20.24Z4 + 18.68Z1Z2 - 1675.23Z1Z3 + 605.84Z1Z4 + 1458.06Z2Z3 - 290.71Z2Z4 + 78.14Z3Z4.
The Student's t-test and the Fisher's test were used to test the adequacy of this model. The strengths predicted by the model were in
complete agreement with the experimentally obtained values, and the null hypothesis was satisfied.
Key words: Optimization, Concrete, Flexural strength, Second-degree polynomial, Fisher's test, Model, Taylor's theorem.
1. INTRODUCTION
1.1 OSADEBE'S CONCRETE OPTIMIZATION THEORY
Concrete is a four-component material of mixing water, cement, fine and coarse aggregate. These ingredients are mixed in rational
proportions to achieve the desired strength of the hardened concrete [1]. Let us consider an arbitrary amount, S, of a given concrete
mixture and S_i, the portion of the i-th of the four constituent materials of the concrete, where i = 1, 2, 3, 4. Then, in keeping with
the principle of absolute volume or mass [2]:

\sum_{i=1}^{4} S_i = S \qquad (1)

Dividing through by S and substituting Z_i for S_i/S gives:

\sum_{i=1}^{4} Z_i = 1 \qquad (2)

Then, the flexural strength of the concrete can be expressed as equation 3:
Y = f(Z_i) \qquad (3)

Using Taylor's theorem and the assumption that Y is continuous, equation 3 becomes:

f(Z) = f(0) + \sum_{i=1}^{4}\frac{\partial f(0)}{\partial Z_i}Z_i + \frac{1}{2!}\sum_{i=1}^{3}\sum_{j=i+1}^{4}\frac{\partial^2 f(0)}{\partial Z_i\,\partial Z_j}Z_iZ_j + \frac{1}{2!}\sum_{i=1}^{4}\frac{\partial^2 f(0)}{\partial Z_i^2}Z_i^2 + \ldots \qquad (4)

If b_0 = f(0), b_i = \partial f(0)/\partial Z_i, b_{ij} = \partial^2 f(0)/\partial Z_i\partial Z_j and b_{ii} = \partial^2 f(0)/\partial Z_i^2, then eqn. 4 can be written as follows:

f(Z) = b_0 + \sum_{i=1}^{4} b_i Z_i + \sum_{i<j} b_{ij} Z_i Z_j + \sum_{i=1}^{4} b_{ii} Z_i^2 \qquad (5)

Multiplying eqn. 2 by b_0 we have:

b_0 = b_0 \sum_{i=1}^{4} Z_i \qquad (6)

Also, multiplying eqn. 2 by Z_1, Z_2, Z_3 and Z_4 in succession, making Z_1^2, Z_2^2, Z_3^2 and Z_4^2 the subjects of the formula, substituting into eqn. 5 and factorizing gives:

Y = \sum_{i=1}^{4}\beta_i Z_i + \sum_{i<j}\beta_{ij} Z_i Z_j \qquad (7)

where \beta_i = b_0 + b_i + b_{ii} and \beta_{ij} = b_{ij} - b_{ii} - b_{jj}, for i, j = 1, 2, 3, 4.
1.2 THE COEFFICIENTS OF THE REGRESSION EQUATION
If the k-th response (the flexural strength for serial number k) is y^{(k)}, substituting the vector of the corresponding set of
variables, i.e., Z^{(k)} = [Z_1^{(k)}, Z_2^{(k)}, Z_3^{(k)}, Z_4^{(k)}]^T (see Table 1), into eqn. 7 generates the explicit matrix
equation 8:

[y^{(k)}] = [Z^{(k)}][\beta] \qquad (8)

where each row of [Z^{(k)}] contains the ten terms Z_i and Z_iZ_j of eqn. 7 evaluated at one mix point (see Table 2). Re-arranging
eqn. 8 yields:

[\beta] = [Z^{(k)}]^{-1}[y^{(k)}] \qquad (9)

Solution of eqn. 9 gives the values of the unknown coefficients of the regression equation (eqn. 7).
1.3 THE STUDENT'S t-TEST
The unbiased estimate of the unknown variance S² is given by [3]:

S_Y^2 = \frac{\sum (y_i - \bar{y})^2}{n - 1} \qquad (10)

If a_i = z_i(2z_i - 1) and a_{ij} = 4 z_i z_j, for (1 \le i \le q) and (1 \le i \le j \le q) respectively, then

\varepsilon = \sum a_i^2 + \sum a_{ij}^2 \qquad (11)

where ε is the error of the predicted values of the response.

The t-test statistic is given in [3] as

t = \frac{\Delta y \sqrt{n}}{S_Y \sqrt{1 + \varepsilon}} \qquad (12)

where Δy = y_0 - y_t; y_0 = observed value; y_t = theoretical value; n = number of replicate observations at every point; ε is as
defined in eqn. 11.

1.4 THE FISHER'S TEST
The Fisher's test statistic is given by

F = \frac{S_1^2}{S_2^2} \qquad (13)

The values of S_1 (lower value) and S_2 (upper value) are calculated from equation 10.
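As a small numerical illustration of eqn. 12 (the values of S_Y and n below are assumptions chosen for illustration; they happen to approximately reproduce the C1 entry of Table 4):

```python
# Toy evaluation of the t statistic of eqn. 12 for one control point.
# S_y = 0.556 and n = 1 are assumed values; with the C1 row of Table 4
# (y0 = 3.03, yt = 3.17, eps = 0.4835) they give t close to -0.2069.
import math

def t_stat(y0, yt, s_y, n, eps):
    return (y0 - yt) * math.sqrt(n) / (s_y * math.sqrt(1 + eps))

print(round(t_stat(3.03, 3.17, 0.556, 1, 0.4835), 4))   # ~ -0.2067
```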
2. MATERIALS AND METHOD
2.1 PREPARATION, CURING AND TESTING OF CUBE SAMPLES
The aggregates were sampled in accordance with the methods prescribed in BS 812: Part 1: 1975 [4]. The test sieves were selected
according to BS 410: 1986 [5]. The water absorption, apparent specific gravity and bulk density of the coarse aggregates were
determined following the procedures prescribed in BS 812: Part 2: 1975 [6]. The Los Angeles abrasion test was carried out in
accordance with ASTM Standard C131: 1976 [7]. The sieve analyses of the fine and coarse aggregate samples were done in
accordance with BS 812: Part 1: 1975 [4] and satisfied BS 882: 1992 [8]. The sieving was performed by a sieve shaker. The water
used in preparing the experimental samples satisfied the conditions prescribed in BS 3148: 1980 [9]. The required concrete
specimens were made in threes in accordance with the method specified in BS 1881: 108: 1983 [10]. These specimens were cured
for 28 days in accordance with BS 1881: Part 111: 1983 [11]. The testing was done in accordance with BS 1881: Part 117: 1983
[12] using a flexural testing machine.
TABLE 1. SELECTED MIX RATIOS AND COMPONENT FRACTIONS BASED ON OSADEBE'S SECOND-DEGREE POLYNOMIAL

S/No | S1 | S2 | S3 | S4 | Z1 | Z2 | Z3 | Z4
1 | 0.88 | 1 | 2.5 | 4 | 0.105 | 0.119 | 0.298 | 0.477
2 | 0.86 | 1 | 2 | 4 | 0.109 | 0.127 | 0.254 | 0.509
3 | 0.855 | 1 | 2 | 3.5 | 0.116 | 0.136 | 0.272 | 0.476
4 | 0.86 | 1 | 2 | 3 | 0.125 | 0.146 | 0.292 | 0.437
5 | 0.855 | 1 | 2.5 | 3.5 | 0.109 | 0.127 | 0.318 | 0.446
6 | 0.865 | 1 | 3 | 4 | 0.098 | 0.113 | 0.338 | 0.451
7 | 0.87 | 1 | 3 | 4.5 | 0.093 | 0.107 | 0.320 | 0.480
8 | 0.86 | 1 | 1.5 | 3 | 0.135 | 0.157 | 0.236 | 0.472
9 | 0.86 | 1 | 2.75 | 3.4 | 0.107 | 0.125 | 0.343 | 0.424
10 | 0.865 | 1 | 2 | 4.25 | 0.107 | 0.123 | 0.246 | 0.524
CONTROL
11 | 0.858 | 1 | 2.43 | 4 | 0.104 | 0.121 | 0.293 | 0.483
12 | 0.86 | 1 | 1.75 | 3 | 0.130 | 0.151 | 0.265 | 0.454
13 | 0.855 | 1 | 2.4 | 3.5 | 0.110 | 0.129 | 0.309 | 0.451
14 | 0.86 | 1 | 2 | 4.33 | 0.105 | 0.122 | 0.244 | 0.529
15 | 0.862 | 1 | 2.25 | 3.13 | 0.119 | 0.138 | 0.311 | 0.432
16 | 0.858 | 1 | 2 | 2.83 | 0.128 | 0.150 | 0.299 | 0.423
17 | 0.858 | 1 | 2.67 | 3.29 | 0.110 | 0.128 | 0.342 | 0.421
18 | 0.86 | 1 | 3 | 4.13 | 0.096 | 0.111 | 0.334 | 0.459
19 | 0.855 | 1 | 2 | 3 | 0.125 | 0.146 | 0.292 | 0.438
20 | 0.8595 | 1 | 2.75 | 4 | 0.100 | 0.116 | 0.319 | 0.465

LEGEND: S1 = water/cement ratio; S2 = cement; S3 = fine aggregate; S4 = coarse aggregate; Z_i = S_i/S.
TABLE 2. Z^T MATRIX

Z1 | Z2 | Z3 | Z4 | Z1Z2 | Z1Z3 | Z1Z4 | Z2Z3 | Z2Z4 | Z3Z4
0.105 | 0.119 | 0.298 | 0.477 | 0.013 | 0.031 | 0.050 | 0.036 | 0.057 | 0.142
0.109 | 0.127 | 0.254 | 0.509 | 0.014 | 0.028 | 0.056 | 0.032 | 0.065 | 0.129
0.116 | 0.136 | 0.272 | 0.476 | 0.016 | 0.032 | 0.055 | 0.037 | 0.065 | 0.129
0.125 | 0.146 | 0.292 | 0.437 | 0.018 | 0.037 | 0.055 | 0.042 | 0.064 | 0.127
0.109 | 0.127 | 0.318 | 0.446 | 0.014 | 0.035 | 0.049 | 0.041 | 0.057 | 0.142
0.098 | 0.113 | 0.338 | 0.451 | 0.011 | 0.033 | 0.044 | 0.038 | 0.051 | 0.153
0.093 | 0.107 | 0.320 | 0.480 | 0.010 | 0.030 | 0.045 | 0.034 | 0.051 | 0.154
0.135 | 0.157 | 0.236 | 0.472 | 0.021 | 0.032 | 0.064 | 0.037 | 0.074 | 0.111
0.107 | 0.125 | 0.343 | 0.424 | 0.013 | 0.037 | 0.046 | 0.043 | 0.053 | 0.146
0.107 | 0.123 | 0.246 | 0.524 | 0.013 | 0.026 | 0.056 | 0.030 | 0.065 | 0.129

TABLE 3. RESPONSES OF THE MIX RATIOS

S/No | S1 | S2 | S3 | S4 | Response [N/mm²]
1 | 0.88 | 1 | 2.5 | 4 | 2.51
2 | 0.86 | 1 | 2 | 4 | 2.61
3 | 0.855 | 1 | 2 | 3.5 | 2.77
4 | 0.86 | 1 | 2 | 3 | 2.91
5 | 0.855 | 1 | 2.5 | 3.5 | 2.75
6 | 0.865 | 1 | 3 | 4 | 1.96
7 | 0.87 | 1 | 3 | 4.5 | 1.85
8 | 0.86 | 1 | 1.5 | 3 | 3.16
9 | 0.86 | 1 | 2.75 | 3.4 | 2.82
10 | 0.865 | 1 | 2 | 4.25 | 2.56

LEGEND: S1 = water/cement ratio; S2 = cement; S3 = fine aggregate; S4 = coarse aggregate.
2.2 TESTING THE FIT OF THE QUADRATIC POLYNOMIALS
The polynomial regression equation developed was tested to see if the model agreed with the actual experimental results. The null
hypothesis was denoted by H_0 and the alternative by H_1.

FIGURE 1. GRADING CURVE FOR THE FINE AGGREGATE (percentage passing vs sieve size, mm)

FIGURE 2. GRADING CURVE FOR THE UNWASHED LOCAL GRAVEL (percentage passing vs sieve size, mm)
3. RESULTS AND DISCUSSION
3.1 PHYSICAL AND MECHANICAL PROPERTIES OF AGGREGATES
Sieve analyses of both the fine and coarse aggregates were performed, and the grading curves are shown in Figures 1 and 2. These
grading curves show the particle size distribution of the aggregates. The maximum aggregate size was 53 mm for the local gravel
and 2 mm for the fine sand. The local gravel had a water absorption of 4.55%, a moisture content of 53.25%, an apparent specific
gravity of 1.88, a Los Angeles abrasion value of 60% and a bulk density of 1302.7 kg/m³.

3.2 THE REGRESSION EQUATION FOR THE FLEXURAL STRENGTH TEST RESULTS
Solution of eqn. 9, given the Z^T values of Table 2 and the responses (average flexural strengths) in Table 3, gave the values of the
unknown coefficients of the regression equation (eqn. 7) as follows: β1 = -366.27, β2 = 249.99, β3 = -15.93, β4 = -20.24,
β12 = 18.68, β13 = -1675.23, β14 = 605.84, β23 = 1458.06, β24 = -290.71, β34 = 78.14. Thus, from eqn. 7, the model equation
based on the second-degree polynomial was given by:

\hat{Y} = -366.27Z_1 + 249.99Z_2 - 15.93Z_3 - 20.24Z_4 + 18.68Z_1Z_2 - 1675.23Z_1Z_3 + 605.84Z_1Z_4 + 1458.06Z_2Z_3 - 290.71Z_2Z_4 + 78.14Z_3Z_4

where Ŷ represents the flexural strength of the mixture in N/mm².
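Since the system of eqn. 9 is square (ten mix points, ten coefficients), the coefficients can be recovered directly from Tables 2 and 3; the sketch below does this with NumPy (the recovered values will differ slightly from those quoted because the tabulated fractions are rounded to three decimals):

```python
# Solving eqn. 9 numerically: beta = Z^-1 y, with Z from Table 2 and the
# responses y from Table 3. Rounding in the tables means the recovered
# coefficients only approximate the ones quoted in the text.
import numpy as np

Z = np.array([
    [0.105, 0.119, 0.298, 0.477, 0.013, 0.031, 0.050, 0.036, 0.057, 0.142],
    [0.109, 0.127, 0.254, 0.509, 0.014, 0.028, 0.056, 0.032, 0.065, 0.129],
    [0.116, 0.136, 0.272, 0.476, 0.016, 0.032, 0.055, 0.037, 0.065, 0.129],
    [0.125, 0.146, 0.292, 0.437, 0.018, 0.037, 0.055, 0.042, 0.064, 0.127],
    [0.109, 0.127, 0.318, 0.446, 0.014, 0.035, 0.049, 0.041, 0.057, 0.142],
    [0.098, 0.113, 0.338, 0.451, 0.011, 0.033, 0.044, 0.038, 0.051, 0.153],
    [0.093, 0.107, 0.320, 0.480, 0.010, 0.030, 0.045, 0.034, 0.051, 0.154],
    [0.135, 0.157, 0.236, 0.472, 0.021, 0.032, 0.064, 0.037, 0.074, 0.111],
    [0.107, 0.125, 0.343, 0.424, 0.013, 0.037, 0.046, 0.043, 0.053, 0.146],
    [0.107, 0.123, 0.246, 0.524, 0.013, 0.026, 0.056, 0.030, 0.065, 0.129],
])
y = np.array([2.51, 2.61, 2.77, 2.91, 2.75, 1.96, 1.85, 3.16, 2.82, 2.56])

beta = np.linalg.solve(Z, y)
names = ["b1", "b2", "b3", "b4", "b12", "b13", "b14", "b23", "b24", "b34"]
for name, b in zip(names, beta):
    print(f"{name} = {b:10.2f}")
```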

3.3 FIT OF THE POLYNOMIAL
The selected mix ratios and component fractions based on Osadebe's second-degree polynomial are shown in Table 1. The
polynomial regression equation developed, Ŷ = -366.27Z1 + 249.99Z2 - 15.93Z3 - 20.24Z4 + 18.68Z1Z2 - 1675.23Z1Z3 +
605.84Z1Z4 + 1458.06Z2Z3 - 290.71Z2Z4 + 78.14Z3Z4, was tested to see if the model agreed with the actual experimental
results. There was no significant difference between the experimental and the theoretically expected results; the null hypothesis,
H_0, was satisfied.
TABLE 4. t-STATISTIC FOR THE CONTROL POINTS, UNWASHED LOCAL GRAVEL CONCRETE FLEXURAL TEST, BASED ON OSADEBE'S SECOND-DEGREE POLYNOMIAL

Response | ε | Ŷ (observed) | Ŷ (theoretical) | t
C1 | 0.4835 | 3.03 | 3.17 | -0.2069
C2 | 0.4809 | 3.09 | 3.03 | 0.09253
C3 | 0.9234 | 3.05 | 3.22 | -0.2824
C4 | 0.4642 | 3.08 | 3.19 | -0.15354
C5 | 0.5053 | 2.55 | 2.44 | 0.15628
C6 | 0.4966 | 2.98 | 2.73 | 0.37210
C7 | 0.5707 | 2.77 | 2.55 | 0.32260
C8 | 0.5624 | 2.48 | 2.75 | -0.40003
C9 | 0.4949 | 2.97 | 3.20 | -0.34129
C10 | 0.5236 | 2.81 | 2.88 | -0.10948

LEGEND: a_i = z_i(2z_i - 1); a_ij = 4z_i z_j; ε = Σa_i² + Σa_ij²; Ŷ (observed) = experimentally observed value; Ŷ (theoretical) = theoretical value; t = t-test statistic. (For C1, the intermediate values a_i and a_ij for the pairs (i,j) = (1,2), (1,3), (1,4), (2,3), (2,4) and (3,4) give Σa_i² = 0.052 and Σa_ij² = 0.4316, hence ε = 0.4835.)
3.4 t-VALUE FROM TABLE
The Student's t-test had a significance level α = 0.05, and t_{α/2(ν)} = t_{0.005(9)} = 3.69 from the standard table [13]. This was
greater than any of the t values calculated in Table 4; therefore, the regression equation for the unwashed gravel concrete was
adequate.
3.5 F-STATISTIC ANALYSIS
The sample variances S_1² and S_2² for the two sets of data were not significantly different (Table 5). This implied that the errors
from the experimental procedure were similar, and that the sample variances being tested are estimates of the same population
variance. Based on eqn. 10, S_K² = 9.647/9 = 1.072, S_E² = 10.428/9 = 1.159 and F = 1.072/1.159 = 0.925. From the Fisher's table
[13], F_{0.95(9,9)} = 3.3; hence the regression equation for the flexural strength of the unwashed gravel concrete was adequate.
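The arithmetic of that F check can be verified directly from the column sums of Table 5:

```python
# F-statistic of eqn. 13 from the Table 5 column sums (n = 10 points).
sum_sq_K, sum_sq_E, n = 9.647, 10.428, 10
S_K2 = sum_sq_K / (n - 1)            # 1.072
S_E2 = sum_sq_E / (n - 1)            # 1.159
print(round(S_K2 / S_E2, 3))         # F ~ 0.925, below F_0.95(9,9) = 3.3
```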
TABLE 5. F-STATISTIC FOR THE CONTROL POINTS BASED ON OSADEBE'S SECOND-DEGREE POLYNOMIAL

Response | Y_K | Y_E | Y_K - Ȳ_K | Y_E - Ȳ_E | (Y_K - Ȳ_K)² | (Y_E - Ȳ_E)²
C1 | 3.03 | 3.17 | -0.812 | -0.729 | 0.659 | 0.532
C2 | 3.09 | 3.03 | -0.748 | -0.871 | 0.559 | 0.759
C3 | 3.05 | 3.22 | -0.787 | -0.676 | 0.619 | 0.458
C4 | 3.08 | 3.19 | -0.761 | -0.715 | 0.579 | 0.511
C5 | 2.55 | 2.44 | -1.289 | -1.456 | 1.662 | 2.119
C6 | 2.98 | 2.73 | -0.856 | -1.171 | 0.733 | 1.370
C7 | 2.77 | 2.55 | -1.074 | -1.349 | 1.153 | 1.821
C8 | 2.48 | 2.75 | -1.361 | -1.153 | 1.852 | 1.330
C9 | 2.97 | 3.20 | -0.872 | -0.698 | 0.760 | 0.487
C10 | 2.81 | 2.88 | -1.035 | -1.021 | 1.071 | 1.041
Σ | 28.806 | 29.161 | | | 9.647 | 10.428

LEGEND: Ȳ = ΣY/n, where Y is the response and n the number of observed responses; Y_K is the experimental value (response); Y_E is the expected or theoretically calculated value (response).

CONCLUSION
The strengths (responses) of the concrete were a function of the proportions of its ingredients: water, cement, fine aggregate and
coarse aggregate. Since the strengths predicted by the model were in total agreement with the corresponding experimentally
observed values, the null hypothesis was satisfied. This means that the model equation is valid.

REFERENCES:
[1] Neville, A. M., Properties of Concrete, Third Edition, Pitman, London, 1995, pp.268-358.
[2] Osadebe, N.N., Generalised Mathematical Modelling of Compressive Strength of Normal Concrete as a Multivariant function of
the properties of its constituent components, Lecture, University of Nigeria, Nsukka, 2003, pp.1 11.
[3] Biyi, A., Introductory Statistics, Abiprint & Pak Ltd., Ibadan, 1975, pp. 89-120.
[4] BS 812: Part 1 Sampling, shape, size and classification. Methods for sampling and testing of mineral aggregates, sands and fillers.
British Standards Institution Publication, London, 1975, pp. 2 5.
[5] BS 410 Specification for test sieves. British Standards Institution Publication, London, 1986, pp. 1 4.
[6] BS 812: Part 2 Methods for sampling and testing of mineral aggregates, sands and fillers. Physical properties. British Standards
Institution Publication, London, 1975, pp. 2 6.
[7] ASTM. Standard C 131 Tests for Resistance to Abrasion of Small Size Coarse Aggregate by Use of the Los Angeles Machine.
American Society for Testing and Materials Publication, New York, 1976, 1 6.
[8] BS 882 Specification for aggregates from natural sources for concrete. British Standards Institution Publication, London, 1992,
pp.2 6.
[9] BS 3148 Tests for water for making concrete. British Standards Institution Publication, London, 1980, pp. 1 8.
[10] BS 1881: Part 108 Method for making test cubes from fresh concrete. British Standards Institution Publication, London, 1983, pp. 1-4.
[11] BS 1881: Part 111 Method of normal curing of test specimens (20 °C). British Standards Institution Publication, London, 1983, pp. 1-5.
[12] British Standard 1881: Part 117 Method for determination of flexural strength of concrete cubes. British Standards Institution Publication, London, 1983, pp. 1-6.
[13] Nwaogazie, I.L., Probability and Statistics for Science and Engineering Practice, University of Port Harcourt Press, Port
Harcourt, 2006, pp. 274 280
Adaptive Configuration Workload Queue in Cloud Computing
Rupinder Kaur¹, Sahil Vashist²
¹Student, Department of Computer Science, Chandigarh Engineering College, Landran, Mohali
²Assistant Professor, Department of Computer Science, Chandigarh Engineering College, Landran, Mohali
E-mail: rupinder225@gmail.com
Abstract - With the emergence of cloud computing, on-demand resource usage is made possible. This allows applications to scale
out elastically according to the load. In this work we have tried to maximize cloud profit and efficiency by efficient placement of
virtual machines. The target of this algorithm is to maximize the service provider's profit in the case where current resources are
not enough to process all the requests in time. In this strategy, the requests are ranked according to the profits they can bring.
Simulation results show the efficiency of our framework.
Keywords - Efficiency, migration, response time, isolation messaging queue, sharing message queue, CSP, SLA
1. INTRODUCTION
Cloud computing is an evolving paradigm with changing definitions, but in this research project it is defined in terms of a virtual
infrastructure which provides shared information and communication-technology services to multiple external users, via the
Internet or large-scale private networks. Cloud computing provides a computer user access to Information Technology services
(i.e., data servers, storage, applications) without requiring an understanding of the technology or even ownership of the
infrastructure. Cloud computing is associated with a new paradigm for the provision of computing infrastructure. This paradigm
shifts the location of this infrastructure to the network, to reduce the costs associated with the management of hardware and
software resources. Hence, businesses and users become able to access application services from anywhere in the world on
demand. Therefore, it represents the long-held dream of envisioning computing as a utility, where economy-of-scale principles help
to drive the cost of computing infrastructure down effectively. Big players such as Amazon, Google, IBM, Microsoft and Sun
Microsystems have begun to establish new data centers for hosting cloud computing applications in various locations around the
world, to provide redundancy and ensure reliability in case of site failures. Cloud computing is a subscription-based service through
which one can obtain networked storage space and computing resources. Resources seem infinite, growing and shrinking based on
the demand placed on the application or service being delivered.
Cloud computing is fragmented into three segments: storage, applications and connectivity. Each segment serves a different
function and offers different products for businesses and individuals around the world. A study conducted by V1 in June 2011 [9]
found that 91% of senior IT professionals do not know exactly what cloud computing is, and that two-thirds of senior finance
professionals are confused by the concept, highlighting the young nature of the technology. An Aberdeen Group study found that
disciplined companies achieved on average about a 68% improvement in their IT expense through cloud computing and a 10%
reduction in data center power costs. A simple example of cloud computing is Gmail, Hotmail or Yahoo email: just an internet
connection is required and you can start sending and retrieving emails. The server and the email management software are available
in the cloud and are totally managed by the cloud service provider (Yahoo, Google etc.). The consumer gets to use the software
alone and enjoy the benefits.

Figure 1: Cloud Computing
Cloud computing provides opportunities for organizations to reduce costs and speed up delivery by eliminating the capital investment in IT and the operational expenses traditionally associated with running a business. The solutions available in the market nowadays, including publicly hosted clouds, private internal clouds and hybrid settings, ensure that the user, whether an enterprise, a service provider or a developer, can find the cloud computing solution that best satisfies their needs and quickly gain the benefits that enable them to become more agile. The success of Amazon, eBay and Google has driven the rise of cloud computing as a new, proven architecture for how the traditional datacenter is constructed and managed.
2. RELATED WORK
There have been several efforts in studying the deployment of scientific and high performance applications on various cloud computing platforms [1], [2]. Our work differs by showing the deployment of a high performance application on multiple cloud computing platforms through a cloud computing framework. We also establish guidelines for the design, implementation, and identification of cloud computing frameworks in this work. There have also been several efforts in building and migrating bio-molecular applications to distributed computing environments [4], [5]. The work in [4] presents a framework that provides fault-tolerance and failure recovery for running replica-exchange simulations on distributed systems. This is achieved through checkpointing and an external interface that monitors the execution of distributed applications. Work Queue differs by offering these functionalities inherently, without overheads. The authors in [5] describe their experiences in running replica-exchange simulation software, NAMD, on the Condor grid. They add a dedicated set of resources to speed up slow replicas executing on the grid and observe an improvement in the efficient usage of available resources. The authors go on to present a database architecture for storing and retrieving bio-molecular simulation results. The work in [6] describes experiences in using Legion, an operating system that provides abstractions for managing and utilizing grid resources, to run replica-exchange simulations built using MPI. This work provides good insights on the effectiveness of abstractions in providing a seamless transition for users porting applications to run on grids. The work in [6] also presents a lightweight, asynchronous, high-performance message queue for the cloud and evaluates the capability of the message queue in the presence of errors. The authors in [7] analyze the response time of message queues in real-time multitasking systems. In [8] the authors illustrate the modeling and simulation of cloud computing environments.
3. SYSTEM MODEL

In Figure 2 we can see that the broker has managed to obtain all the required information from the other CSPs. By getting information from other CSPs we mean the free VMs that the broker can borrow from other datacenters (DCs), within or outside the CSP, so that more optimized results can be generated.

Figure 2: Schematic Model
VM details are offered by the other CSPs on a regular basis, and each broker is able to view and compare all the VMs from its own and other sources. In the algorithm shown below, the broker is responsible for maintaining all the VMs. Each VM is tagged with a health card, which checks the capability of the VM for a particular load. Only after this check are VMs handed to the algorithm for efficient VM migration.
4. SIMULATION RESULTS
To set up our experiments, the proposed algorithm is implemented using CloudSim as a framework in the simulation environment. This work uses the Datacenter, Virtual Machine (VM), Host and Cloudlet components from CloudSim for execution analysis of the algorithms. To evaluate the results, we first set up the mechanism of the base paper and recorded its results; after that, we used the same experimental setup and implemented our framework with the same parameters as the base paper. In the base paper, the authors implemented the same experimental setup for different resource allocation policies and compared them with their own. The main focus of the authors' algorithm is to increase the revenue of each resource, which accumulates into a large profit for the CSPs. To achieve this, they focused on reducing SLA violations to keep the profit increasing. In our case, however, we focus on the configuration of the queue, as shown in the algorithm below:
Algorithm for Adaptive Configuration Workload Message Queue
1. let N := total number of VMs in the Cloud Service Provider (CSP)
2. let M := total number of VMs currently in use
3. let x := N - M (total number of unused/guest VMs)
4. let page[N] := set of all guest VM pages
5. let pivot := 0; bubble := 0
6. ActivePush(Guest VM)
7. while bubble < max(pivot, N - pivot) do
8.   get_health_card(vmID)
9.   enqueue the VM for transmission
10.  bubble++
11. PageFault(Guest-page X)
12. discard pending queue
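As a concrete rendering of the pseudocode above, the following minimal Python sketch shows how the health-card-gated allocation from two queues might look. The class and function names (VM, get_health_card, serve_request) and the pool sizes are our own illustrative assumptions, not the implementation evaluated in the experiments.

from collections import deque

class VM:
    # One virtual machine; sizes follow the setup below (1000 MIPS, 512 MB RAM).
    def __init__(self, vm_id, mips=1000, ram_mb=512, healthy=True):
        self.vm_id = vm_id
        self.mips = mips
        self.ram_mb = ram_mb
        self.healthy = healthy

    def get_health_card(self):
        # Step 8 of the algorithm: check the VM's capability for the load.
        return self.healthy

def serve_request(local_queue, reserved_queue):
    # Pop a healthy VM from the local queue; fall back to the reserved
    # queue (VMs borrowed from other CSPs) when the local pool is empty.
    for queue in (local_queue, reserved_queue):
        while queue:
            vm = queue.popleft()
            if vm.get_health_card():   # only health-checked VMs are used
                return vm
    return None   # no VM available: the request must wait

# Usage: 8 local VMs (2 hosts x 4 VMs/host, as in Table 5.1) plus 4
# reserved VMs assumed to be borrowed from another CSP.
local = deque(VM(i) for i in range(8))
reserved = deque(VM(100 + i) for i in range(4))
vm = serve_request(local, reserved)
print("allocated", vm.vm_id if vm else "none")

The key design point is the health-card gate: a VM enters service only after its capability for the current load has been verified, which is what keeps the reserved queue trustworthy as a fallback.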
In our approach we track the total number of VMs available in a CSP and put them in a queue. We use two queues; the second queue holds the reserved virtual machines, which may or may not be collected from other CSPs. The use of other CSPs' VMs depends entirely on the unavailability of local VMs. The experimental setup for the comparative study of our technique against the base paper is as follows:
Table 5.1: Cloud Configuration Details
DCs 10
Hosts 2
VMs/Host 4
Hypervisor Xen
Requests Frequency 1~1000
All VMs are of the same size: each VM has MIPS = 1000, RAM = 512 MB, bandwidth = 1000 kbps and 1 CPU core.
Figure 3 compares the number of served requests for the VM-isolated and the adaptive configurations. We notice that as the availability rate increases, the number of requests served also increases. In our algorithm, however, the virtual machine shortage is overcome through other CSPs and all available VMs are kept in the queue; as a result, requests are processed about 10% faster with our algorithm.
Figure 3: Number of Requests Served (x-axis: availability rate, 0.1 to 1.0; y-axis: requests served, 0 to 50,000; series: Base, Adaptive)

In Figure 4 we can see that the adaptive configuration does not allow service-level violations, thanks to the reserved VMs in the queue. In this case we assume an SLA threshold of 50 ms; as the number of requests increases, our algorithm does not let the processing time cross this threshold, and it fetches a VM from the reserved queue whenever the required VM is not available locally.
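The threshold rule behind Figure 4 can be sketched as follows; the wait-time estimator and its per-request cost are stand-in assumptions, with only the 50 ms threshold taken from the text.

SLA_MS = 50.0   # SLA threshold assumed in the text

def estimated_wait_ms(pending_requests, free_local_vms, per_request_ms=5.0):
    # Crude stand-in estimate: pending requests spread evenly over free VMs.
    if free_local_vms == 0:
        return float("inf")
    return per_request_ms * pending_requests / free_local_vms

def choose_pool(pending_requests, free_local_vms, reserved_available):
    # Serve locally unless the predicted time breaches the SLA and a
    # reserved (borrowed) VM can absorb the overflow.
    if estimated_wait_ms(pending_requests, free_local_vms) <= SLA_MS:
        return "local"
    return "reserved" if reserved_available else "local (SLA at risk)"

print(choose_pool(pending_requests=200, free_local_vms=8, reserved_available=True))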


Figure 4: Restrictions for SLA Violations (x-axis: number of requests, 1,000 to 15,000; y-axis: processing time in ms; series: Achieved time, SLA time)

5. CONCLUSION AND FUTURE SCOPE
In this paper we have evaluated the response time and cost of the proposed framework and found that its efficiency is far superior to that of the base paper. In future work we can add SLA violation as a parameter and try to minimize it; this minimization could be crucial for increasing overall performance and for managing virtual machines.
We evaluated the comparison between the VM-isolated and the adaptive-configuration message queues. The VM-isolated method is a better way to handle a massive demand for resources at a particular time, with extra resources hired from other CSPs. Under such conditions, when the load on the server rises, sharing is also a possible way to cope; but if sharing affects the efficiency of the system, then the adaptive message queue is preferred, since it works according to the on-demand requirements. Additionally, previous research considered hiring from other CSPs to be the better approach, and we continued it by maintaining the message queue according to the on-demand needs.
As future scope, since we achieve the adaptive configuration by hiring from other CSPs whenever required, the pool of available CSPs can be enlarged so that more requests can be handled at a time (in case of a shortage of CSPs) while preserving quality of service and better efficiency.
References
[1] K. Keahey, R. Figueiredo, J. Fortes, T. Freeman, and M. Tsugawa, "Science clouds: Early experiences in cloud computing for scientific applications," 2008.
[2] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B. Berman, and P. Maechling, "Scientific workflow applications on Amazon EC2," in E-Science Workshops, 2009 5th IEEE International Conference on, December 2009, pp. 59-66.

[3] A. Luckow et al., "Distributed replica-exchange simulations on production environments using SAGA and Migol," in IEEE Fourth International Conference on eScience, December 2008, pp. 253-260.
[4] C. J. Woods et al., "Grid computing and biomolecular simulation," Philosophical Transactions: Mathematical, Physical and Engineering Sciences, vol. 363, pp. 2017-2035, 2009.
[5] A. Natrajan, M. Crowley, N. Wilkins-Diehr, M. Humphrey, A. Fox, A. Grimshaw, and C. L. Brooks, "Studying protein folding on the grid: Experiences using CHARMM on NPACI resources under Legion," in Proceedings of the 10th IEEE International Symposium on High Performance Distributed Computing, 2001, pp. 14-21.
[6] Joerg Fritsch et al., "A lightweight asynchronous high-performance message queue in cloud computing," Journal of Cloud Computing: Advances, Systems and Applications, 2012.
[7] M. Joseph and P. Pandya, "Finding response time of message queues in a real-time system," BCS Computer Journal, 29(J):390-395, 2002.
[8] Rodrigo N. Calheiros et al., "A novel framework for modeling and simulation of cloud computing infrastructure and services," Journal of Computing and Information Technology, CIT-16-2008, 4-235-246.
[9] http:// asyncoronous message queues in cloud computing.org/, accessed June 2014.
[10] D. Ardagna, M. Trubian, and L. Zhang, "SLA based profit optimization in multi-tier systems," Proceedings of the 4th IEEE International Symposium on Network Computing and Applications, Cambridge, Massachusetts, USA, July 27-29, 2005.
[11] Mohammed Alhamad, Tharam Dillon and Elizabeth Chang, "Conceptual SLA framework for cloud computing," 4th IEEE International Conference on Digital Ecosystems and Technologies, 2010.
[12] Hyun Jin Moon, Yun Chi and Hakan Hacigumus, "SLA-aware profit optimization in cloud services via resource scheduling," IEEE 6th World Congress on Services, 2010.

Implementation of Decimation Filter for Hearing Aid Application
Prof. Suraj R. Gaikwad, Er. Shruti S. Kshirsagar and Dr. Sagar R. Gaikwad
Electronics Engineering Department, D.M.I.E.T.R. Wardha
email: surajrgaikwad@gmail.com
Mob. No: +91982389441


Abstract - A hearing aid is a small electronic device worn in or behind the ear by a person with hearing loss. A hearing aid helps people hear better in both quiet and noisy situations. It makes sounds louder so that a person with hearing loss can listen, communicate, and participate better in daily activities. In this paper, we implement a digital filter for the hearing aid application. The implemented filter is based on the multirate approach, in which a high-sampling-rate signal is decimated to a low-sampling-rate signal. The proposed decimation filter is designed and implemented using the Xilinx System Generator and Matlab Simulink.


Keywords - Digital Filter, CIC filter, FIR filter, Half band filter and Oversampling Concept.

INTRODUCTION

Filters are a basic component of all signal processing and telecommunication systems. The primary functions of a filter are one or more of the following: (a) to confine a signal into a prescribed frequency band or channel; (b) to decompose a signal into two or more sub-band signals for sub-band signal processing; (c) to modify the frequency spectrum of a signal; (d) to model the input-output relation of a system such as voice production, musical instruments, telephone line echo, and room acoustics [2].

Hearing aids are primarily meant for improving hearing and speech comprehension. Digital hearing aids score over their analog counterparts because they provide flexible gain besides facilitating feedback reduction and noise elimination. Recent advances in DSP and microelectronics have led to the development of superior digital hearing aids [6]. Many researchers have investigated algorithms suitable for the hearing aid application, which demands low noise, feedback cancellation, echo cancellation, etc.; however, the toughest challenge is the implementation [8].

DIGITAL FILTER

A digital filter uses a digital processor to perform numerical calculations on sampled values of the signal. The processor may be a
general-purpose computer such as a PC, or a specialized DSP (Digital Signal Processor) chip [3]. The analog input signal must first be
sampled and digitized using an ADC (analog to digital converter). The resulting binary numbers, representing successive sampled
values of the input signal, are transferred to the processor, which carries out numerical calculations on them. These calculations
typically involve multiplying the input values by constants and adding the products together [7]. If necessary, the results of these
calculations, which now represent sampled values of the filtered signal, are output through a DAC (digital to analog converter) to
convert the signal back to analog form. In a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or current. Figure 1 shows the basic setup of such a system.

Figure 1: Basic set-up of a digital filter
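The multiply-and-add operation described above is the FIR difference equation y[n] = b[0]x[n] + b[1]x[n-1] + ... + b[N]x[n-N]. A minimal Python sketch follows; the 3-tap moving average is just an illustrative choice of coefficients.

def fir_filter(x, b):
    # Direct-form FIR: multiply input samples by constants (the taps)
    # and add the products together.
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y

# 3-tap moving average as a simple low-pass example:
print(fir_filter([1.0, 2.0, 3.0, 4.0], [1/3, 1/3, 1/3]))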
CIC FILTER

In 1981, E. B. Hogenauer introduced an efficient way of performing decimation and interpolation. Hogenauer devised a flexible,
multiplier-free filter suitable for hardware implementation that can also handle arbitrary and large rate changes. These are known as
cascaded integrator-comb filters (CIC filters) [14].


The simplest CIC filter is composed of a comb stage and an integrator stage. The block diagram of a three-stage CIC filter is shown in Figure 2.

Fig. 2(a): Three-stage decimating CIC filter

Fig. 2(b): Three-stage interpolating CIC filter
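The structure can be illustrated with a short Python sketch of an N-stage decimating CIC filter, using the parameters this paper adopts in Table 1 (N = 4, R = 16, M = 1). This is a behavioral model for intuition, not the Xilinx implementation.

def cic_decimate(x, N=4, R=16, M=1):
    # Integrator section, running at the input rate: y[n] = y[n-1] + x[n].
    integ = [0] * N
    hi = []
    for sample in x:
        acc = sample
        for i in range(N):
            integ[i] += acc
            acc = integ[i]
        hi.append(acc)
    # Rate change by R, then the comb section at the low rate:
    lo = hi[R - 1::R]
    delays = [[0] * M for _ in range(N)]   # comb: y[n] = x[n] - x[n-M]
    out = []
    for sample in lo:
        acc = sample
        for i in range(N):
            acc, delays[i] = acc - delays[i][0], delays[i][1:] + [acc]
        out.append(acc)
    return out

y = cic_decimate([1] * 64)   # step input
print(y)   # settles to the DC gain (R*M)**N = 65536, Table 1's bit gain

Note that the structure is multiplier-free: only additions, subtractions and delays are needed, which is why Hogenauer's filter is so hardware-friendly.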
FIR FILTER

In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite-length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying) [12]. The impulse response of an Nth-order discrete-time FIR filter (i.e., with a Kronecker delta impulse input) lasts for N + 1 samples and then settles to zero. The non-recursive nature of the FIR filter offers the opportunity to create implementation schemes which significantly improve the overall efficiency of the decimator.

We have designed and implemented a conventional CIC-FIR-FIR decimation filter. FIR filters offer great control over filter shaping and linear-phase performance with waveform retention over the passband.

OVERSAMPLING CONCEPT

In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than the Nyquist
frequency. Theoretically a bandwidth-limited signal can be perfectly reconstructed if sampled at or above the Nyquist frequency.
Oversampling improves resolution, reduces noise and helps avoid aliasing and phase distortion by relaxing anti-aliasing
filter performance requirements [3].

IMPLEMENTATION OF CIC-FIR-FIR DECIMATION FILTER STRUCTURE

The incoming oversampled signal at the rate of 1.28 MHz has to be down-sampled to the rate of 20 kHz. We have chosen a passband frequency of 4 kHz because the human ear is sensitive to sounds within the range up to 4 kHz. Figure 3 shows the proposed decimation filter structure using the CIC-FIR-FIR cascade.

Fig. 3: Simulink model of CIC-FIR-FIR Decimation filter

This Simulink model of the CIC-FIR-FIR decimation filter is designed using Matlab Simulink and the Xilinx System Generator. In this design, the incoming 1.28 MHz sample rate is first down-sampled by a Xilinx CIC filter and then by two Xilinx DAFIR filters. These FIR filters are based on the Distributed Arithmetic principle, which results in less hardware and lower power consumption compared to other decimation filters.

The overall frequency specification of the CIC filter is given in Table 1.

No. of Stages (N) 4
Sampling Frequency (Fs) 1.28 MHz
Decimation Factor (R) 16
Bit gain (G) 65536
No. of output bits (Bout) 32
Filter Gain (Gf) 1
Scale Factor (S) 1
Table 1: Frequency specification of the CIC filter


Fig. 4: Magnitude response of the 4-stage CIC filter
The figure above shows the magnitude response of the 4-stage CIC filter, in which an attenuation of about 48 dB is obtained. This magnitude response is plotted with N = 4, R = 16 and M = 1.
FIRST FIR FILTER DESIGN
Considering the application requirements, both FIR and IIR filter structures can be used to meet the design specifications. FIR filters offer great control over filter shaping and linear-phase performance with waveform retention over the passband. Due to its all-zero structure, the FIR filter has the linear phase response necessary for audio applications, but at the expense of a high filter order. An IIR filter can be designed with a much smaller order than the FIR filter, at the expense of nonlinear phase; it is very difficult to design a linear-phase IIR filter. Thus, we have designed an FIR filter as the compensation filter. The specification of this FIR filter is given in Table 2.
Sampling Frequency (Fs) 80 kHz
Passband Frequency (Fpass) 20 kHz
Stopband Frequency (Fstop) 35 kHz
Transition width (Δf) 0.1875
Passband Attenuation (Apass) 1 dB
Stopband Attenuation (Astop) 85 dB
Filter Length (N) 12
Table 2: Filter specification of the first FIR filter



Fig. 5: Magnitude response of the first FIR filter
The figure above shows the magnitude response of the first FIR filter, in which a stopband attenuation of about 85 dB is obtained. This magnitude response is plotted with Fpass = 20 kHz and Fstop = 35 kHz.
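A rough SciPy counterpart of the Table 2 specification is sketched below for readers without the Xilinx tool flow. It only shows the design workflow: the paper's 12-tap DAFIR design is not reproduced, and a 12-tap equiripple filter will generally achieve less than the reported 85 dB over this transition band.

import numpy as np
from scipy import signal

fs, fpass, fstop, numtaps = 80e3, 20e3, 35e3, 12
taps = signal.remez(numtaps, [0, fpass, fstop, fs / 2], [1, 0], fs=fs)

w, h = signal.freqz(taps, worN=2048, fs=fs)
stop = w >= fstop
print("achieved stopband attenuation: %.1f dB"
      % (-20 * np.log10(np.max(np.abs(h[stop])))))

x = np.random.randn(1024)
y = signal.lfilter(taps, 1.0, x)[::2]   # filter, then decimate 80 kHz -> 40 kHz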
SECOND FIR FILTER DESIGN
An additional FIR filter is designed to push out-of-band undesired signals further down. The FIR filter is used in the last stage instead of a shaping filter for lower power consumption, because a shaping filter has more taps than an FIR filter. The second FIR filter is used as a corrector filter with a passband of 4 kHz, because the human ear is sensitive to sounds within the range up to 4 kHz. From the frequency response of the second FIR filter it can be seen that a stopband attenuation of more than 100 dB is obtained, which is suitable for this corrector filter. The specification of the second filter is given in Table 3.
Sampling Frequency (Fs) 40 kHz
Passband Frequency (Fpass) 4 kHz
Stopband Frequency (Fstop) 15 kHz
Transition width (Δf) 0.275
Passband Attenuation (Apass) 1 dB
Stopband Attenuation (Astop) 100 dB
Filter Length (N) 8
Table 3: Filter specification of the second FIR filter


Fig. 6: Magnitude response of the second FIR filter

The figure above shows the magnitude response of the second FIR filter, in which a stopband attenuation of more than 100 dB is obtained. This magnitude response is plotted with Fpass = 4 kHz and Fstop = 15 kHz.
CIC-HALF BAND FIR-FIR DECIMATION FILTER STRUCTURE
This decimation filter is implemented using a CIC-Half band FIR-FIR cascade, and its operation is very similar to that of the CIC-FIR-FIR filter. The incoming oversampled signal at the rate of 1.28 MHz has to be down-sampled to the rate of 20 kHz. We have chosen a passband frequency of 4 kHz because the human ear is sensitive to sounds within the range up to 4 kHz.
A half-band IIR filter can have fewer multipliers than an FIR filter for the same sharp cutoff specification. An IIR elliptic half-band filter implemented as a parallel connection of two all-pass branches is an efficient solution. The main disadvantage of elliptic IIR filters is their very nonlinear phase response [9]. To overcome the phase distortion, one can use optimization to design an IIR filter with an approximately linear phase response, or one can apply double filtering with the block-processing technique for real-time processing. For the appropriate usage of digital filter design software in half-band filter design, it is necessary to calculate the exact relations between the filter design parameters in advance; an accurate method is available for the FIR half-band filter.
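For the FIR half-band case, the defining property is a cutoff at Fs/4 with every second coefficient (except the centre tap) equal to zero, which is what halves the multiplier count. A small SciPy sketch with an assumed 15-tap length (not the paper's exact design):

import numpy as np
from scipy import signal

numtaps = 15                                # odd length, linear-phase
taps = signal.firwin(numtaps, cutoff=0.5)   # cutoff = Fs/4 (Nyquist = 1.0)
print(np.round(taps, 4))                    # alternate taps come out zero

x = np.random.randn(1024)
y = signal.lfilter(taps, 1.0, x)[::2]       # filter, then decimate by 2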

We have designed a CIC-Half band FIR-FIR decimation filter using the Matlab Simulink model and the Xilinx System Generator for the same specification as the CIC-FIR-FIR decimation filter; the designed Simulink model of the CIC-Half band FIR-FIR filter is shown in Figure 7.

Fig. 7: Simulink model of the CIC-Half band FIR-FIR decimation filter
This Simulink model of the CIC-Half band FIR-FIR decimation filter is designed using Matlab Simulink and the Xilinx System Generator. In this design, the incoming 1.28 MHz sample rate is first down-sampled by a Xilinx CIC filter and then by two Xilinx DAFIR filters; here, the first DAFIR filter is configured as a half-band FIR filter. These FIR filters are based on the Distributed Arithmetic principle, which results in less hardware and lower power consumption compared to other decimation filters.


Fig. 8: Magnitude response of the half-band FIR filter

The figure above shows the magnitude response of the half-band FIR filter, in which a stopband attenuation of more than 50 dB is obtained.
RESULT
Decimation Filter Architecture    Number of Taps   Number of Slices   Number of Flip-Flops   LUT    IOB
CIC-FIR-FIR Filter                20               2644               4769                   3561   32
CIC-Half band FIR-FIR Filter      14               2548               4729                   3394   32
Table 4: Comparison between Decimation filter architectures
Table 4 shows the resources used for the CIC-FIR-FIR and CIC-Half band FIR-FIR filter designs. The CIC-Half band FIR-FIR filter requires fewer taps, thanks to the half-band filter, and it also uses fewer slices, flip-flops and LUTs compared to the CIC-FIR-FIR filter. Thus the area used and the power consumed are lower with the CIC-Half band FIR-FIR filter design than with the CIC-FIR-FIR filter design. Hence we conclude that the designed CIC-Half band FIR-FIR decimation filter is a hardware-saving structure.
CONCLUSION

The decimation filter is designed for an oversampled signal for the audio application. The CIC-FIR-FIR filter and the CIC-Half band FIR-FIR filter are designed and compared in terms of storage requirement, area used and power consumption for the same specifications. It is observed that the CIC-Half band FIR-FIR filter requires less storage for filter coefficients, less area and less power consumption than the CIC-FIR-FIR filter. Hence, the CIC-Half band FIR-FIR filter is more efficient than the CIC-FIR-FIR filter.


REFERENCES:
[1] L.C Loong and N.C Kyun, Design and Development of a Multirate Filters in Software Defined Radio Environment,
International Journal of Engineering and Technology, Vol. 5, No. 2, 2008.
[2] Suraj R. Gaikwad and Gopal S. Gawande, Implementation of Efficient Multirate Filter Structure for Decimation, International
Journal of Current Engineering and Technology, Vol.4, No.2 , April 2014.
[3] Fredric J. Harris and Michael Rice, Multirate Digital Filters for Symbol Timing Synchronization in Software Defined Radios,
IEEE Journal vol. 19, no. 12, December 2001.
[4] Ronald E. Crochiere and Lawrence R. Rabiner, Further Considerations in the Design of Decimators and Interpolators, IEEE
Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-24, no. 4, August 1976.
[5] Suraj R. Gaikwad and Gopal S. Gawande, Design and Implementation of Efficient FIR Filter Structures using Xilinx System
Generator, International Journal of scientific research and management, volume 2 issue 3 March 2014.
[6] University of Newcastle upon Tyne, Multirate Signal Processing, EEE305, EEE801 Part A.
[7] Ljiljana Milic, Tapio Saramaki and Robert Bregovic, Multirate Filters: An Overview, IEEE Journal, 1-4244-0387, 2006.
[8] L. D. Milic and M.D. Lutovac, Design multirate filtering, Idea Group Publishing, pp. 105-142, 2002.
[9] S.K. Mitra, Digital Signal Processing: A Computer based approach, The McGrow-Hill Companies, 2005.
[10] Yonghao Wang and Joshua Reiss, Time domain performance of decimation filter architectures for high resolution sigma delta
analogue to digital conversion, Audio Engineering Society ConventionPaper 8648 Presented at the 132nd Convention, April 2012.
[11] Kester, Mixed-signal and DSP Design Techniques. Norwood, MA: Analog Devices, Ch.3, pp. 16-17. 2000.
[12] Ljiljana D. Milić, Efficient Multirate Filtering, Software & Systems Design, 2009.
[13] Damjanović, S., Milić, L. & Saramäki, T., Frequency transformations in two-band wavelet IIR filter banks, Proceedings of the IEEE Region 8 International Conference on Computer as a Tool, EUROCON 2005.
[14] Hogenauer, E., An economical class of digital filters for decimation and interpolation, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 29, No. 2, pp. 155-162, 1981.
[15] N. J. Fliege, Multirate digital signal processing, New York: John Wiley & Sons, 1994.
[16] P.P. Vaidyanathan, Multirate systems and filter banks. Englewood Cliffs, NJ: Prentice Hall, 1993.
[17] A.I. Russel, "Efficient rational sampling rate alteration using IIR filters," IEEE Signal processing Letters, vol. 7, pp. 6-7, Jan.
2000.
[18] M. D. Lutovac, and L. D. Milic, "Approximate linear phase multiplierless IIR half-band filter," IEEE Signal Processing Letters,
vol.7, pp. 52-53, March 2000.
[19] Matthew P. Donadio, CIC Filter Introduction, m.p.donadio@ieee.org ,18 July 2000.
[20] Fredric J. Harris and Michael Rice, Multirate Digital Filters for Symbol Timing Synchronization in Software Defined Radios,
IEEE Journal vol. 19, no. 12, December 2001


Mechanical Properties of Epoxy Based Hybrid Composites Reinforced with Sisal/SiC/Glass Fibers
Arpitha G R, Sanjay M R, L Laxmana Naik, B Yogesha
Department of Mechanical Engineering, Malnad College of Engineering, Hassan, Karnataka, India
E-mail- arpithagr11@gmail.com
Abstract - The development of polymer composites with natural fibers and fillers as sustainable alternative materials for engineering applications, particularly aerospace and automobile applications, is being investigated. Natural fiber composites such as sisal, jute, hemp and coir polymer composites appear attractive due to their higher specific strength, light weight, biodegradability and low cost. In this study, sisal/glass/SiC fiber reinforced epoxy composites are prepared and their mechanical properties such as tensile strength, flexural strength and impact strength are evaluated. Composites with silicon carbide filler (without filler, 3, 6 & 9 wt.%), sisal fiber and glass fiber are investigated, and the results show that the composites without filler give better results than the composites with silicon carbide filler.

Key words - Sisal fiber, Glass fiber, Silicon carbide, Epoxy, Polymers, Hand layup, Mechanical property

1. INTRODUCTION
The development of polymer composites with natural fibers and fillers as sustainable alternative materials for engineering applications, particularly aerospace and automobile applications, is under way [1]. Natural fibers show superior mechanical properties such as stiffness, flexibility and modulus compared to glass fibers [2]. Some of the natural fibers are sisal, jute, hemp, coir, bamboo and other fibrous materials [3]. The main advantages of natural fibers are low cost, light weight, easy production and environmental friendliness [4]. Composite materials are intended to combine desired characteristics of two or more distinct materials. The reinforcement can be synthetic (e.g. glass, carbon, boron and aramid) or from natural sources (e.g. curaua, sisal, jute, piassava, hemp, coir, flax and banana). The main benefits of exploiting natural fibers are abundance and renewability, low cost, non-abrasiveness, simple processing, non-toxicity, high flexibility, acoustic insulation and low density [1, 2]. On the other hand, there are some drawbacks, such as their poor mechanical properties and high moisture absorption. The latter is due to their hydrophilic nature, which is detrimental to many properties, including dimensional stability [3]. Nevertheless, some composite components (e.g. for the automotive sector), previously manufactured with glass fibers, are now produced with natural fibers. Applications including door panels, trunk liners, instrument panels, interior roofs and parcel shelves, among other interior components, are already in use in European cars due to the more favorable economic, environmental and social aspects of vegetable fibers [4]. N. Venkateshwaran et al. [5] reported the mechanical and water absorption behavior of banana/sisal reinforced hybrid composites; they observed that the flexural modulus and impact strength increase with increasing fiber length and fiber weight percentage. Leandro Jose da Silva et al. [6] investigated the apparent density, apparent porosity and water absorption of composites with sisal fibers and silica microparticles; they concluded that a low fiber volume fraction provided not only higher modulus of elasticity and mechanical strength under tensile and flexural loadings but also favorable values of apparent density, apparent porosity and water absorption. Kai Yang et al. [7] studied the thermal conductivity of epoxy nanocomposites filled with single and hybrid filler systems; the hybrid filler system not only yields higher thermal conductivity of the epoxy composites but also resists the formation of large filler agglomerates.
The addition of a filler consisting of a combination of silicon and carbon black powders decreases the amount of residual free silicon to a negligible level but increases the amount of internally reaction-bonded SiC; the filler reduced the flexural strength, indicating damage to the fiber, but it drastically improved the wear resistance characteristics of the composites [8].
Sisal/GFRP composite samples possess good tensile strength, and Jute/GFRP composite specimens showed the maximum flexural load [9]. The maximum strength is achieved when the length of the fiber in the laminate equals the critical fiber length. The strength of short fiber composites depends on the type of fiber matrix, fiber length, fiber orientation, fiber concentration and the bonding between fiber and matrix [10]. Thermal properties such as TGA and DSC were investigated by H. Ranganna et al., who concluded that the influence of a change in fibre length (treated and untreated hybrid composites) shows significant improvement in the tensile, flexural and compressive strengths of the sisal/glass hybrid composite [11]. A. Gowthami et al. studied the effect of silica on sisal fiber reinforced polyester composites; their results show that the tensile strength of the composite with silica is 1.5 times greater than that of the composite without silica and 2.5 times greater than that of the pure resin. The tensile modulus of the composite with silica is 1.809 GPa, whereas for the composite without silica it is about 1.67 GPa. The impact strength of the composite with silica is 80% greater than that of the matrix [12]. Hemalata Jena et al. investigated the effect of filling a bamboo fiber composite with cenosphere; the results show that the impact property of the bio-fiber reinforced composite is greatly influenced by the addition of cenosphere as filler, and the impact strength increases with filler addition up to a certain limit, after which it decreases on further addition [13]. Sandhyarani Biswas et al. investigated the effects of

ceramic fillers on bamboo fiber composites; they conclude that with the incorporation of particulate fillers the tensile strengths of the composites decrease. Among the particulate-filled bamboo-epoxy composites, the lowest void content is recorded for composites with silicon carbide filling, and for the composites with glass fiber reinforcement the minimum void fraction is noted for red mud filling [14]. According to S. Husseinsyah et al., in their study of the effect of filler content on the properties of coconut shell filled polyester composites, high filler content adversely affects the processability, ductility and strength of the composites. The effect of the coconut shell content of polyester composites on mechanical properties, swelling behavior and morphology was studied. The results show that the tensile strength, Young's modulus and water absorption of the polyester composites increased with increasing coconut shell content, but the elongation at break decreased. The morphology study indicates that the filler-matrix interaction improved with increasing filler in the polyester matrix [15].
2. EXPERIMENTAL
2.1 MATERIALS AND METHOD
In the present investigation, sisal fiber (Agave sisalana), glass fiber (woven mat form) and silicon carbide filler (240 mesh) are used. Sisal fibers were obtained from Dharmapuri District, Tamil Nadu, India. The glass fiber reinforced polymer (GFRP) used for the fabrication is a unidirectional mat of 360 gsm supplied by M/s Suntech Fiber Ltd., Bangalore. Silicon carbide was supplied by M/s Mysore Pure Chemicals, Mysore, Karnataka, India. Commercially available epoxy (LY-556) and hardener (HY-951) were supplied by M/s Zenith Industrial Suppliers, Bangalore.

2.2 SISAL AND GLASS FIBER
In the recent two decades, there has been a dramatic increase in the utilization of natural fibers, for example fibers extracted from sisal, jute, coir, flax, hemp, pineapple and banana, for making environmentally friendly and biodegradable composite materials. A bundle of fibers is mounted or clamped on a stick to facilitate separation. Each fiber is sorted according to fiber size and grouped appropriately. To bundle the fibers, each fiber is separated and knotted to the end of another fiber manually. The separation and knotting are repeated until bundles of unknotted fibers are finished, forming a long continuous strand. This sisal fiber can be used for making a variety of products. The E-glass variety of fiber is used as reinforcement in the FRP preparation, which has the following properties: its bulk strength and weight properties are very favorable when compared to metals. The plastic matrix may be epoxy, a thermosetting plastic (most often polyester or vinylester) or a thermoplastic. Table 1, Table 2 and Table 3 show the physical properties of sisal and glass fibers and silicon carbide.

Table 1. Physical properties of sisal fiber
Density (kg/m³) 1350
Elongation at break (%) 2-3
Cellulose content (%) 63-64
Lignin content (%) 5
Tensile strength (MPa) 54
Young's modulus (GPa) 3.4878
Lumen size (mm) 5

Table 2. Physical properties of glass fiber
GSM 360 gsm
Orientation plain-woven fabric
UTS 40 GPa
Modulus 1.0 GPa
Density 1.9 g/cc

Table 3. Properties of silicon carbide
Density (g/cc) 3.1
Flexural strength (MPa) 550
Elastic modulus (GPa) 410
Compressive strength (MPa) 3900
Hardness (kg/mm²) 2800

2.3 PREPARATION OF COMPOSITE SPECIMEN
In the present investigation the composite materials are fabricated by the hand layup process. Sisal mat and glass fibers were cut to dimensions of 250 × 250 mm to prepare the specimens. The composite specimen consists of a total of 6 layers of glass fiber and 5 layers of sisal fiber for the preparation of the different samples. A measured amount of epoxy is taken for each volume fraction of fiber and mixed with the hardener in the ratio of 10:1, and the silicon carbide filler is added to that mixture (3, 6, 9 wt.%) using a tip ultrasonicator. The layers of fibers were laid up by adding the required amount of epoxy resin. The glass fiber is placed on the table and the epoxy resin is applied to it. Before the resin dries, the second layer, of natural fiber, is placed over the glass fiber. The process is repeated until the six layers of glass fiber and five layers of sisal fiber are laid. The epoxy resin applied is distributed over the entire surface by means of a roller. The air gaps formed between the layers during processing were gently squeezed out. The processed wet composite was then pressed hard, the excess resin removed, and the laminate dried. Finally these specimens were pressed in a hydraulic press to force out the air trapped between the fibers and resin, and then kept for several hours to obtain perfect samples. After the composite material had dried completely, it was taken out of the hydraulic press and the rough edges were neatly cut and removed as per the required ASTM standards. Two types of composites were prepared: one with the addition of silicon carbide filler (3, 6, 9 wt.%) and one without silicon carbide filler.
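The batch arithmetic implied by the 10:1 epoxy:hardener ratio and the 3/6/9 wt.% filler loadings can be written out as below. The 500 g batch size is an assumed example, and the filler percentage is interpreted here as a fraction of the resin mix; neither value is stated in the paper.

def resin_batch(total_resin_g, sic_wt_pct):
    # 10 parts LY-556 epoxy to 1 part HY-951 hardener, by weight.
    epoxy = total_resin_g * 10 / 11
    hardener = total_resin_g * 1 / 11
    sic = total_resin_g * sic_wt_pct / 100   # SiC filler loading
    return {"epoxy_g": round(epoxy, 1),
            "hardener_g": round(hardener, 1),
            "SiC_g": round(sic, 1)}

for pct in (0, 3, 6, 9):
    print(pct, "wt.% SiC ->", resin_batch(500.0, pct))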

3. RESULT AND DISCUSSION
In this study natural fibers are added to glass fiber and silicon carbide filler, and their effect on tensile, flexural and impact properties is evaluated. The different specimens used for tensile (Fig. 1), flexural (Fig. 2) and impact testing (Fig. 3) are shown. The results of the tensile, flexural and impact testing of the hybrid composite samples are given in Table 4.

Fig. 1. Tensile test specimen. Fig. 2. Flexural test specimen


Fig. 3. Impact test specimen


Table 4. Test results of specimens
Sample (wt.%)   Tensile strength (N/mm²)   Tensile modulus (N/mm²)   Flexural strength (MPa)   Impact strength (kJ/m²)
Without SiC     158.167                    2747.42                   414.87                    33.71
3% SiC          156.882                    3620.66                   558.6                     32.0
6% SiC          114.81                     1882.03                   404.06                    28.25
9% SiC          91.331                     1619.9                    467.75                    31.85

3.1 TENSILE PROPERTIES
The composite samples were tested in the universal testing machine (UTM) and the stress-strain curves were plotted. The typical graphs generated directly from the machine for the tensile test of the sisal/glass composite without silicon carbide filler and the sisal/glass composites with silicon carbide filler are plotted in Figs. 4, 5, 6 and 7.
The results indicate that the ultimate tensile strength of the composite without silicon carbide is higher than that of the composites with silicon carbide filler.

Fig. 4. Stress-strain curve of sample without SiC


Fig. 5. Stress-strain curve of sample with 3% SiC


Fig. 6. Stress-strain curve of sample with 6% SiC

Fig. 7. Stress-strain curve of sample with 9% SiC
3.2 FLEXURAL PROPERTIES
The composite samples are tested in the universal testing machine (UTM) and the stress-strain curves are plotted. The typical graphs generated directly from the machine for the flexural test of the sisal/glass composite without silicon carbide filler and the sisal/glass composites with silicon carbide filler are plotted in Figs. 8, 9, 10 and 11.
The flexural properties of the different composite samples are tested and the results are plotted. The results indicate that the ultimate flexural strength of the composite with 3% silicon carbide filler is higher than that of the other composites with silicon carbide filler and the composite without silicon carbide filler.



Fig. 8. Stress-strain curve of sample without SiC


Fig. 9. Stress-strain curve of sample with 3% SiC

Fig. 10. Stress-strain curve of sample with 6% SiC


Fig. 11. Stress-strain curve of sample with 9% SiC

3.3 IMPACT PROPERTIES
To analyze the impact property of the different specimens, an impact test is carried out; the test used in the present study is the Charpy impact test. The energy loss is obtained from the Charpy impact machine. The impact response of the sisal/SiC/glass composites in the Charpy impact test is presented in Fig. 12. The results indicate that the maximum impact strength is obtained for zero percent silicon carbide in the sisal/glass composites.


Fig. 12. Impact load comparison of the different composite materials (x-axis: filler %, 0 to 9; y-axis: impact strength)

CONCLUSION
The sisal/SiC/glass hybrid composite specimens were prepared and subjected to tensile, flexural and impact loadings. From the experimental results, the following observations can be made.
- The sisal/glass composite samples possess good tensile strength and can withstand stresses up to 158.167 N/mm².
- The sisal/glass fiber composite filled with 3% silicon carbide possesses good flexural strength and can withstand stresses up to 558.6 MPa.

- The sisal/glass composites possess good impact strength, up to 33.71 kJ/m², compared to the other composites filled with silicon carbide filler.
- From the results it can be concluded that the sisal/glass composites without filler show good tensile strength, the sisal/glass composites with 3% silicon carbide filler show good flexural strength compared to the other composites, and the composites without filler also perform well in impact compared to the composites filled with silicon carbide filler.
- The performance of these natural fibers with glass fiber exceeds that of the composites with silicon carbide filler; they can be used in many applications which need lower strength.

REFERENCES:

[1] Silva R.V., Composito de resina poliuretana derivada de oleo de mamona e fibras vegetais. Tese (Doutorado), Escola de Engenharia de São Carlos, Universidade de São Paulo, São Carlos, 2003, p. 139.
[2] Goulart S.A.S., Oliveira T.A., Teixeira A., Mileo P.C., Mulinari D.R., Mechanical behaviour of polypropylene reinforced palm fibers composites, Procedia Engineering, 2011;10:2034-2039.
[3] Geethamma V.G., Thomas Mathew K., Lakshminarayanan R., Sabu Thomas, Composite of short coir fibers and natural rubber: Effect of chemical modification, loading and orientation of fiber, Polymer 1998;6:1483-90.
[4] Joshi S.V., Drzal L.T., Mohanty A.K., Arora S., Are natural fiber composites environmentally superior to glass fiber-reinforced composites?, Composites Part A 2004;35:371-6.
[5] N. Venkateshwaran, A. ElayaPerumal, A. Alavudeen, M. Thiruchitrambalam, Mechanical and water absorption behaviour of banana/sisal reinforced hybrid composites, 32 (2011) 4017-4021.
[6] Leandro José da Silva, Túlio Hallak Panzera, Vânia Regina Velloso, André Luis Christoforo, Fabrizio Scarpa, Hybrid polymeric composites reinforced with sisal fibres and silica microparticles, Composites: Part B 43 (2012) 3436-3444.
[7] Kai Yang, Mingyuan Gu, Enhanced thermal conductivity of epoxy nanocomposites filled with hybrid filler system of triethylenetetramine-functionalized multi-walled carbon nanotube/silane-modified nano-sized silicon carbide, Composites: Part A 41 (2010) 215-221.
[8] Se Young Kim, In Sub Han, Sang Kuk Woo, Kee Sung Lee, Do Kyung Kim, Wear-mechanical properties of filler-added liquid silicon infiltration C/C-SiC composites, Materials and Design 44 (2013) 107-113.
[9] M. Ramesha, K. Palanikumar, K. Hemachandra Reddy, Comparative evaluation on properties of hybrid glass fiber-sisal/jute reinforced epoxy composites, Procedia Engineering 51 (2013) 745-750.
[10] R. Velmurugan, V. Manikandan, Mechanical properties of palmyra/glass fiber hybrid composites, Composites: Part A 38 (2007) 2216-2226.
[11] H. Ranganna, N. Karthikeyan, V. Nikhilmurthy, S. Raj Kumar, Mechanical & thermal properties of epoxy based hybrid composites reinforced with sisal/glass fibres, ISSN 2277-7156, 2012.
[12] A. Gowthami, K. Ramanaiah, A.V. Ratna Prasad, K. Hema Chandra Reddy, K. Mohana Rao (2012), Effect of silica on thermal and mechanical properties of sisal fiber reinforced polyester composites, Journal of Materials and Environmental Science, vol. 4(2), pp. 199-204.
[13] Hemalata Jena, Mihir Ku. Pandit, Arun Ku. (2012), Study the impact property of laminated bamboo-fibre composite filled with cenosphere, International Journal of Environmental Science and Development, vol. 3(5), pp. 456-459.
[14] Sandhyarani Biswas, Alok Satapathy, Amar Patnaik, Effect of ceramic fillers on mechanical properties of bamboo fiber reinforced epoxy composites, pp. 1-6.
[15] S. Husseinsyah and M. Mostapha @ Zakaria (2011), The effect of filler content on properties of coconut shell filled polyester composites, Malaysian Polymer Journal, vol. 6, pp. 87-9.








Stress Analysis of an Aircraft Wing with a Landing Gear Opening Cutout of the Bottom Skin
Kiran Shahapurkar¹, Ravindra Babu G²
¹Lecturer, Department of Mechanical Engineering, Jain College of Engineering, Belgaum, Karnataka, India
²Assistant Professor, Department of Mechanical Engineering, Sahyadri College of Engineering and Management, Mangalore, India
E-mail- kiranhs1588@gmail.com

Abstract: In this paper, the work addresses a stiffened panel with a landing gear opening cutout of a typical transport aircraft wing. Cut-outs required for fuel access and the landing gear in the bottom skin of the wing introduce stress concentrations, because the lower part of the wing is subjected to tensile force due to the upward bending of the wing during flight. The stress analysis of the landing gear opening cutout is carried out. This identifies the locations of high tensile stress, which are the potential sites for fatigue crack initiation.
Keywords: aircraft, stiffened panel, wing, cut-out, stress concentration, stress analysis, fatigue.

1. INTRODUCTION
An aircraft is a vehicle that is able to fly by gaining support from the air. It needs to be strong and stiff enough to withstand the exceptional circumstances under which it has to operate. The main sections of an aircraft, the fuselage, tail and wing, determine its external shape. The load-bearing members of these main sections, those subjected to major forces, are called the airframe.
Cut-outs are essential in airframe structures to provide the following:
- Fuel access cutouts at the bottom skin of the wing and fuselage.
- Inspection access for maintenance (medium-sized cut-outs called hand holes).
- Landing gear opening and retraction at the bottom skin of the wing or fuselage.
- Lightening holes in webs.
- Window cutouts in the fuselage.
- Accessibility for final assembly and maintenance (e.g., man holes in wing lower surfaces, crawl holes in wing ribs, etc.).
Airframe engineers view any cut-outs in an aircraft with disfavor, because these cut-outs not only increase the overall cost of the aircraft and add weight due to the reinforcement incurred in compensating for them, but also act as stress raisers due to the sudden or abrupt change in area. These stress raisers are a problem for both static and fatigue strength.
Fatigue is a phenomenon associated with variable loading, or more precisely with cyclic stressing or straining of a material. Just as human beings get fatigued when a specific task is repeatedly performed, metallic components subjected to variable loading get fatigued, which leads to their premature failure under specific conditions. Fatigue cracks are most frequently initiated at sections of a structural member where changes in geometry, e.g., holes, notches or sudden changes in section, cause stress concentration.
This paper addresses these issues for a typical transport aircraft wing. As the aircraft takes off, the wing bends upwards; this is because the wing supports the total weight of the aircraft. Therefore the stress analysis of the wing under the various load distributions to which the airframe is going to be subjected is carried out. This identifies the locations of high tensile stress, which are the potential sites for fatigue crack initiation.
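The severity of such a stress raiser can be quantified for the classical case of a circular hole, which is also the case used later to validate the FEM approach: the Kirsch solution gives a stress concentration factor Kt = 3 at the edge of a hole in an infinite plate under remote tension. The finite-width polynomial below is the common Peterson-chart fit on a net-section basis, quoted as a textbook approximation rather than a result of this paper, and the nominal stress is an assumed example value.

def kt_circular_hole(d_over_w=0.0):
    # Peterson-chart fit for a circular hole in a finite-width strip
    # (net-section basis); d_over_w = 0 recovers the Kirsch value Kt = 3.
    r = d_over_w
    return 3.0 - 3.14 * r + 3.667 * r**2 - 1.527 * r**3

sigma_nominal = 100.0   # assumed net-section stress, MPa (example only)
for ratio in (0.0, 0.2, 0.4):
    kt = kt_circular_hole(ratio)
    print("d/W = %.1f: Kt = %.2f, sigma_max = %.1f MPa"
          % (ratio, kt, kt * sigma_nominal))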


2. GEOMETRICAL CONFIGURATION
The geometric modeling of the stiffened panel with the landing gear opening cut-out is carried out using SolidWorks 2013. The stiffened panel consists of a panel with integrated stiffeners and a cut-out with rivet holes. The number of stiffeners used in the stiffened panel is six. The SolidWorks model of the stiffened panel with the landing gear opening cut-out is shown in Fig 1 below.

Fig 1 : CAD model of stiffened panel with landing gear opening cut out
3. MATERIAL SPECIFICATION
Aluminum alloys have a low density; while their tensile properties are low compared to steels, they have excellent strength-to-weight ratios. The 2024 alloys have excellent fracture toughness and a slow crack growth rate as well as good fatigue life. They are most widely used in the lower skin of the aircraft, because during flight the lower skin undergoes fatigue loading due to the cyclic tensile stresses acting on it.
The material considered for the landing gear cutout part is Al 2024-T3, with the following properties:
- Young's Modulus, E = 72 GPa
- Poisson's Ratio, ν = 0.33
- Density = 27.7 kN/m³
- Yield Strength = 362 MPa
- Ultimate Strength = 483 MPa

4. LOAD ACTING ON THE STIFFENED PANEL WITH LANDING GEAR OPENING CUTOUT
The aircraft considered is an 8-seater civilian transport aircraft, and the load case is the level flight load at maximum speed.
Span of the wing = 7060 mm = 7.06 m
Total weight of the aircraft = 1800 kg = 17.658 kN
Design load factor = 3g
Therefore the load acting on the aircraft = 5400 kg = 52.974 kN
Factor of safety = 1.5
Ultimate load acting on the aircraft = 8100 kg = 79.461 kN


According to the distribution of the lift load between the wing and the fuselage, the wing experiences 80% of the lift load and the remaining 20% is experienced by the fuselage.
Total load acting on the wings = 80% of 79461 N = 0.8 × 79461 = 63568.8 N
Load acting on each wing = 31784.4 N = 31.7844 kN

Fig 2 : Wing structure with landing gear opening cutout
The resultant load acts at a distance of 3130 mm from the root of the wing (from aerodynamic calculations).
The maximum bending moment at the wing root = 31784.4 × 3130 = 99.485 × 10⁶ N-mm
The bending moment at the root of the wing box = 31784.4 × 2930 = 93.12 × 10⁶ N-mm
The load at the tip of the wing box = 93.12 × 10⁶ / 1200 = 77607 N
The total edge length of the cutout where the load is applied = 1800 mm
Total UDL applied to the component = 77607 / 1800 = 43.115 N/mm
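The load build-up above can be reproduced step by step; g = 9.81 m/s² is the only assumption added.

g = 9.81                                # m/s^2
weight_n = 1800 * g                     # total aircraft weight: 17658 N
limit_load_n = 3 * weight_n             # 3g design load factor: 52974 N
ultimate_n = 1.5 * limit_load_n         # factor of safety 1.5: 79461 N
wing_share_n = 0.8 * ultimate_n         # wings carry 80% of lift: 63568.8 N
per_wing_n = wing_share_n / 2           # 31784.4 N per wing

bm_root = per_wing_n * 3130             # ~99.485e6 N-mm at the wing root
bm_box = per_wing_n * 2930              # ~93.128e6 N-mm at the wing-box root
tip_load = bm_box / 1200                # ~77607 N at the wing-box tip
udl = tip_load / 1800                   # ~43.115 N/mm along the cutout edge
print(round(bm_root), round(bm_box), round(tip_load), round(udl, 3))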

5. FINITE ELEMENT ANALYSIS
5.1 Introduction
Today the finite element method (FEM) is considered one of the well-established and convenient techniques for the computer solution of complex problems in different fields of engineering: civil engineering, mechanical engineering, nuclear engineering, biomedical engineering, hydrodynamics, heat conduction, geo-mechanics, etc. From another perspective, FEM can be viewed as a powerful tool for the approximate solution of differential equations describing different physical processes.
The success of FEM is based largely on the basic finite element procedures used: the formulation of the problem in variational form, the finite element discretisation of this formulation, and the effective solution of the resulting finite element equations. These basic steps are the same whichever problem is considered and, together with the use of the digital computer, present a quite natural approach to engineering analysis.

5.2 Finite element model of a stiffened panel with landing gear opening cutout
The stiffened panel with landing gear opening cutout is first modeled in SolidWorks 2013 and then imported into the software in which finite element meshing and analysis are carried out. The software used for the analysis here is MSC NASTRAN. Finite element meshing is carried out for all the components of the stiffened panel.
Fig 3 shows the details of the finite element mesh generated on each part of the structure using MSC NASTRAN.

Fig 3 : Finite element meshing of stiffened panel with LG opening cutout
6. LOADS AND BOUNDARY CONDITIONS
The loads and boundary conditions are applied as follows. The boundary condition fixes one end of the stiffened panel by constraining all 6 degrees of freedom. To avoid bending, the translation in the direction perpendicular to the stiffened panel, i.e. the z-direction, is constrained for all nodes of the stiffened panel. A uniformly distributed load of 43.115 N/mm is applied at the other end of the stiffened panel. The loads and boundary conditions applied to the finite element model of the stiffened panel with landing gear opening cutout are shown in Fig 4.

Fig 4: Loads and boundary conditions applied to the stiffened panel with landing gear opening cutout

7. STRESS AND DISPLACEMENT ANALYSIS OF A STIFFENED PANEL WITH LANDING GEAR OPENING
CUTOUT
Once the loads and boundary conditions are applied to the finite element model, stress analysis is carried out to find the region of maximum stress concentration. The maximum displacement of the stiffened panel is also found from the analysis. Fig 5 shows the stress analysis and Fig 7 shows the displacement analysis of the stiffened panel with landing gear opening cutout.
From the analysis, the maximum stress obtained is 42.99 N/mm² and the maximum displacement is 2.5 mm for the applied boundary condition and a uniformly distributed load of 43.115 N/mm. Fig 6 shows the maximum stress region; in this model the maximum stress concentration occurs at a rivet hole, as shown in the figure.
From the maximum stress concentration region, we can say that a crack will initiate at the maximum stress location and propagate perpendicular to the applied load direction.

Fig 5: Stress analysis of the stiffened panel with LG opening cutout

Fig 6: Maximum Stress location in the stiffened panel cutout



Fig 7 : Displacement contour of the stiffened panel with landing gear opening cutout

8. RESULTS AND DISCUSSIONS
The stress contour indicates a maximum stress of 42.99 N/mm² at the landing gear opening cutout of the wing bottom skin, as shown in
Fig 6. The maximum stress value obtained is within the yield strength of the material. The point of maximum stress is the possible
location of crack initiation in the structure due to fatigue loading.
9. CONCLUSIONS
- Stress analysis of the landing gear cutout of the wing bottom skin is carried out and the maximum tensile stress is found.
- An FEM approach is followed for the stress analysis of the landing gear cutout of the wing bottom skin. A validation of the FEM
approach is carried out by considering a plate with a circular hole.
- A maximum tensile stress of 42.99 N/mm² and a maximum displacement of 2.5 mm are observed in the landing gear cutout of the wing
bottom skin.
- The maximum tensile stress acts near the rivet holes, which are the stress raisers. A fatigue crack normally
initiates from the location of maximum tensile stress in the structure; if these stresses go undetected, they may lead to
sudden catastrophic failure and result in loss of life.
REFERENCES:
[1]. Grigory I. Nesterenko, Service life of airplane structures, Central Aerohydrodynamic Institute (TsAGI), Russia, 2002.
[2]. A. Rama Chandra Murthy, G. S. Palani and Nagesh R. Iyer, Damage tolerant evaluation of cracked stiffened panels under fatigue loading, Sadhana, Vol. 37, Part 1, February 2012, pp. 171-186.
[3]. Adarsh Adeppa, Patil M. S. and Girish K. E. (2012), Stress Analysis and Fatigue Life Prediction for Splice Joint in an Aircraft Fuselage through an FEM Approach, International Journal of Engineering and Innovative Technology (IJEIT), Vol. 1, No. 4, pp. 142-144.

[4]. J. C. Newman, Jr., Advances in fatigue and fracture mechanics analyses for aircraft structures, Mechanics and Durability Branch, NASA Langley Research Center, Hampton, VA, USA.
[5]. A. Brot et al., The damage-tolerance behaviour of integrally stiffened metallic structures, Israel Annual Conference on Aerospace Sciences, 2008.
[6]. F. H. Darwish, G. M. Atmeh, Z. F. Hasan (2012), Design Analysis and Modelling of a General Aviation Aircraft, Volume 6, Number 2, ISSN 1995-6665, pp. 183-191.
[7]. Michael Chung Yung Niu, Airframe Structural Design, Conmilit Press Ltd, 1989, pp. 90-117; Michael Bauccio (1993), ASM Metals Reference Book, 3rd Edition, ASM International, Materials Park, OH.
[8]. Fawaz, S. A. and Börje Andersson, Accurate Stress Intensity Factor Solutions for Unsymmetric Corner Cracks at a Hole, Proc. of the Fourth Joint NASA Conference on Aging Aircraft, vol. 15 (2000), pp. 135-139.
[9]. C. S. Kusko, J. N. Dupont, A. R. Marder, Influence of stress ratio on fatigue crack propagation behavior of stainless steel welds, Welding Journal, vol. 19 (2004), pp. 122-130.
[10]. N. Ranganathan, H. Aldroe, F. Lacroix, F. Chalon, R. Leroy, A. Tougui, Fatigue crack initiation at a notch, International Journal of Fatigue, vol. 33 (2011), pp. 492-499.
[11]. Newman, J. C., A crack opening stress equation for fatigue crack growth, International Journal of Fracture, vol. 24 (2003), pp. 131-135.
[12]. Lance Proctor et al., Local analysis of fastener holes using the linear gap technology using MSC/NASTRAN, presented at MSC Aerospace Users Conference, 2000, pp. 1-24.














Catalytic Decomposition of Hydrocarbon Gas over Various Nanostructured Metal oxides
for Hydrocarbon Removal and Production of Carbon Nanotubes
M. H. Khedr a,d, M. I. Nasr b, K. S. Abdel Halim b, A. A. Farghali a,d, N. K. Soliman c,*
a Chemistry Department, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt
b Central Metallurgical Research & Development Institute (CMRDI), Minerals Technology, 87 Helwan, Cairo, Egypt
c Basic Science Department, Faculty of Oral and Dental Medicine, Nahda University (NUB), Beni-Suef, Egypt
d Material Science and Nanotechnology Department, Faculty of Postgraduate Studies for Advanced Science, Beni-Suef University, Beni-Suef, Egypt
* Corresponding author: Assistant Lecturer N. K. Soliman (E-mail: nofal_kh7@yahoo.com)
Abstract: Nanosized CuO-CeO2, CuO-Fe2O3, CuO-CeO2-Al2O3 and CuO-Fe2O3-CeO2-Al2O3 were prepared by co-precipitation and
wet impregnation techniques and were used for the catalytic decomposition of acetylene to produce carbon nanotubes (CNTs). The weight
gain technique was used to follow the catalytic reactions. The results revealed that catalyst chemical composition, catalytic
temperature, acetylene flow rate and catalyst weight have a significant effect on the acetylene conversion percent. It was found that
the maximum acetylene conversion percent occurs over CuO-CeO2-Fe2O3-Al2O3 and CuO-Fe2O3, and that it increases with increasing
temperature from 400-600 °C, decreasing acetylene flow rate from 150-50 ml/min and increasing catalyst weight from 0.25-1 g. With
further increase in catalyst weight, the acetylene conversion % decreases. Scanning electron microscope (SEM) images show that some
catalyst particles are observed at the tips of the CNTs, indicating that their formation proceeds by a tip-growth mechanism.
The prepared samples and CNTs were characterized by X-ray diffraction, inductively coupled plasma atomic emission
spectroscopy (ICP-AES), BET surface area measurement, transmission electron microscopy (TEM) and SEM.
Keywords - metal oxides, nanocrystallite, microstructure, hydrocarbon gas, acetylene, chemical vapor deposition, carbon nanotubes

1. Introduction
Unburned hydrocarbons (HC) are among the main pollutants released from internal combustion engines and cause many
environmental and health problems [1, 2]; for example, volatile organic compounds (VOCs) contribute to the formation of ground-level
ozone and to ozone depletion, and they act as greenhouse gases [3, 4]. Decomposition of hydrocarbon gases into their constituents is considered
one of the most important ways to remove such gases. Transition metal oxides show very high catalytic activity toward the
decomposition of hydrocarbon gases to carbon nanotubes [5-7], which have novel properties that have led to realistic possibilities of
using them in many applications [8-16].
CNTs have been synthesized by many techniques, such as plasma-enhanced deposition, arc discharge, laser ablation, and chemical
vapor deposition (CVD) of hydrocarbon gases (methane, ethane and acetylene) at relatively high temperature over nanocatalysts
[17-19]. CVD has been used extensively for the production of large-scale carbon nanotubes with
extraordinary properties at high yield, and it can produce CNTs at much lower temperature and cost [20-22].
The effect of temperature on the kinetics of acetylene decomposition over freshly reduced iron oxide nanoparticles for the
production of carbon nanotubes was studied by Khedr et al. [6]. The results showed that both the crystal size of the iron oxide and the catalytic
decomposition temperature strongly affect the percentage yield of deposited carbon. The percentage yield of the produced CNTs
increased with decreasing catalyst crystal size from 150 to 35 nm, and with increasing acetylene decomposition temperature up to a certain
limit, after which it decreased.
The present work is designed to synthesize various nanostructured metal oxide catalysts for the decomposition of
acetylene to produce carbon nanotubes. The influence of different factors such as catalyst chemical composition, catalytic
temperature, acetylene flow rate and weight of catalyst on the rate of the catalytic reaction was investigated using the weight gain technique.
The experimental data were used to clarify the mechanism of the reaction.
2. Experimental
2.1. Catalyst preparation experiments
Nanocrystalline CuO-CeO2 with molar ratio 5:95 and CuO-Fe2O3 with molar ratio 1:1 were
successfully prepared by a co-precipitation route. The metal precursor solutions (cerium(III) sulphate, copper(II) sulphate and iron(II)
nitrate) with the required molar ratios were co-precipitated using potassium hydroxide as the precipitating agent. The precipitating agent
was added dropwise to the precursor solutions during ultrasonication for 0.5 hr, then the precipitate was washed with distilled water
and ethanol, dried at 105 °C and finally fired at 500 °C for about 3 hrs.
Nano-sized CuO-CeO2 supported on Al2O3 was prepared by the wet impregnation technique [6], where a catalyst of the composition
40% CuO-CeO2 : 60% Al2O3 was prepared as follows: a suspension of nanosized CuO-CeO2 was mixed with Al2O3 powder and stirred
for 1 hr at 60 °C to form a paste and to achieve a homogeneous impregnation of catalyst in the support. The impregnate was then dried
in an oven at 100 °C for 1 hr and calcined at 400 °C for 3 hrs in a box muffle furnace.
Nano-sized CuO-Fe2O3-CeO2-Al2O3 with molar ratio 0.26 : 0.19 : 0.25 : 0.3 was prepared by physical mixing of a solid mixture
of one mole of CuO-Fe2O3 with one mole of CuO-CeO2-Al2O3.
The prepared catalysts were characterized using X-ray phase analysis, BET surface area measurement (Quantachrome NOVA Automated Gas Sorption System), inductively coupled plasma atomic emission spectroscopy (ICP-AES), scanning electron microscopy and transmission electron microscopy.
Phase identification and crystallite size of the products were determined using an X-ray diffraction instrument, where the
crystallite size D was calculated using Scherrer's equation [23]:

D = 0.9 λ / (B cos θ)

where B is the full width at half maximum (in radians), λ is the X-ray wavelength, and θ is the angle of diffraction.
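As a quick numerical illustration of this formula (a minimal Python sketch of ours; the peak values below are hypothetical, and Cu K-alpha radiation is assumed since the wavelength is not stated here):

    # Crystallite size from Scherrer's equation, D = 0.9 * lambda / (B * cos(theta)).
    import math

    def scherrer(fwhm_deg, two_theta_deg, wavelength_nm=0.15406):  # Cu K-alpha assumed
        B = math.radians(fwhm_deg)               # FWHM must be in radians
        theta = math.radians(two_theta_deg / 2)  # diffraction angle is half of 2-theta
        return 0.9 * wavelength_nm / (B * math.cos(theta))

    # hypothetical peak: FWHM = 0.65 deg at 2-theta = 28.5 deg gives about 12.6 nm,
    # the same order as the CeO2 crystallite sizes reported in Table 1
    print(round(scherrer(0.65, 28.5), 1))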
2.2. Hydrocarbon decomposition experiments
For each acetylene decomposition experiment, approximately 0.5 g of catalyst was introduced into a cylindrical alumina cell
closed at one end and placed in the central region of a longitudinal furnace.
To determine the most effective catalyst, i.e. the one giving the highest acetylene conversion % (the highest carbon yield), the
catalysts were heated to 600 °C and carbon nanotubes (CNTs) were synthesized at this temperature by flowing 100 ml C2H2 : 700 ml N2.
Decomposition of acetylene over the different catalysts was followed using the weight gain technique. The efficiency of
the prepared catalysts was determined and correlated with the operating parameters: reaction temperature, catalyst
chemical composition, acetylene flow rate, and catalyst weight. The effect of growth temperature on the acetylene conversion % was
also examined for the most effective catalysts at temperatures ranging from 400 to 600 °C. The synthesized
CNTs were cooled in N2 flow and the weight of deposited CNTs was determined by weight gain. The catalytic activity of each
catalyst was measured via the acetylene conversion %:

Acetylene conversion % = (Wt / Wc) × 100

where Wt is the weight of carbon deposited at time t and Wc is the total weight of carbon in the hydrocarbon supply passed over the metal
oxide catalysts in 30 minutes. The effects of acetylene flow rate and catalyst weight were investigated using acetylene inlet
flow rates ranging from 50 to 150 ml/min and catalyst weights ranging from 0.25 to 2 g over the most effective catalyst,
by carrying out the acetylene decomposition reactions at 600 °C.
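A small sketch of this bookkeeping (ours, not the paper's code): Wt comes directly from the weight gain, while Wc is estimated here from the acetylene feed via the ideal-gas molar volume, which is one plausible reading of "total weight of carbon in the supply".

    # Acetylene conversion % from weight-gain data.
    def carbon_in_feed(flow_ml_min, minutes=30, molar_volume_l=22.4):
        """Grams of carbon fed as C2H2; assumes ideal-gas molar volume at STP."""
        moles_c2h2 = (flow_ml_min / 1000.0) * minutes / molar_volume_l
        return moles_c2h2 * 24.0    # two carbon atoms x 12 g/mol per C2H2 molecule

    def conversion_percent(deposit_g, flow_ml_min, minutes=30):
        Wc = carbon_in_feed(flow_ml_min, minutes)
        return 100.0 * deposit_g / Wc

    # hypothetical run: 0.96 g of carbon deposited from a 50 ml/min feed in 30 min
    print(round(conversion_percent(0.96, 50), 1))   # ~59.7 %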
The synthesized CNTs were identified and characterized using X-ray phase analysis, scanning electron microscopy and transmission electron microscopy.
3. Results and discussions
3.1. Characterization of catalysts
The results of phase identification and crystallite size measurements of the prepared catalysts are summarized in Table 1 and
Fig. 1. The XRD patterns of the CuO-Fe2O3 catalysts show that there is no interaction between CuO and Fe2O3 and that no copper ferrite peaks

were detected. Fig. 1(b, c) shows the XRD patterns of the CuO-CeO2 catalysts washed with ethanol. CeO2 of the fluorite-type oxide structure was
observed in all catalysts [24] and no shift in the diffraction lines of CeO2 could be observed in these catalysts, indicating that
no solid solutions appeared in the CuO-CeO2 catalyst [25]. The XRD results also indicate that using Al2O3 as a support helps in decreasing
the crystal size of the prepared catalysts, and that the addition of KOH as precipitating agent and ethanol as dehydrating agent inhibits
the grain growth of CuO-CeO2 particles, yields nano-structured catalysts (crystal size = 12.7 nm) and increases the surface areas of
the catalysts (58.3 m²/g for CuO-CeO2), as summarized in Table 1. We also found no observable XRD peaks corresponding to CuO in
the CuO-CeO2 catalysts. This could indicate that the doped CuO was well dispersed on the CeO2 surface, or that it is amorphous or
present in an amount (about 5%) undetectable by XRD.














Fig. 1. XRD patterns for (a) CuO-Fe2O3, (b) CuO-CeO2, (c) CuO-CeO2-Al2O3, where 1, 2, 3 and 4 denote CuO, Fe2O3, CeO2 and Al2O3 respectively (intensity in a.u. versus 2-theta scale).
Table 1. Crystal sizes and surface areas of the prepared catalysts

Catalyst          Phases   Crystal size (nm)   Surface area (m²/g)
CuO-Fe2O3         CuO      28.7                15.7
                  Fe2O3    13.6
CuO-CeO2-Al2O3    CuO      nd                  58.3
                  CeO2     11.7
                  Al2O3    7
CuO-CeO2          CuO      nd                  nd
                  CeO2     12.7

(nd means not determined)
The results obtained by ICP indicate that the catalysts were well prepared, with the proper metal-to-metal ratios. The results are
summarized in Table 2.

Table 2. ICP-AES results of the prepared catalysts

Catalyst   Chemical composition             ICP result ratios
1          50% CuO + 50% Fe2O3              51% CuO + 49% Fe2O3
2          5% CuO + 95% CeO2                5.1% CuO + 94.9% CeO2
3          2% CuO + 38% CeO2 + 60% Al2O3    2% CuO + 39.1% CeO2 + 58.9% Al2O3

Fig. 2 and Fig. 3 show TEM and SEM images of the CuO-CeO2 supported on Al2O3 catalyst and the CuO-Fe2O3 catalyst, from which
we can see that the particles are well dispersed and have a regular spherical morphology. The powder was mostly formed of
homogeneous grains, and the CuO-Fe2O3 sample appears more condensed than CuO-CeO2-Al2O3; this is confirmed by the pore size
distribution shown in Fig. 4. The data obtained from the BET surface area apparatus show that the total pore volume is 2.9 × 10⁻² cc/g,
the micropore volume 6.6 × 10⁻² cc/g and the average pore diameter 0.02 μm in the case of CuO-CeO2-Al2O3, while in the case of CuO-Fe2O3
the total pore volume is 7.5 × 10⁻³ cc/g, the micropore volume 1.3 × 10⁻² cc/g and the average pore diameter 0.019 μm.











Fig. 2. TEM images of (a) CuO-CeO2-Al2O3, (b) CuO-Fe2O3

Fig. 3. SEM images of (a) CuO-CeO2-Al2O3, (b) CuO-Fe2O3











Fig. 4. Relation between pore width and pore volume for (A) CuO-CeO2-Al2O3, (B) CuO-Fe2O3
3.2. Hydrocarbon decomposition
Hydrocarbon isothermal decomposition experiments indicate that acetylene gas (at a flow rate of 50 ml/min) decomposes
catalytically to carbon and hydrogen at 600 °C according to the following equation [6]:

C2H2 = H2 + 2C

The acetylene molecules adsorb on the surface of the catalyst; a weak bond is then formed between the catalyst and the
carbon atoms of the acetylene molecules, the bonds between the carbon atoms of the acetylene molecules are elongated, and finally the
C-C triple bond and the C-H bonds are broken. The carbon atoms attach together and the hydrogen atoms combine, forming carbon
nanotubes and hydrogen gas molecules respectively [6].
3.2.1. Effect of catalyst composition
Catalytic decomposition tests were carried out in the simulated reactor to study the effect of catalyst composition on the
removal of acetylene by the catalytic decomposition reaction. The decomposition tests were investigated isothermally through the
decomposition of acetylene to carbon nanotubes over the prepared samples at 600 °C and an acetylene flow rate of 50 ml/min,
using the weight gain technique.
Table 3 shows the effect of catalyst composition on the catalytic decomposition of acetylene. It is found that the CuO-CeO2 and
CuO-CeO2-Al2O3 catalysts have no catalytic activity toward acetylene decomposition, because there is no interaction between
the metal oxides and ceria, as observed from the XRD results [17]. A good conversion percent is
observed with CuO-CeO2-Fe2O3-Al2O3; this may be attributed to the fact that metal oxide catalysts supported on alumina possess
certain acidic sites on which the catalytic decomposition of hydrocarbons proceeds, so the catalytic activity of the
prepared catalysts toward the decomposition of acetylene increases [26]. The small crystallite size of the catalysts also enhances the
synthesis of dense, long, narrow-diameter CNTs [27] by increasing the number of active sites at the catalyst surface, which in
turn enhances the acetylene decomposition reaction into carbon nanotubes and hydrogen. Accordingly, the acetylene conversion %
increases [6].
Table 3. Effect of catalyst composition on the catalytic decomposition of acetylene

Catalyst                HC conversion %
CuO-CeO2                No catalytic activity
CuO-CeO2-Al2O3          No catalytic activity
CuO-Fe2O3               53.4 %
CuO-Fe2O3-CeO2-Al2O3    68 %

SEM images of the CNTs produced over both CuO-Fe2O3 and CuO-Fe2O3-CeO2-Al2O3 at 600 °C are shown in Fig. 5. A high
density of CNTs is observed in all the samples. The SEM observations also suggest that the carbon nanotube length ranges from
one to several μm. Several long CNTs are observed scattered through all the samples, and there is
a tendency towards the formation of CNT structures of larger diameter at higher temperature. This is in line with other studies and
may be attributed to the agglomeration of metal oxide crystallites at higher reaction temperatures to form larger, non-uniform
metallic clusters which are responsible for the growth of thicker CNTs [28]. Some catalytic nanoparticles are also observed at the tips
of the carbon nanotubes, indicating that CNT formation occurs by a tip-growth mechanism.













Fig. 5. SEM images of CNTs produced over (a) CuO-CeO2-Fe2O3-Al2O3, (b) CuO-Fe2O3
TEM images of the CNTs produced over both CuO-Fe2O3 and CuO-Fe2O3-CeO2-Al2O3 are shown in Fig. 6. Graphitic
structures with a central channel (CNTs) are observed, and the carbon nanotubes are thicker over CuO-Fe2O3,
which has the larger crystal size, in agreement with the SEM images.























Fig. 6. TEM images of CNTs produced over (a) CuO-CeO2-Fe2O3-Al2O3, (b) CuO-Fe2O3

The produced CNTs were also investigated by XRD, as shown in Fig. 7 for the CuO-Fe2O3 and CuO-Fe2O3-CeO2-Al2O3 samples
after decomposition of acetylene. The XRD patterns cannot differentiate between CNTs and other similar graphite-like structures, since
the diffraction peaks of CNTs and graphite are very close to each other [29], but they provide primary evidence of the graphite
formed. The XRD patterns of the catalysts after acetylene decomposition show two major peaks, one near 2θ = 26° and
one near 2θ = 43.5°, for graphite, indicating the well-graphitized nature of the CNTs. The other major peaks are due to catalytic
impurities: Fe3O4, CuO, Ce2O3, CuFe2O4 phases and Al2O3. The lower metal oxides are formed by the reduction of the metal oxides
by acetylene and by the hydrogen formed from the decomposition of acetylene into hydrogen and CNTs.
























Fig. 7. XRD patterns of CNTs after decomposition of acetylene at 600 °C over (a) CuO-Fe2O3-CeO2-Al2O3, (b) CuO-Fe2O3

3.2.2. Kinetics and mechanism of acetylene decomposition
To study the kinetics and mechanism of acetylene decomposition over CuO-Fe2O3 and CuO-Fe2O3-CeO2-Al2O3, a series of
decomposition experiments was carried out in the temperature range 400-600 °C. Fig. 8 shows the effect of temperature on the
conversion % of acetylene; the catalytic temperature was found to be one of the most important factors controlling the
efficiency of the catalytic reaction. Two modes of decomposition rate can be observed for the CuO-Fe2O3-CeO2-Al2O3 catalyst (Fig. 8b). The
first occurs at the lower decomposition temperatures, 400 and 450 °C, where conversion percents of 31% and 36.6% were recorded,
respectively. Increasing the temperature to 500 and 600 °C results in a significant increase in the conversion percent, and maximum
percentage yields of 59.6 and 68% were observed at 500 and 600 °C, respectively.
As time proceeds, the acetylene conversion % increases; the rate of conversion is high initially and slows down towards the end
of the reaction.
Fig. 9 summarizes the effect of temperature on the acetylene conversion percent over Fe2O3-CuO and CuO-CeO2-Fe2O3-Al2O3.




[Residue of the Fig. 7 plots; recoverable peak key: (a) 1 = graphite, 2 = CuO, 3 = Al2O3, 4 = Fe3O4, 5 = CuFe2O4, 6 = Ce2O3; (b) 1 = graphite, 2 = Fe3O4, 3 = CuO; axes: intensity (a.u.) versus 2-theta scale.]

















Fig. 8. Effect of temperature on the catalytic decomposition of acetylene over (a) CuO-Fe2O3, (b) CuO-Fe2O3-CeO2-Al2O3 (acetylene conversion % versus time at 400-600 °C)












Fig. 9. Relationship between the acetylene decomposition temperature and the HC conversion % over (a) CuO-Fe2O3, (b) CuO-Fe2O3-CeO2-Al2O3

Fig. 10 represents the Arrhenius plot for CNT synthesis, from which the activation energy values were calculated. The small
activation energies, 11.9 and 17.2 kJ/mol for Fe2O3-CuO and CuO-CeO2-Fe2O3-Al2O3 respectively, indicate that the two catalysts
are very active toward acetylene decomposition.









Fig. 10. Arrhenius plot of CNT synthesis over (a) CuO-Fe2O3, (b) CuO-Fe2O3-CeO2-Al2O3 (ln(rate constant) versus 1/T (K))
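A minimal sketch of how such an activation energy is extracted from an Arrhenius fit, ln k = ln A - Ea/(RT) (ours; the rate constants below are made-up values for illustration, not the paper's data):

    # Activation energy from a linear fit of ln k against 1/T.
    import numpy as np

    R = 8.314                                   # gas constant, J/(mol*K)
    T = np.array([673.0, 723.0, 773.0, 873.0])  # 400-600 C expressed in kelvin
    k = np.array([5.5, 6.4, 7.4, 9.8])          # hypothetical rate constants (a.u.)

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    Ea = -slope * R / 1000.0                    # slope = -Ea/R, report in kJ/mol
    print(f"Ea = {Ea:.1f} kJ/mol")              # ~14 kJ/mol for these made-up values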

3.2.3. Effect of acetylene flow rate
The effect of the acetylene gas flow rate over 0.25 g of the most active catalyst on the acetylene conversion % is given in Table 4 and
Fig. 11. It is apparent from the table that the acetylene conversion % increases as the acetylene flow rate decreases. This may be
attributed to the fact that at higher acetylene flow rates the CNT yield is very high at the beginning of the reaction, covering and
poisoning the active sites on the catalyst surface, so that the acetylene conversion percent decreases.
Table 4. Effect of gas flow rate on acetylene decomposition over CuO-Fe2O3-CeO2-Al2O3

Gas flow rate      150 ml/min   100 ml/min   50 ml/min
HC conversion %    30 %         50 %         60 %










Fig. 11. Histogram of the HC conversion % versus acetylene flow rate over CuO-Fe2O3-CeO2-Al2O3 catalysts at 600 °C

3.2.4. Effect of catalyst weight
The effect of the weight of the CuO-CeO2-Fe2O3-Al2O3 catalyst on the acetylene conversion %, using a 50 ml/min gas flow rate, is given in
Table 5 and Fig. 12.
Table 5. Effect of catalyst weight on acetylene decomposition over CuO-Fe2O3-CeO2-Al2O3

Catalyst weight    0.25 g   0.5 g   1 g    2 g
HC conversion %    60 %     68 %    77 %   42 %



















Fig. 12. Effect of the weight of CuO-CeO2-Fe2O3-Al2O3 on hydrocarbon conversion % using a 50 ml/min gas flow rate
It is apparent that the catalytic activity increases with increasing weight from 0.25 to 1 g, because this metal loading favours the
formation of CNTs with the highest resistance to oxidation and also introduces more catalytically active sites into the system. With further
increase in the catalyst weight, the CNTs formed are more readily oxidized, so the carbon deposited, and consequently the conversion
percent, decrease [7].
Conclusion
Nanosized CuO-CeO2, CuO-Fe2O3, CuO-CeO2-Al2O3 and CuO-Fe2O3-CeO2-Al2O3 were prepared by co-precipitation and wet
impregnation techniques and were used for the catalytic decomposition of acetylene to produce carbon nanotubes (CNTs). The weight gain
technique was used to follow the catalytic reactions. The results revealed that catalyst chemical composition, catalytic temperature,
acetylene flow rate and catalyst weight have a significant effect on the acetylene conversion percent. It was found that the maximum
acetylene conversion % occurs over CuO-CeO2-Fe2O3-Al2O3 and CuO-Fe2O3, and that it increases with increasing temperature from
400-600 °C, decreasing acetylene flow rate from 150-50 ml/min and increasing catalyst weight from 0.25-1 g. With further increase in
catalyst weight, the acetylene conversion % decreases. The SEM images show catalytic nanoparticles at the tips of
the carbon nanotubes, indicating that CNT formation occurs via a tip-growth mechanism. The results show that nanocrystalline
CuO-Fe2O3-CeO2-Al2O3 can be recommended as a promising catalyst for hydrocarbon decomposition.

REFERENCES:
[1] Quantification of the disease burden attributable to environmental risk factors (global assessment).

[2] C. Mathers, G. Stevens, M. Mascarenhas, Global health risks: mortality and burden of disease attributable to selected major risks, World Health Organization, 2009.
[3] H. Rodhe, Science 248 (1990) 1217-1219.
[4] B.J. Finlayson-Pitts, J.N. Pitts, Science 276 (1997) 1045-1051.
[5] M.F. Zwinkels, S.G. Järås, P.G. Menon, T.A. Griffin, Catalysis Reviews: Science and Engineering 35 (1993) 319-358.
[6] M. Khedr, K. Abdel Halim, N. Soliman, Applied Surface Science 255 (2008) 2375-2381.
[7] T. Tsoufis, P. Xidas, L. Jankovic, D. Gournis, A. Saranti, T. Bakas, M.A. Karakassides, Diamond and Related Materials 16 (2007) 155-160.
[8] S. Iijima, Nature 354 (1991) 56-58.
[9] J. Robertson, Materials Today 7 (2004) 46-52.
[10] Y. Zhang, Y. Bai, B. Yan, Drug Discovery Today 15, 428-435.
[11] K. Gong, Y. Yan, M. Zhang, L. Su, S. Xiong, L. Mao, Analytical Sciences: The International Journal of the Japan Society for Analytical Chemistry 21 (2005) 1383-1393.
[12] A. Hassanien, M. Tokumoto, Y. Kumazawa, H. Kataura, Y. Maniwa, S. Suzuki, Y. Achiba, Applied Physics Letters 73 (1998) 3839-3841.
[13] A.Y. Kasumov, R. Deblock, M. Kociak, B. Reulet, H. Bouchiat, I. Khodos, Y.B. Gorbatov, V. Volkov, C. Journet, M. Burghard, Science 284 (1999) 1508-1511.
[14] R. Rakhi, K. Sethupathi, S. Ramaprabhu, Carbon 46 (2008) 1656-1663.
[15] H.D. Lim, K.Y. Park, H. Song, E.Y. Jang, H. Gwon, J. Kim, Y.H. Kim, M.D. Lima, R.O. Robles, X. Lepró, Advanced Materials 25, 1348-1352.
[16] Y.-S. Kim, K. Kumar, F.T. Fisher, E.-H. Yang, Nanotechnology 23, 015301.
[17] W.-P. Dow, Y.-P. Wang, T.-J. Huang, Applied Catalysis A: General 190 (2000) 25-34.
[18] Y. Ando, X. Zhao, T. Sugai, M. Kumar, Materials Today 7 (2004) 22-29.
[19] M. Perez-Cabero, E. Romeo, C. Royo, A. Monzón, A. Guerrero-Ruiz, I. Rodríguez-Ramos, Journal of Catalysis 224 (2004) 197-205.
[20] T. Ebbesen, H. Lezec, H. Hiura, J. Bennett, H. Ghaemi, T. Thio, (1996).
[21] H. Dai, J.H. Hafner, A.G. Rinzler, D.T. Colbert, R.E. Smalley, Nature 384 (1996) 147-150.
[22] J. Cheng, X. Zhang, Z. Luo, F. Liu, Y. Ye, W. Yin, W. Liu, Y. Han, Materials Chemistry and Physics 95 (2006) 5-11.
[23] B.D. Cullity, S.R. Stock, Elements of X-ray Diffraction, Prentice Hall, Upper Saddle River, NJ, 2001.
[24] C.R. Jung, J. Han, S. Nam, T.-H. Lim, S.-A. Hong, H.-I. Lee, Catalysis Today 93 (2004) 183-190.
[25] G. Avgouropoulos, T. Ioannides, H. Matralis, Applied Catalysis B: Environmental 56 (2005) 87-93.
[26] S. Li, C. Wu, H. Li, B. Li, Reaction Kinetics and Catalysis Letters 69 (2000) 105-113.
[27] A. Fazle Kibria, Y. Mo, K. Nahm, M. Kim, Carbon 40 (2002) 1241-1247.
[28] K. Kuwana, H. Endo, K. Saito, D. Qian, R. Andrews, E.A. Grulke, Carbon 43 (2005) 253-260.
[29] W. Zhu, D. Miser, W. Chan, M. Hajaligol, Materials Chemistry and Physics 82 (2003) 638-647.









Designing a Human Computer Interface System's Tasks and Techniques
M. Mayilvaganan 1, D. Kalpana Devi 2
1 Associate Professor, Department of Computer Science, PSG College of Arts and Science, Coimbatore, Tamil Nadu, India
2 Research Scholar (PhD), Department of Computer Science, PSG College of Arts and Science, Coimbatore, Tamil Nadu, India
E-mail: mayilvaganan.psgcas@gmail.com

Abstract: This paper focuses on the goal of a Human Computer Interface system, which is used to analyze the skill of a human user
interacting with the system. The skill can be evaluated systematically through problem solving, assessment activity, creativity and other
processes, based on metrics. This can be used to develop an idea for a software tool to evaluate the skill of the human user through usability metrics.
Keywords: Interaction techniques and tasks, interaction style, goal of Human Computer Interface, function of user interface,
design phase model, natural language, cognitive model
INTRODUCTION
Cognition is the act or process of knowing: an intellectual process by which knowledge is gained from perception or ideas.
Cognitive models have been developed to examine the development of skill levels from novice to expert. The scope of knowledge covers
accumulated information, problem-solving schemas, performance skills, expertise, memory capacity, problem representation ability,
abstraction and categorization abilities, synthesis skills, long-term concentration ability, motivation, efficiency and accuracy.
Skill levels of computer users are among the most important factors that affect their performance in computer-based
tasks. Accomplishing a task with a minimum outlay of time and effort is essential to the skilled performance of that task. Skills are
learned with practice and experience. Novice users perform tasks by recognition, i.e. they use knowledge in the world to plan and
accomplish tasks, whereas skilled users use knowledge in the head to accomplish tasks.
Human Computer Interface (HCI) describes the communication between the user and the system. The major goal is to improve
the interface between users and computers, and specifically to analyse the skill factor of the human user systematically by means
of cognitive models:
- Designing the best possible interface with respect to usability metrics, for evaluating the skill set and ability of the human user in an optimal way.
- Implementing the methods in interface design through cognitive models.
- Evaluating and comparing interfaces.
- Developing new interaction techniques with the interface.
- Developing descriptive and predictive models and theories of interaction.
INTERACTION TECHNIQUES AND TASKS
The basic interaction techniques are the ways in which input devices are used to enter information into the computer; high-quality
user interfaces are in many ways the last frontier in providing computing to a wide variety of users. The four basic interaction tasks
are Position, Text, Select and Quantify.
The unit of information input in a position interaction task is, of course, a position. A selection task, for example, can be carried out
by using a mouse to select items from a menu, using a keyboard to enter the name of the selection, pressing a function key, or using a
speech recognizer. A mouse is often used for both positioning and selection.
Interaction tasks are different from the logical input devices. The error rate measures the number of user errors per interaction. The
error rate affects both the speed of learning and the speed of use: if it is easy to make mistakes with the system, learning takes longer and
the speed of use is reduced, as the user must correct every mistake.
The goal is to teach user interface designs that serve human needs while building feelings of competence, confidence and
satisfaction. Topics include formal models of people and interactions, collaborative design issues, psychological and philosophical
design considerations, and cultural and social issues.

INTERACTION STYLES
After completing the task analysis and identifying the task objects, the interface designer [7] chooses a primary interaction style.
Direct manipulation presents task concepts directly; it allows easy learning and ease of use, avoids errors, encourages exploration and
affords high subjective satisfaction. The operations are invoked by actions performed on the visual representations, typically using a
mouse. Commands are not invoked explicitly by such traditional means as menu selection or the keyboard; rather, the commands are
implicit in the actions on the visual representation.
Menu selection shortens learning, reduces keystrokes, structures decision making, permits dialog-management tools and
allows easy support of error handling. Menu selection is the interaction style in which the user reads a list of items, selects the item
most appropriate to his/her task and observes the effect.
In the form fill-in interaction style, data entry is easy to arrange, modest training is required, convenient assistance is given, prompting
increases efficiency, and the use of form-management tools facilitates development. The user sees blank data
entry fields that can be filled by moving the cursor among the fields and entering the required data. In form fill-in, the user or operator must know
the labels of the different fields, the permissible data and the method of data entry.
The command language style is flexible to use, appeals to power users, supports user initiative and allows convenient creation of
user-defined macros. It provides a strong feeling of control for frequent users. In this style the user must learn the syntax for
his/her task; an example is the MS-DOS prompt, where the user's commands are executed to perform his/her task. The major drawbacks
of this interaction style are:
- The error rate is typically high.
- Retention may be poor.
- Training is necessary.
- Error messages and online assistance are difficult to provide.
Natural language interaction relieves the burden of learning syntax and is therefore extremely desirable. It can be defined as the operation of performing tasks
on computers by people using a familiar natural language to give commands and receive responses.
In this interaction style the computer system is trained in such a way that it performs the user's task on commands given through
the human voice. The main advantage of this interaction style is that there is no need to learn a command syntax or to select items from
menus. The problems with natural language interaction lie not only in computer implementation but also in its desirability for a large number
of users performing a huge number of tasks.
Designing a User Interface
In HCI, design methodologies aim to create user interfaces that are easy to use and efficient to operate. Designers apply
rigorous and systematic user interface design techniques to develop designs for hardware and software.
The role of the interface designer is to create a software device that organizes the content and presents it on the
screen [1]. The three areas involved are information design, interaction design and media design.
According to Ben Shneiderman [5], the golden rules of interface design make an interface designer's life easier and pave the
way to the creation of a successful interface:
- A good interface should be able to perform effectively the task for which it is designed.
- It should be highly interactive in nature.
- It should be flexible to use and should provide high usability.
- It should increase the speed of learning.
- It should reduce the burden on the user's memory.
- It should maintain consistency across applications.
- It should make its inner working transparent to users.
- It should ensure that the user is always in control.
- It should provide feedback to user actions. Error messages should use polite language and suggest rectification.
- The transition from beginner to expert should be fast and smooth.
- There should be a minimal requirement for training.
- Error recovery should be easy for the user; the system should not halt on simple errors.
- The features of the application should be easy to identify, and there should be an online help system.
- There should be few errors and the system should be predictable.
- There should be guidance messages showing the current system status and the progress of the last command.
Function of User Interface






Fig. 2. Function of the user interface (diagram: user input, help and error flows between the user, the interface and the computer system)
Good interfaces are designed to match existing trends and to feed on the knowledge of the user [6]. Figure 2 shows the function of the
interface. Experience with a wide variety of systems and applications suggests that the use of sophisticated computer technology in
the human interface, such as computer displays, controls and operator aids, raises serious issues related to the way humans operate,
troubleshoot and maintain such systems.
HCI DESIGN MODELS
The main disciplines contributing to HCI are ergonomics and human factors, computer science, language psychology, sociology,
ethnography, semiotics and branching, and software engineering and design. The waterfall model, the star model, and iterative design
(the circular model) are generally used for designing user interfaces.








Fig. 3. A system lifecycle for system interface design (stages: investigate end-user requirements; analyse tasks; design interface prototype; perform analytic evaluation; usability test with real users; redesign)
Figure 3 shows that these models are also used to design entire software systems [9], but with some differences in the phases and the transitions among
them. The important phases are requirements analysis, task analysis, evaluation of prototypes, implementation, operation and
maintenance. These basic software development phases are used to develop the user interface model for
analyzing the expertise of the human user.

CONCLUSION
In this paper, it is concluded that a Human Computer Interface system should be capable of being learned by new target
users so that tasks can be performed without undue delay. Through the HCI design and models, the human factors expert determines the
exact task, the context and the important characteristics of the user. The evaluator then mentally walks through the actions necessary,
attempting to predict the user's behaviour, the problems likely to be encountered and the strategies used to solve them.

REFERENCES:
[1] B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition), Pearson/Addison-Wesley, Boston (2004).
[2] B.A. Myers, A brief history of human-computer interaction technology, ACM Interactions, 5(2), pp. 44-54 (1998).
[3] J. Nielsen, Usability Engineering, Morgan Kaufmann, San Francisco (1994).
[4] Card, S., Moran, T., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ, USA: Lawrence Erlbaum.
[5] Stephen Hill, The Human Computer Interface, Thomson Learning, 1995.
[6] International Organization for Standardization [ISO]. (1999, May). Human-centered design processes for interactive systems (Standard No. 13407). Geneva, Switzerland: ISO.
[7] Mayhew, D.J. (1992). Principles and Guidelines in Software User Interface Design. Englewood Cliffs, NJ, USA: Prentice Hall.
[8] Rosson, M., & Carroll, J. (2002). Usability Engineering: Scenario-Based Development of Human-Computer Interaction. San Francisco: Morgan Kaufmann.
[9] Coronado, J., Casey, B., 2004. A multicultural approach to task analysis: capturing user requirements for a global software application. In: Diaper, D., Stanton, N.A. (Eds.), The Handbook of Task Analysis for Human-Computer Interaction. Lawrence Erlbaum Associates, pp. 179-192.
[10] Card, S.K., Moran, T.P., and Newell, A., 1983. The Psychology of Human-Computer Interaction. Lawrence Erlbaum, Hillsdale, NJ.
















Survey on Web Image Mining
Kumari Priyanka Sinha 1, Praveen Kumar 1
1 Indian Institute of Information Technology
E-mail: Priyankasinha2008@gmail.com
Abstract: In this paper a literature survey on web image mining is presented. Web image mining is a technique for searching, retrieving and
accessing data from images. There are two types of web image mining techniques, i.e. text-based web image mining and image-based web
image mining. The objective of this paper is to present the tools and techniques used in past and current work. We also present a
chart comparing all previously available techniques, together with a summarized report of overall development in web image mining.

Keywords: Web image mining, accountability, image retrieval, data mining, web image, cloud computing and mining.
INTRODUCTION
In the field of information technology (IT), a new buzzword has emerged: cloud computing. It is described as the
future, and everyone, it is said, should move into the so-called cloud. Cloud computing has generated significant interest in both
academia and industry, but it is still an evolving paradigm. Essentially, it aims to consolidate the economic utility model with the
evolutionary development of many existing approaches and computing technologies, including distributed services, applications,
and information infrastructures consisting of pools of computers, networks, and storage resources. Confusion exists in IT
communities about how a cloud differs from existing models and how these differences affect its adoption. Some see a cloud as a
novel technical revolution, while others consider it a natural evolution of technology, economy, and culture [1]. Nevertheless,
cloud computing is an important paradigm, with the potential to significantly reduce costs through optimization and increased
operating and economic efficiencies [1], [2]. Furthermore, cloud computing could significantly enhance collaboration, agility, and
scale, thus enabling a truly global computing model over the Internet infrastructure. However, without appropriate security and
privacy solutions designed for clouds, this potentially revolutionizing computing paradigm could become a huge failure. Several
surveys of potential cloud adopters indicate that security and privacy are the primary concerns for its adoption.
TAXONOMY OF CLOUD COMPUTING

Cloud is an "X as a Service" (XaaS) offering, where X may be software, hardware, platform, infrastructure, data, business, etc. [3]. The taxonomy is more than
a definition of fundamentals: it provides a framework for understanding current cloud computing offerings and suggests what is to
come. In a cloud computing system we have to address fundamentals such as virtualization, scalability, interoperability, quality of service,
failover mechanisms and the cloud delivery models within the context of the taxonomy. Our main idea behind this taxonomy is to identify
the fundamentals of cloud computing.
CLOUD SERVICE MODELS

Software as a service (SaaS): In SaaS, the cloud provider enables and provides application software as an on-demand service. Because
clients acquire and use software components from different providers, crucial issues include securely composing them and ensuring
that information handled by these composed services is well protected.
Platform as a service (PaaS): PaaS enables programming environments to access and utilize additional application building blocks.
Such programming environments have a visible impact on the application architecture, such as constraints on which services the
application can request from an OS. For example, a PaaS environment might limit access to well-defined parts of the file system,
thus requiring a fine-grained authorization service.
Infrastructure as a service (IaaS): In this service, the cloud provider supplies a set of virtualized infrastructural components such as
virtual machines (VMs) and storage, on which customers can build and run applications. The application will eventually reside on the
VM and the virtual operating system. Issues such as trusting the VM image, hardening hosts, and securing inter-host communication
are critical areas in IaaS.
Hardware as a service (HaaS): According to Nicholas Carr, this is the idea of buying IT hardware, or even an entire data center, as a
pay-as-you-go subscription service that scales up or down to meet your needs. It has become practical as a result of rapid advances in
hardware virtualization, IT automation, and usage metering and pricing. This model is advantageous to
enterprise users, since they do not need to invest in building and managing data centers.

CLOUD DEPLOYMENT MODELS

Public: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization
selling cloud services [4]. Public clouds are external, publicly available cloud environments that are accessible to multiple tenants.
Private: The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or a third party,
and may exist on-premises or off-premises [3]. Private clouds are typically tailored environments with dedicated virtualized resources
for particular organizations.
Community: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns
(e.g., mission, security requirements, policy, or compliance considerations). It may be managed by the organizations or a third party
and may exist on-premises or off-premises [4]. Community clouds are tailored for particular groups of customers.
Hybrid: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities
but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for
load balancing between clouds) [5]. It is a combination of any two or more cloud deployment models.

IV. VIRTUALIZATION MANAGEMENT

Virtualization is the technology that abstracts the coupling between the hardware and the operating system. It is used to abstract logical
resources from their underlying physical resources in order to improve agility and flexibility and to reduce costs. There are several types of
virtualization [6], such as server virtualization, storage virtualization and network virtualization. In a virtualized environment,
computing resources can be dynamically created, expanded, shrunk or moved as demand varies. Virtualization is therefore well suited to a
dynamic cloud infrastructure, providing sharing, manageability and isolation.

V. FAULT TOLERANCE
Whenever there is a backup instance of an application which is ready to take over without disruption in case of failure, this is called
failover [7]. Fault tolerance is the feature of distributed computing by which a system provides its intended service despite the failure of
some of its components. Unlike isolated instances that are deployed in a silo structure, multi-tenancy is a large community
hosted by the provider. This is only practical when the applications are stable, reliable, customizable, secure, and upgradeable,
which the provider usually handles. It can be viewed from two different perspectives, the client's and the provider's.
The clients could use a public cloud service or actually be part of the organization that is hosting the cloud, but would still be part of
the infrastructure. The provider's view is that multi-tenancy allows providers to achieve economies of scale, availability,
operational efficiency and delivery of applications to multiple users.
Service Level Agreement: A Service Level Agreement (SLA) [8] is in general a legally binding agreement about a service a client is
buying from a Service Provider (SP). The agreement is part of a much bigger contract between the two partners that defines the
purchased service. The levels included frame how the service should be delivered, and failure to follow the agreement is
usually followed by a penalty, which should also be defined in the agreement. In utility computing, SLAs are necessary to control the
use of computing resources (mainly in utility-based or on-demand services). The three principles are the main concerns when dealing
with information security, and each principle requires different security mechanisms for its enforcement. For cloud computing to
be considered secure, these are the principles it has to live up to. To enforce these principles there are different mechanisms that
can be applied; the mechanisms here are drawn from a blog called Continuity Disaster.

VI. RISK
Isolation failure: The failure of mechanisms separating storage, memory, routing and even reputation between different tenants [9].
Compliance risk: Investment in achieving certification may be put at risk by moving to the cloud.
Management interface compromise: The customer management interfaces of public cloud providers are accessible through the
Internet and mediate access to larger sets of resources, which poses an increased risk.
Data protection: The ability of the customer to check the data handling practices of the cloud provider and to ensure that the data is
treated in a lawful manner.
Insecure or incomplete data deletion: A customer requests that their data be deleted, but it is not completely removed because of
duplication.
Abuse and nefarious use of cloud computing: Easy access and lack of control over who is using cloud computing can provide an
entrance for malicious people.
Insecure interfaces and APIs: Authentication and reusable access tokens/passwords have to be properly managed or security issues will
arise.
Malicious insider: Lack of insight into the cloud provider's employees can create risk if an employee has malicious intent and access
to information he/she should not have.
Shared technology issues: With scalability come shared technology issues, since the provider uses its own resources to provide
more for the clients during peaks. With shared technology comes the risk around hypervisors, since hypervisors sit between different clients.
Data loss and leakage: Improper deletion or backup of data records can lead to unwanted duplication of data that remains available
when it should not exist.
Account or service hijacking: Phishing for credentials to get access to sensitive data.
Unknown risk profile: No insight into what the provider does to keep your data safe, or into updates, patches, etc.
VII. SECURITY AND PRIVACY
Cloud computing is a new computing model whose system architecture and service deployment differ from the
traditional computing model. Traditional security policies are therefore not able to respond to the emergence of new cloud computing
security issues. We review the security and privacy implications and challenges of cloud computing [10].
VIII. SECURITY AND PRIVACY CHALLENGES
Cloud computing environments are multi-domain environments in which each domain can use different security, privacy, and
trust requirements and employ various mechanisms, interfaces, and semantics. Such domains could represent individually enabled
services or other infrastructural or application components. It is important to leverage existing research on multi-domain policy
integration and secure service composition to build a comprehensive policy-based management framework in cloud computing
environments.



CONCLUSION
The core objective of this research was to explore the taxonomy of cloud computing along with its security and privacy challenges. There are
many open issues regarding cloud computing, but the security and privacy risks are enormous. Enterprises looking into cloud
computing technology as a way to cut costs and increase profitability should seriously analyze the security and privacy risks of
cloud computing. A taxonomy of cloud computing gives researchers and developers an idea of current cloud systems, the hype and the
challenges; it also provides the information needed to evaluate and improve existing and new cloud systems. We examined the security and privacy
implications alongside the existing challenges. The strength of cloud computing is the ability to manage risk more effectively from a
centralized point of view: security updates and new patches can be applied more effectively. The weaknesses include issues such
as the security and privacy of business data hosted in third-party data centers, lock-in to a platform,
reliability/performance concerns, and the fear of making the wrong decision before the industry begins to mature. Enterprises should
verify and understand cloud security and its benefits with future scope, carefully analyze the security issues involved and plan for
ways to resolve them before implementing the technology. Some pilot tools should be set up and good governance should be put in place
to effectively deal with security issues and concerns. Adoption should be planned and gradual over a period of time. It has been
identified that the areas of standardization and interoperability especially need to evolve. Virtualization and hypervisor technology is still nascent;
much more experimentation with it is needed in order to provide services such as IaaS (Infrastructure as a Service). This means
virtualization is the next key issue for cloud computing.
The security addressed by the taxonomy only considers security measures between the client and the cloud. An important addition to
the taxonomy will be to also consider the security mechanisms used within the cloud.

REFERENCES:
I. Cloud Security Alliance, "Security Guidance for Critical Areas of Focus in Cloud Computing v2.1".
II. D. Catteddu and G. Hogben, "Cloud Computing: Benefits, Risks and Recommendations for Information Security", ENISA, 2009.
III. http://en.wikipedia.org/wiki/Cloud_computing
IV. Brandl, D., "Don't cloud your compliance data", 2010.
V. "Gathering Clouds of XaaS!", http://www.ibm.com/developer
VI. Bhaskar Prasad Rimal and Ian Lumb, "A Taxonomy and Survey of Cloud Computing Systems", 2009.
VII. National Institute of Standards and Technology (NIST), http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
VIII. CSA (2009, December), "Security Guidance for Critical Areas of Focus in Cloud Computing v2.1", Cloud Security Alliance.
IX. http://cloudtaxonomy.opencrowd.com
X. B. D. James and Hassan Takabi, "Security and Privacy Challenges in Cloud Computing Environment", 2010.










Fast Approximation Based Combinatorial Optimization Algorithm
Harsimran Kaur¹, Neetu Gupta²
¹Research Scholar (M.Tech), ECE Deptt, GIMET
²Asst. Prof, ECE Deptt, GIMET
E-mail: er.harsimrankaur@gmail.com
Abstract - Label cost optimization proposes a new improvement to the label cost function, improving the existing moves of the α-expansion algorithm and introducing some new moves for it. To compare performance, different metrics of energy minimization have been considered, and an appropriate comparison has been drawn between the proposed technique, i.e. the fast approximation algorithm, and previous well-known techniques. The objective is to effectively optimize energies so that satisfactory image segmentation can be obtained (represented with different labels corresponding to different objects). A new combinatorial optimization algorithm is proposed which shows promising experimental results with the new moves, and which we believe could be used in any context where α-expansions are currently employed.

Keywords - Energy minimization, labels, α-expansion, segmentation, NP-hard, local minimum, non-parametric histogram
INTRODUCTION

Energy minimization is of strong practical and theoretical importance to computer vision. Energy expresses our criteria for a good solution (low energies are good, high energies are bad) independent of any algorithm [2]. Algorithms are, however, hugely important in practice. Even for low-level vision problems we are confronted by energies that are computationally hard (often NP-hard) to minimize. As a consequence, a significant portion of computer vision research is dedicated to identifying energies that are useful and yet reasonably tractable. The work in this paper is of precisely this nature. Computer vision is full of labelling problems cast as energy minimization. For example, the data to be labelled could be pixels, interest points, point correspondences, or mesh data such as from a range scanner. Depending on the application, the labels could be either semantic (object classes, types of tissue) or describe geometry/appearance (depth, orientation, shape, texture).
I a. Labeling Problems
A labeling problem is, roughly speaking, the task of assigning an explanatory label to each element in a set of observations. Many classical clustering problems are also labeling problems because each data point is assigned a cluster label. To describe a labeling problem one needs a set of observations (the data) and a set of possible explanations. The labeling problem associates one discrete variable with each datum, and the goal is to find the best overall assignment to these variables (a labeling) according to some criteria. In computer vision, the observations can be things like pixels in an image, salient points within an image, depth measurements from a range scanner, or intensity measurements from CT/MRI. The labels are typically either semantic (car, pedestrian, street) or related to scene geometry (depth, orientation, shape, texture).
I b. α-expansion algorithm
The α-expansion algorithm has had a significant impact in computer vision due to its generality, effectiveness, and speed. It is commonly used to minimize energies that involve unary, pairwise, and specialized higher-order terms. The main algorithmic contribution here is an extension of α-expansion that also optimizes label costs with well-characterized optimality bounds. Label costs penalize a solution based on the set of labels that appear in it, for example by simply penalizing the number of labels in the solution.
The α-expansion algorithm performs local search using a powerful class of moves. Given an initial labeling f̂ and some particular label α ∈ L, an α-expansion move gives each variable the following binary choice: either keep the current label f̂_p, or switch to label α. Let Mα(f̂) denote the set of all moves (labelings) that can be generated this way; in other words, Mα(f̂) = { f : f_p ∈ {f̂_p} ∪ {α} }.
All variables are simultaneously allowed to keep their current label or to switch, so there is an exponential number of possible moves. For each choice of α we must efficiently find the best possible move. In practice, this sub-problem is solved by casting it as a graph cut (Greig et al., 1989) and using combinatorial algorithms to compute the optimal binary configuration (e.g. Goldberg and Tarjan, 1988; Boykov and Kolmogorov, 2004; Strandmark and Kahl, 2010). Because a graph cut finds the best move from an exponential number of possibilities, the α-expansion algorithm is a very large-scale neighbourhood search (VLSN) technique (Ahuja et al., 2002) and is very competitive in practice (Szeliski et al., 2006).
With respect to some current labelling f̂, the full set of possible expansion moves is M(f̂) = ∪_{α∈L} Mα(f̂). The α-expansion algorithm simply performs local search over the full search neighbourhood M(f̂). Perhaps surprisingly, local search with expansion moves terminates with a labelling f̂ that is within a constant factor of the globally optimal labelling f*.
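A minimal sketch of this outer loop may make the search structure concrete; `energy` and `best_expansion_move` are hypothetical callables (the latter standing in for the graph-cut subproblem), not a specific library API.

```python
def alpha_expansion(labeling, labels, energy, best_expansion_move, max_sweeps=10):
    """Local search over expansion moves, sketched after the description above.

    `energy(f)` evaluates E(f); `best_expansion_move(f, alpha)` is assumed to
    return the optimal move in M_alpha(f) (in practice computed by a graph cut).
    """
    f = labeling.copy()
    best = energy(f)
    for _ in range(max_sweeps):
        improved = False
        for alpha in labels:                           # one sweep over all labels
            candidate = best_expansion_move(f, alpha)  # optimal move in M_alpha(f)
            e = energy(candidate)
            if e < best:                               # accept strict improvements only
                f, best, improved = candidate, e, True
        if not improved:                               # local minimum w.r.t. all expansions
            break
    return f
```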

I c. Different Energy Minimization Algorithms
I c.1. Iterated conditional modes (ICM): ICM [1] uses a deterministic greedy strategy to find a local minimum. It starts with an estimate of the labeling, and then for each pixel it chooses the label giving the largest decrease of the energy function [12]. This process is repeated until convergence, which is guaranteed to occur and in practice is very rapid. Unfortunately, the results are extremely sensitive to the initial estimate, especially in high-dimensional spaces with non-convex energies (such as arise in vision) due to the huge number of local minima. In the experiments, ICM was initialized by assigning each pixel the label with the lowest data cost, which resulted in significantly better performance.
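A compact sketch of ICM on a 4-connected grid, with the data-cost initialization described above; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def icm(D, V, max_iters=100):
    """Iterated conditional modes on a 4-connected grid (illustrative sketch).

    D: (H, W, L) array of data costs; V: (L, L) array of pairwise costs.
    """
    H, W, L = D.shape
    f = D.argmin(axis=2)                  # initial estimate: lowest data cost per pixel
    for _ in range(max_iters):
        changed = False
        for y in range(H):
            for x in range(W):
                costs = D[y, x].copy()    # local energy of each candidate label
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        costs += V[:, f[ny, nx]]
                best = costs.argmin()
                if best != f[y, x]:
                    f[y, x] = best
                    changed = True
        if not changed:                   # converged to a local minimum
            break
    return f
```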
I c.2. Graph cuts: The two most popular graph cut algorithms [4], called the swap move algorithm and the expansion move algorithm, were introduced in [7]. These algorithms rapidly compute a strong local minimum, in the sense that no permitted move will produce a labelling with lower energy. For a pair of labels α, β, a swap move takes some subset of the pixels currently given the label α and assigns them the label β, and vice versa. The swap move algorithm finds a local minimum such that there is no swap move, for any pair of labels α, β, that will produce a lower-energy labelling. Analogously, we define an expansion move for a label α to increase the set of pixels that are given this label. The expansion move algorithm finds a local minimum such that no expansion move, for any label α, yields a labelling with lower energy. The criteria for a local minimum with respect to expansion moves (swap moves) are so strong that there are many fewer minima in high-dimensional spaces compared to standard moves. In the original work of [7] the swap move algorithm was shown to be applicable to any energy where Vpq is a semi-metric, and the expansion move algorithm to any energy where Vpq is a metric. The results of [9] imply that the expansion move algorithm can be used if, for all labels α, β and γ, Vpq(α, α) + Vpq(β, γ) ≤ Vpq(α, γ) + Vpq(β, α). The swap move algorithm can be used if, for all labels α, β, Vpq(α, α) + Vpq(β, β) ≤ Vpq(α, β) + Vpq(β, α). (These constraints come from the notion of regular, i.e. submodular, binary energy functions, which are closely related to graph cuts.) If the energy does not obey these constraints, graph cut algorithms can still be applied by truncating the violating terms [11]. In this case, however, we are no longer guaranteed to find the optimal labeling with respect to swap (or expansion) moves. In practice, this version seems to work well when only relatively few terms need to be truncated.
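The two regularity conditions just quoted are easy to check mechanically for a given pairwise cost matrix; the sketch below is illustrative, with the Potts model as a quick sanity test.

```python
import itertools
import numpy as np

def expansion_applicable(V):
    """Check V(a,a) + V(b,c) <= V(a,c) + V(b,a) for all labels a, b, c."""
    L = V.shape[0]
    return all(V[a, a] + V[b, c] <= V[a, c] + V[b, a]
               for a, b, c in itertools.product(range(L), repeat=3))

def swap_applicable(V):
    """Check V(a,a) + V(b,b) <= V(a,b) + V(b,a) for all labels a, b."""
    L = V.shape[0]
    return all(V[a, a] + V[b, b] <= V[a, b] + V[b, a]
               for a, b in itertools.product(range(L), repeat=2))

# The Potts model (0 on the diagonal, 1 elsewhere) is a metric,
# so both move types apply to it.
V = 1 - np.eye(4)
assert expansion_applicable(V) and swap_applicable(V)
```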

I c.3. Max-product loopy belief propagation (LBP): To evaluate the performance of LBP, we implemented the max-product LBP version, which is designed to find the lowest-energy solution. The other main variant of LBP, the sum-product algorithm, does not directly search for a minimum-energy solution, but instead computes the marginal probability distribution of each node in the graph. The belief propagation algorithm was originally designed for graphs without cycles [13], in which case it produces the exact result for our energy. However, there is nothing in the formulation of BP that prevents it from being tried on graphs with loops. In general, LBP is not guaranteed to converge and may go into an infinite loop switching between two labelings. Felzenszwalb and Huttenlocher [2] present a number of ways to speed up the basic algorithm. In particular, the LBP implementation uses the distance transform method described in [2], which significantly reduces the running time of the algorithm.
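The core step of min-sum LBP (max-product in negative-log space) is the message update; below is a minimal sketch under assumed array conventions, not the distance-transform-accelerated version from [2].

```python
import numpy as np

def message_update(D_p, V, incoming):
    """One min-sum message m_{p->q} over labels (illustrative sketch).

    D_p: (L,) data costs at sender p; V: (L, L) pairwise costs V(l_p, l_q);
    incoming: list of (L,) messages from p's other neighbours (excluding q).
    Returns the (L,) message, normalized so its minimum is zero.
    """
    h = D_p + np.sum(incoming, axis=0)   # aggregate beliefs at the sender
    m = (h[:, None] + V).min(axis=0)     # minimize over the sender's labels
    return m - m.min()                   # normalize for numerical stability
```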
I c.4. Tree-reweighted message passing (TRW): Tree-reweighted message passing [13] is a message-passing algorithm similar, on the surface, to LBP. An interesting feature of the TRW algorithm is that it computes a lower bound on the energy. The original TRW algorithm does not necessarily converge and does not, in fact, guarantee that the lower bound always increases with time. Later research introduced an improved version of TRW, called sequential TRW, or TRW-S. In this version, the lower bound estimate is guaranteed not to decrease, which results in certain convergence properties. In TRW-S we first select an arbitrary pixel ordering function S(p). The messages are updated in order of increasing S(p) and, at the next iteration, in the reverse order. Trees are constrained to be chains that are monotonic with respect to S(p).
This introduction has covered the terminology and techniques used for the label cost approach. The thesis work focuses on improving the label cost function, improving the existing moves of the α-expansion algorithm and introducing some new moves for it. A new technique is used in the α-expansion algorithm to optimize the label cost function and exploit it for better results.

II. RELATED WORK
Anton Osokin [5]: In this paper the authors describe how the α-expansion algorithm has had a significant impact in computer vision due to its generality, effectiveness, and speed. It is commonly used to minimize energies that involve unary, pairwise, and specialized higher-order terms. Their main algorithmic contribution is an extension of α-expansion that also optimizes label costs with well-characterized optimality bounds. Label costs penalize a solution based on the set of labels that appear in it, for example by simply penalizing the number of labels in the solution. The energy has a natural interpretation as minimizing description length (MDL) and sheds light on classical algorithms like K-means and expectation-maximization (EM). Label costs are useful for multi-model fitting, and the authors demonstrate several such applications: homography detection, motion segmentation, image segmentation and compression.
Lena Gorelick et al. [6]: In this paper the authors note that computer vision is full of problems elegantly expressed in terms of energy minimization. They characterize a class of energies with hierarchical costs and propose a novel hierarchical fusion algorithm. Hierarchical costs are natural for modeling an array of difficult problems. As examples, they explain that in semantic segmentation one could rule out unlikely object combinations using hierarchical context, and that in geometric model estimation one could penalize the number of unique model families in a solution, not just the number of models, a kind of hierarchical MDL criterion. Hierarchical fusion uses the well-known α-expansion algorithm as a subroutine, and offers a much better approximation bound in important cases.
Yuri Boykov et al. [7]: In this paper the authors address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function's smoothness term must only involve pairs of pixels. They propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move considered is an αβ-swap: for a pair of labels α, β, this move exchanges the labels between an arbitrary set of pixels labeled α and another arbitrary set
labeled β. The first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move considered is an α-expansion: for a label α, this move assigns an arbitrary set of pixels the label α. The second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy; moreover, this solution is within a known factor of the global minimum. They experimentally demonstrate the effectiveness of the approach on image restoration, stereo and motion.
III. PROBLEM DEFINITION
An image can be segmented by assigning different labels (represented by different colors) to different objects. Label costs penalize a solution based on the set of labels that appear in it, for example by simply penalizing the number of labels in the solution. There should be a sufficient number of labels: too many labels do not represent a good segmentation, as multiple labels may represent subparts of a single object, while with too few labels a single label may represent multiple objects. The label cost can be associated with energy terms (a combination of various energies associated with images, e.g. smoothing energy, bending energy, elastic energy, etc.). Most labeling problems in computer vision and machine learning are ill-posed and in need of regularization, but the most useful regularization algorithms often make the problem NP-hard. The objective is to effectively optimize energies so that satisfactory image segmentation can be obtained (represented with different labels corresponding to different objects). To meet this objective, the first task is to define a label cost function in terms of energies. Unsupervised segmentation is performed to assign labels by clustering simultaneously over pixels and color space using Gaussian mixtures (for color images) and non-parametric histograms (for gray-scale images). A fast approximation based combinatorial optimization algorithm is then implemented to minimize the label cost function and redefine the labels. The α-expansion algorithm is already available for this purpose; this work focuses on improving the label cost function and incorporating elastic energy into the algorithm.
IV. METHODOLOGY
IV.1 Fast approximation based combinatorial optimization algorithm
Label costs: Start by considering a basic (unregularized) energy E(f) = Σ_{p∈P} D_p(f_p), where the optimal f_p can be determined trivially by minimizing over the independent data costs. We can introduce label costs into E(f) to penalize each unique label that appears in f:

E(f) = Σ_{p∈P} D_p(f_p) + Σ_{l∈L} h_l · δ_l(f)

where δ_l(f) indicates whether label l appears in f. The minimum graph cut algorithm performs a graph cut based upon the following objective function over f, combining label, smoothness, data and elastic costs:

E(f) = Σ_{p∈P} D_p(f_p) + Σ_{l∈L} h_l · δ_l(f) + Σ_{(p,q)∈N} V_pq(f_p, f_q) + λ ∫₀¹ |dv/ds|² ds
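To make the combined objective concrete, here is a minimal sketch that evaluates the discrete part of E(f) for a labeling; the array names (D, V, h, edges) are illustrative, and the continuous elastic term is left out since it is defined on a contour rather than on the labeling.

```python
import numpy as np

def energy(f, D, V, h, edges):
    """Evaluate the discrete part of E(f):
    sum_p D_p(f_p) + sum_l h_l * delta_l(f) + sum_{(p,q) in N} V(f_p, f_q).

    f: (P,) labeling; D: (P, L) data costs; V: (L, L) smoothness costs;
    h: (L,) per-label costs; edges: iterable of neighbour pairs (p, q).
    """
    data = D[np.arange(len(f)), f].sum()           # sum_p D_p(f_p)
    label = h[np.unique(f)].sum()                  # h_l for each label actually used
    smooth = sum(V[f[p], f[q]] for p, q in edges)  # pairwise smoothness
    return data + label + smooth
```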
The following steps show how the different components discussed above are used to achieve the objectives.
Step1: Define some label cost function in terms of energies.

Step 2: Unsupervised segmentation is performed to assign labels by clustering simultaneously over pixels and color space using Gaussian mixtures (for color images) and non-parametric histograms (for gray-scale images).
Step 3: The minimum graph cut algorithm is applied to separate two layers of the image. The separated layers are added to a queue.
Step 4: Repeat until the queue is empty.
Step 4a: Pop an element from the queue.
Step 4b: Perform the minimum graph cut algorithm.
Step 4c: If the selected layer is successfully separated further into sub-layers by the minimum graph cut algorithm, then add the sub-layers to the queue; else add the selected layer to the solution list.
Step 5: Assign different labels/colors to the objects present in every element of the solution list (layers). A sketch of this queue-driven procedure is given below.
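Assuming a routine for the binary minimum cut is available, Steps 3-5 can be sketched as follows; `binary_min_cut` and the region representation are hypothetical placeholders, not the paper's implementation.

```python
from collections import deque

def segment_by_recursive_cuts(image_region, binary_min_cut):
    """Sketch of Steps 3-5: repeatedly split layers by minimum graph cut.

    `binary_min_cut(region)` is a hypothetical routine returning two
    sub-layers, or None when the region cannot be usefully split further.
    """
    queue = deque([image_region])      # Step 3: initial layers enter the queue
    solution = []
    while queue:                       # Step 4: repeat until the queue is empty
        layer = queue.popleft()        # Step 4a: pop an element
        split = binary_min_cut(layer)  # Step 4b: attempt a minimum graph cut
        if split is not None:          # Step 4c: push sub-layers back...
            queue.extend(split)
        else:                          # ...or keep the layer as final
            solution.append(layer)
    # Step 5: assign a distinct label/colour to each final layer
    return {label: layer for label, layer in enumerate(solution)}
```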
V. EXPERIMENTAL RESULTS
The experimental setup is essentially the same for each application: generate proposals via random sampling, compute initial data costs Dp, and run the iterative algorithm. The tables below compare running times (in seconds, on a 1.4 GHz Pentium IV) of the selected algorithms for a number of segmentation examples. Note that these times include the min-cut/max-flow computation and the fast approximation based combinatorial optimization algorithm. In each column we show running times of the fast approximation based combinatorial optimization algorithm and of the max-flow/min-cut algorithms corresponding to exactly the same set of seeds. The running times were obtained for the 4- and 8-neighborhood systems (N4 and N8). Switching from N4 to N8 increases the complexity of the graphs but does not affect the quality of the segmentation results much.
Table 1: Comparison of running times (seconds) of various approximation algorithms on 2D examples

Method                  Bell photo (255x313)   Lung CT (409x314)   Liver MR (511x511)
                        N4      N8             N4      N8          N4      N8
DINIC                   2.73    3.99           2.91    3.45        6.33    22.86
H_PRF                   1.27    1.86           1.00    1.22        1.94    2.59
Q_PRF                   1.34    0.83           1.17    0.77        1.72    3.45
Combinatorial approach  0.11    0.19           0.32    0.38        0.30    0.55

Figure 1: Showing segmentation

Figure 2: Color segmentation of labels

Figure 3: Histogram of all color bands

V.1 Comparison of various approximation algorithm results
The following table shows the various approximate methods and related applications:
Table 2: Comparison of various approximation algorithms

Energy minimization case   Algorithm                                  Applications
V metric                   α-expansion and extensions, LP rounding,   approximation bounds, segmentation,
                           r-HST metrics                              model fitting
V semi-metric              αβ-swap, r-HST metrics                     approximation bound
V truncated convex         range moves                                approximation bound
|L| = 2                    QPBO, QPBO bipartite multi-cut             approximation bound log(#non-submodular
                                                                      terms); QPBO gives partial labelings
Arbitrary energy           message passing, decomposition,            NP-hard to approximate by a constant
                           local search                               factor
VI. CONCLUSION
Different metrics of energy minimization were considered for the performance comparison, and an appropriate comparison has been drawn between the proposed technique and previous well-known techniques. The objective is to effectively optimize energies so that satisfactory image segmentation can be obtained (represented with different labels corresponding to different objects). A new combinatorial optimization algorithm has been proposed which shows promising experimental results with the new moves, and which we believe could be used in any context where α-expansions are currently employed.


REFERENCES:
[1] Besag, J.: On the statistical analysis of dirty pictures (with discussion). Journal of the Royal Statistical Society, Series B 48 (1986) 259-302.

[2] Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. In: CVPR (2004) 261-268.

[3] Yuri Boykov and Olga Veksler. Graph Cuts in Vision and Graphics: Theories and Applications. In Nikos Paragios, Yunmei Chen, and Olivier Faugeras, editors, Handbook of Mathematical Models in Computer Vision, pages 79-96. Springer US, 2006.

[4] Energy Minimization using Graph Cuts, 2011. http://rise.cse.iitm.ac.in/wiki/images/0/07/Deepak.pdf

[5] Andrew Delong and Anton Osokin. Fast Approximate Energy Minimization with Label Costs. International Journal of Computer Vision, vol. 96, no. 1, pp. 1-27, January 2012.

[6] Lena Gorelick and Olga Veksler. Minimizing Energies with Hierarchical Costs. International Journal of Computer Vision, vol. 100, no. 1, pp. 38-58, October 2012.

[7] Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222-1239, November 2001.

[8] Vladimir Kolmogorov. An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision. IEEE Transactions on PAMI, vol. 26, no. 9, pp. 1124-1137, September 2004.

[9] Rother, C., Kumar, S., Kolmogorov, V., Blake, A.: Digital tapestry. In: CVPR (2005).

[10] Pearl, J.: Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann (1988).

[11] Wainwright, M.J., Jaakkola, T.S., Willsky, A.S.: MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches. IEEE Trans. Info. Theory 51 (2005).

[12] Richard Szeliski and Ramin Zabih. A Comparative Study of Energy Minimization Methods for Markov Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 6, 2008.






Influence of Macrophyte Types towards Agrochemical Phytoremediation in a
Tropical Environment
Emmy LEMA¹, Revocatus MACHUNDA¹, Karoli Nicholas NJAU¹
¹Nelson Mandela African Institution of Science and Technology (NM-AIST), Department of Water Environmental Science and Engineering, P. O. Box 447, Arusha, Tanzania.
E-mail: lemae@nm-aist.ac.tz
Abstract
The presence of agrochemical wastewater from agricultural fields poses major environmental and human health problems which may be solved by phytoremediation technologies. Phytoremediation is the use of plants to remediate contaminants in the environment. Batch experiments were conducted to evaluate the influence of four aquatic macrophytes (Cyperus papyrus, Typha latifolia, Cyperus alternifolius and Phragmites mauritianus) on the phytoremediation of agrochemicals from simulated wastewater in Arusha, Tanzania. The selected agrochemicals belonged to different categories, namely heavy metal based (Cu, Fe, Mn and Zn) and pesticides (L-cyhalothrin, endosulfan and permethrin). The change in mean concentration of the agrochemicals was described by first-order reaction kinetics. The results indicated that the removal rate constants were greater for the batch experiments planted with the macrophytes than for the control group. Furthermore, the rate of removal varied between the treatments for the different categories of agrochemicals. As far as heavy metals are concerned, Cyperus papyrus had the greater removal of Cu and Fe, with k values of 0.338 d⁻¹ and 0.168 d⁻¹ respectively, while Typha latifolia had the greater removal of Mn and Zn, with k values of 0.420 d⁻¹ and 0.442 d⁻¹ respectively. On the other hand, the pesticides endosulfan and permethrin were most effectively removed by Cyperus papyrus, with k values of 0.086 d⁻¹ and 0.114 d⁻¹ respectively, while L-cyhalothrin was most effectively removed by Typha latifolia, with a k value of 0.116 d⁻¹. Overall, the results demonstrated that aquatic macrophytes can influence the reduction of agrochemicals in wastewater.

Key words: wastewater, pesticides, heavy metals, agriculture, environment, batch reactor system, removal rate constant.
1. Introduction
1.1 Agrochemical pollution
The use of agrochemicals such as chemical fertilizers and pesticides is an integral part of the current agricultural production system around the globe, and their use has been common practice particularly in many nations of the tropical world [7]. In the humid tropics of Africa, these agrochemicals have been used extensively to control pests and diseases affecting crop productivity and to improve soil fertility. In Tanzania, the need to increase crop productivity has led to extensive use of pesticides and fertilizers and the promotion of irrigation in horticultural practices [33]. However, the excessive and indiscriminate use of these agrochemicals creates environmental problems such as contamination of soil and water resources [24].

Pollution by agrochemicals is one of the most significant threats to the integrity of the world's surface waters. In Tanzania, agriculture has been categorized as one of the most polluting industries, releasing effluents containing agrochemicals [34]. The agrochemicals of main ecological concern are heavy metal based fertilizers, fungicides and pesticides, because they are toxic and persistent in the environment and can eventually bio-accumulate to levels that affect human beings [11] and other living organisms. Although heavy metals occur naturally in soils in small quantities, the major sources emanate from micronutrients applied on agricultural fields such as zinc, manganese, molybdenum, iron, nickel, phosphates, aluminium, selenium and copper.

These trace elements are essential for the growth and health of plants but they are highly toxic when the concentration exceeds certain
limits.
Earlier information on the types of pesticides used in Tanzania revealed that different classes of pesticides are used in agriculture, such as organochlorines (endosulfan); organophosphates (chlorpyrifos, dimethoate, profenofos, diazinon and fenitrothion); carbamates (carbofuran, mancozeb, carbaryl and metalaxyl); and pyrethroids (permethrin, cypermethrin, deltamethrin and lambda-cyhalothrin) [24; 25]. Due to their widespread, long-term use and their chemical properties, residues of these pesticides end up in the environment and are being detected in various environmental matrices, including the biota.

Studies done in Tanzania and elsewhere have indicated significant agrochemical contamination of soil and water resources [25; 22; 32; 14]. Application of copper based fungicides has been reported to cause soil contamination by copper [31], and the application of phosphate fertilizers to agricultural soil has led to increases in heavy metals like cadmium, copper, zinc and arsenic [46]. Although some farms in Tanzania treat their wastewater effluents in a suitable way, others lack convenient treatment systems and thus discharge untreated or poorly treated wastewater into the natural environment [25]. The continual discharge of effluents containing these agrochemicals can increase the accumulation of toxic chemicals, thereby threatening aquatic ecosystems and human health [38].

Due to their toxic properties and adverse effects on the environment, several strategies have been developed to remove these contaminants from the environment. Conventional wastewater treatment techniques for the removal of agrochemicals from agricultural runoff include physical and/or chemical treatments such as isolation, containment, coagulation-flocculation, reverse osmosis, ion exchange and electrochemical treatment. However, these technologies are impractical and expensive for developing countries like Tanzania; they often require a large excess of chemicals and generate large volumes of sludge and hazardous by-products that require appropriate and costly disposal. Given these constraints of conventional technologies, phytoremediation methods using aquatic macrophytes are well suited to developing countries because they are environmentally friendly, effective, and cheaper to establish and operate.

1.2 Phytoremediation using aquatic macrophytes
Macrophytes are aquatic plants regarded as an important component of aquatic ecosystems due to their roles in oxygen production, nutrient recycling, water quality control, sediment stabilization and the provision of shelter for aquatic life [36]. Phytoremediation takes advantage of the natural processes of macrophytes and their roles in pollutant removal. These processes include water and chemical uptake, metabolism within the macrophytes, and the physical and biochemical impacts of the root system. Aquatic macrophytes are more suitable for wastewater treatment than terrestrial plants because of their relatively fast growth rate and larger biomass production, higher capability of pollutant uptake and better purification effects due to direct contact with contaminants in water.

The word phytoremediation comes from the Greek word phyto, which means plant, and the Latin word remediation, which means to remove; it refers to a diverse collection of plant-based technologies that use plants to clean up contaminants [9]. Phytoremediation is a relatively new approach and has gained importance during the last two decades [10]. The technique can be applied to both organic and inorganic pollutants [13] present in solid substrates (e.g. soil), liquid substrates (e.g. water) and air [28]. Chemical substances that can be subjected to phytoremediation include metals (Pb, Zn, Cd, Cu, Ni, Hg, etc.), metalloids (As, Sb), inorganic compounds (NO₃⁻, NH₄⁺, PO₄³⁻), radionuclides (U, Cs, Sr), petroleum hydrocarbons (BTEX), pesticides (atrazine, bentazone,
chlorinated and nitroaromatic compounds), explosives (TNT, DNT), chlorinated solvents (TCE, PCE) and industrial organic wastes
(PCPs, PAHs) and landfill leachates [20].

1.3 Phytoremediation mechanisms
There are several mechanisms by which phytoremediation can occur (Figure 1). Each of these mechanisms will have an effect on the
volume, mobility, or toxicity of contaminants, as the application of phytoremediation is intended to do [12].

Figure 1: Phytoremediation through the use of plants.

1.3.1 Phytodegradation or phytotransformation
This is the breakdown (degradation) of contaminants taken up by plants through metabolic processes within the plant, or the breakdown of contaminants surrounding the plant through the effect of enzymes produced by the plant [43]. Phytodegradation has been observed to remediate some organic contaminants, such as chlorinated solvents and herbicides, and it can address contaminants in soil, sediment, or water [12].

1.3.2 Rhizodegradation or phytostimulation
This refers to the breakdown of contaminants within the plant root zone, or rhizosphere, through microbial activity. Microorganisms (yeast, fungi, and bacteria) are enhanced in the rhizosphere because the plant roots release natural substances such as sugars, alcohols, acids, enzymes, and other compounds containing organic carbon that serve as a source of energy and food for microorganisms [8]. The roots also provide additional surface area for microbial growth and aeration. The rhizodegradation process has been investigated and found successful in treating a wide variety of mostly organic chemicals, including petroleum hydrocarbons, polycyclic aromatic hydrocarbons (PAHs), pesticides, etc. [12].


1.3.3 Phytoextraction or phytoaccumulation
This is the uptake of contaminants by plant roots and their translocation/accumulation (phytoextraction) from the soil into the plant's biomass (shoots and leaves). This process occurs when the sequestered contaminants are not degraded in, or emitted from,
the plant rapidly and completely, resulting in an accumulation within the plant tissue [43]. The process involves the removal of
contaminants (metals, radionuclides, and certain organic compounds) from the environment by direct uptake into the plant tissue.

1.3.4 Phytovolatilization
Phytovolatilization is the uptake and transpiration of a contaminant by a plant, with release of the contaminant, or a modified form of it, from the plant to the atmosphere through contaminant uptake, plant metabolism, and plant transpiration. Phytovolatilization has mainly been applied to groundwater, but it can also be applied to soil, sediments, and sludges, and to both organic and inorganic contaminants [12].

1.3.5 Phytofiltration or Rhizofiltration
This is used to remediate surface water, wastewater or groundwater and is defined as the use of plants to absorb, adsorb, concentrate and precipitate contaminants from polluted waters through their roots. The most appropriate plant for a rhizofiltration system is one capable of rapid growth and high root biomass that can remove contaminants from the water at relatively high concentrations [44].

The most important factor in the successful implementation of phytoremediation is the selection of an appropriate plant, which should show high uptake of both organic and inorganic pollutants, grow well in polluted environments and be easily controlled [37; 42]. Careful selection of the plant and plant variety is critical, first to ensure that the plant is appropriate for the climatic and soil conditions at the site, and second for the effectiveness of the phytoremediation of the pollutant at hand [34]. Research experience has demonstrated the feasibility of different macrophyte species for the removal of chemical pollutants from different types of wastewater. These include cattail (Typha sp.) and common reed (Phragmites sp.); vetiver grass (Vetiveria zizanioides) [6; 35]; water hyacinth (Eichhornia crassipes); rye grass (Lolium multiflorium); duckweed (Lemna sp.), etc. However, the majority of the documented work in the literature has been carried out in developed countries under temperate climatic conditions, and performance may differ under tropical conditions in Africa due to climatic factors. The potential for phytoremediation technology in tropical environments is high due to the prevailing climatic conditions, which favour plant growth and stimulate microbial activity [47].

Information on the capability of phytoremediation for agrochemical removal is limited [23], and Tanzania lacks information on potential local plant species that may be used for phytoremediation [34]. Further studies in tropical countries like Tanzania will add more information about the phytoremediation effectiveness of locally available species. The objective of this study was to investigate the influence of different types of macrophytes on agrochemical removal. Knowledge of potential macrophyte plants for agrochemical removal will provide insight for choosing appropriate macrophytes for wetland phytoremediation processes in agricultural environments.
2.0 Materials and Methods
2.1 Site of the study
The research was conducted between October 2013 and March 2014 in a ventilated greenhouse located at the premises of the Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. The site is at an altitude of 1204 m above sea level at coordinates S 03°23.945′, E 036°47.671′. The dominant climate is of the tropical savannah type, with clearly defined rainy and dry seasons.


2.2 Preparation of wastewater
Analytical grade heavy metal salts of copper sulphate (CuSO₄·5H₂O), zinc sulphate (ZnSO₄·7H₂O), manganese sulphate (MnSO₄·4H₂O), iron sulphate (FeSO₄·7H₂O) and aluminium sulphate (Al₂(SO₄)₃·16H₂O), together with the formulated pesticides endosulfan, lambda-cyhalothrin and permethrin, were used to prepare the artificial wastewater by dilution with tap water to a final concentration of 5 ppm. These initial concentrations were chosen to simulate typical concentrations reported in runoff from horticultural farms and were also at levels detectable by the analytical instruments. A worked example of the salt mass required for such a dilution is sketched below.
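As an illustration of the dilution arithmetic (not a procedure from the paper), the mass of hydrated salt needed for a target metal concentration scales with the salt-to-metal molar mass ratio; the batch volume below is a hypothetical example value.

```python
# Illustrative calculation: mass of CuSO4.5H2O needed to prepare V litres of
# a 5 ppm Cu solution, where 5 ppm is taken as 5 mg of Cu per litre.
M_CU = 63.546    # molar mass of Cu (g/mol)
M_SALT = 249.69  # molar mass of CuSO4.5H2O (g/mol)

def salt_mass_mg(target_ppm, volume_l, m_metal, m_salt):
    """Mass of hydrated salt (mg) supplying `target_ppm` mg/L of the metal."""
    metal_mg = target_ppm * volume_l      # total metal mass required
    return metal_mg * (m_salt / m_metal)  # scale by the salt/metal mass ratio

# e.g. a hypothetical 100 L batch of 5 ppm Cu needs about 1965 mg of salt
print(round(salt_mass_mg(5.0, 100.0, M_CU, M_SALT)))
```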

2.3 Experimental setup and operation
The experimental system was bucket-reactor based and consisted of 15 plastic buckets and a 500 L bulk tank for wastewater storage. The plastic bucket reactors had a capacity of 100 litres and were filled with gravel of porosity 0.3, giving a total working volume of 30 L. Healthy young seedlings of Cyperus papyrus, Typha latifolia, Phragmites mauritianus and Cyperus alternifolius with similar biomass were collected from natural wetlands in Arusha and planted into 12 buckets, while 3 unplanted buckets were set up as controls (Figure 2). The experiment was conducted in triplicate. These macrophytes were selected on the basis of local availability and because they grow well in tropical regions. The macrophytes were watered daily with tap water and occasionally enriched with Hoagland solution as a source of nutrients. An acclimatization period of 3 months was observed, during which the plants appeared green and healthy, with newly grown shoots (Figure 3). Prior to the start of the experiment, sewage diluted with water (1:1) and glucose at 22.5 ppm was applied to the system for seven days to establish bacterial inoculation and the generation of biofilms on the surface of the gravel. Thereafter, artificial wastewater from the 500 L storage tank was fed into the system at the start of the batch experiment. During the whole experimental period, the water volume was kept constant by adding tap water to compensate for water lost through evapotranspiration [41].

Figure 2: Planting duration in October 2013.



Figure 3: After three months of macrophyte establishment in January 2014
2.4 Sampling and measurement
2.4.1 Heavy metal based agrochemicals
Sampling was done as per the standard methods specified in [3]. The wastewater was collected in 250 ml polyethylene sampling bottles at 9 am on every sampling day. Sampling was done at initial start-up (day 0) and on days 1, 4, 8, 12 and 16. All samples were filtered using 0.45 μm filters (Whatman filter papers), preserved by acidifying with analytical grade HNO₃ to pH < 2 and kept at 4 °C. The concentrations of heavy metal based agrochemicals in the wastewater samples were analysed by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) (Horiba Jobin Yvon, France) with detection limits of 0.01 ppm for Al, Cu, Zn, Mn and Fe.

2.4.2 Pesticide agrochemicals
Effluent water samples were collected using standard methods as described by [2]. Effluent water was sampled before the start of the experiment (day 0), on day 1 and every four days thereafter, at about 9 am on every sampling day. Upon reaching the laboratory, the samples were immediately extracted by the liquid-liquid extraction (LLE) method [40]. The 1 L unfiltered water sample was quantitatively transferred into a 2 L separating funnel and the sampling bottle rinsed with 60 ml hexane:acetone (1:1). The rinsate was then mixed with the sample in the separating funnel and the combined contents were extracted successively with hexane:acetone 1:1 (3 × 60 ml). The organic phase was filtered through a plug of glass wool containing anhydrous sodium sulphate (ca. 20 g) for drying and drawn into an Erlenmeyer flask. The aqueous layer was repeatedly extracted with a mixture of hexane:acetone (1:1 v/v, 60 ml) as above. After the extraction procedure, the extract was concentrated to 2 ml using a rotary evaporator at 40 °C and the final volume adjusted to 1 ml by evaporation under a gentle stream of nitrogen gas. The water extracts appeared clean and were not subjected to further clean-up; they were stored in a -5 °C freezer ready for GC/MS analysis. Analysis of pesticides was done using gas chromatography (Agilent Technologies 7890A GC System with autosampler 7683B series injector) coupled with mass spectrometry (Agilent Technologies 5975C inert XL EI/CI MSD with Triple-Axis Detector). The GC/MS analysis parameters and operating conditions were as follows: helium was used as carrier gas at a flow rate of 1.2 ml/min; the oven temperature programme started at 50 °C held for 1 min, rose at 10 °C/min to 160 °C and held for 5 minutes, and finally rose at 3 °C/min to 300 °C and held for 18.5 min. The temperature of the injection port was 250 °C. The MS detector temperatures were 250 °C (transfer line) and 230 °C (ion source). Pesticide
residues were identified and quantified by comparing their retention times and peak heights with respect to external reference
standards.

2.5 Statistical Analysis
Descriptive statistics (mean and standard deviation) of the results were determined using Origin 8.0 software (OriginLab Corporation, Northampton, MA, USA). The data obtained were analysed using the SPSS 16.0 for Windows package (SPSS, Inc., Chicago, IL, USA). The data were subjected to a one-way analysis of variance (ANOVA) to test the overall variations and differences in mean concentrations of agrochemicals in the wastewater in the batch reactor systems. Furthermore, a post hoc Tukey test was used to assess the significant differences between the planted batch treatment groups relative to the control. Differences at p < 0.05 were considered statistically significant. A minimal sketch of this analysis pipeline is given below.
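For readers without SPSS, the same ANOVA-plus-Tukey analysis can be sketched in Python with SciPy and statsmodels; the group values below are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical final-day concentrations (ppm), three replicates per treatment.
groups = {
    "Typha":   [0.02, 0.03, 0.02],
    "Papyrus": [0.01, 0.01, 0.02],
    "Control": [0.35, 0.31, 0.33],
}

# One-way ANOVA across the treatment groups (as in the study design).
F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey HSD post hoc comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```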
3.0 Results and Discussion
3.1 Heavy metal removal
Figure 4 shows the performance of the different types of macrophytes in the removal of metals from wastewater. The data analysis revealed significant differences (P < 0.05) in the removal of heavy metals between the planted batches and the control. Figure 4 shows that the concentration of heavy metals decreased with time; however, a rapid drop in concentration levels was observed during day 1. This could be a result of dilution in the batch reactor system, which is not completely dry at the start of the experiment. It might also be associated with the different mechanisms taking place in the batch reactors, such as adsorption, precipitation, co-precipitation, complexation and ion exchange [30], before equilibrium is attained.
[Figure 4 shows four panels (Cu, Zn, Mn, Fe) plotting mean concentration (ppm) against time after first irrigation (days, 0-16) for Typha latifolia, Cyperus papyrus, Phragmites mauritianus, Cyperus alternifolius and the control.]
Figure 4: Variation of heavy metal removal in wastewater in planted treatments and control.



3.1.1 Influence of different types of macrophytes on iron (Fe) removal.
According to Figure 4, the macrophytes are capable of removing Fe from the wastewater. After 16 days of retention, a significant difference (P < 0.05) in the mean concentration of Fe relative to the control was observed for all the macrophytes. Among the various types of macrophytes, the highest removal capability was observed in the batch reactors planted with Cyperus papyrus and Typha latifolia, followed by those planted with Cyperus alternifolius and Phragmites mauritianus, where the initial iron concentration (3.515 ppm) dropped to 0.077 (±0.021) ppm, 0.142 (±0.015) ppm, 0.170 (±0.042) ppm and 0.252 (±0.026) ppm respectively. This observation revealed that macrophytes have different capabilities influencing the magnitude of Fe removal from wastewater. References [16; 29; 21] have highlighted that the mechanisms involved in Fe removal from wastewater are rhizofiltration and chemical processes such as precipitation. Similarly, macrophytes can play an important role in metal removal through adsorption and uptake by plants [19]. In the control reactors, the decrease in the mean level of iron to 0.624 (±0.048) ppm can be related to non-phyto mechanisms of heavy metal removal such as adsorption to substrates (e.g. gravel, particulates and soluble organics), cation exchange and chelation, and precipitation as insoluble salts, as explained by [18].

3.1.2 Influence of different types of macrophytes on manganese (Mn) removal.
There was a significant (P < 0.05) decrease in the mean concentration of manganese in the wastewater over the 16 day retention time for the batch reactors planted with macrophytes relative to the control (Fig. 4). The manganese concentration decreased from 3.508 ppm to 0.001 (±0.001) ppm, 0.017 (±0.006) ppm, 0.045 (±0.012) ppm and 0.127 (±0.033) ppm for the batch reactors planted with Typha latifolia, Cyperus papyrus, Cyperus alternifolius and Phragmites mauritianus respectively. The highest removal capability was shown by Typha latifolia and Cyperus papyrus, which reduced the manganese in the wastewater almost to completion. The control group also showed a decrease in manganese levels, to 1.177 (±0.104) ppm over the 16 day retention time; this decrease could be attributed to adsorption to the substrate, chemical precipitation and microbial interactions, as explained by [19]. Overall, the results indicated that aquatic macrophytes were very effective in the phytoremediation of manganese. According to [15], plants possess mechanisms which are able to stimulate metal bioavailability in the rhizosphere and enhance adsorption and uptake into their roots.

3.1.3 Variations among macrophytes in zinc (Zn) removal.
As shown in Figure 4, the planted batch reactors significantly (P < 0.05) affected the mean concentration of zinc relative to the control over the 16 day retention time. The batch reactors planted with macrophytes reduced the zinc level from an initial concentration of 2.921 ppm to 0.001 (±0.001) ppm, 0.001 (±0.001) ppm, 0.091 (±0.007) ppm and 0.025 (±0.022) ppm for Typha latifolia, Cyperus papyrus, Phragmites mauritianus and Cyperus alternifolius respectively. The greater removal occurred in the treatments planted with Typha latifolia and Cyperus papyrus, where zinc removal was almost complete by day 16. The control group also showed a reduction in the mean concentration of zinc, to 0.459 (±0.019) ppm. This reduction in the unplanted control system can be attributed to sorption onto particulates and settlement. Similar studies have indicated that more than 50% of heavy metals can be easily adsorbed onto particulate matter in a wetland and thus be removed from the water column by sedimentation [39].
3.1.4 Variations among macrophytes in copper (Cu) removal.
The results shown in Figure 4 indicate that the mean concentration of copper in the wastewater decreased with retention time. A sharp decrease in concentration was observed during day 1 of exposure, perhaps owing to the multiple mechanisms of heavy metal removal
in the batch reactor systems. As far as heavy metals are concerned, most removal in the batch reactor systems takes place during the initial stages, and the rate slows afterwards; this phenomenon has been observed by other authors [26; 1]. As shown in the figure, the batch reactors planted with macrophytes significantly affected the reduction in copper concentration (P < 0.05) as compared to the control. Cyperus papyrus and Typha latifolia achieved the greatest reduction in mean copper levels, from an initial concentration of 2.778 ppm to 0.001 (±0.001) ppm and 0.023 (±0.013) ppm respectively, followed by Cyperus alternifolius with 0.055 (±0.022) ppm and Phragmites mauritianus with 0.116 (±0.035) ppm. The mean level of copper observed in the control was 0.338 (±0.033) ppm. The significant reduction in copper levels in the batch reactors planted with the macrophytes relative to the control may be influenced by plant uptake and the filtration effect of the root system. The statements of [19; 7] confirm that macrophytes can contribute directly through uptake, sedimentation, adsorption and other mechanisms in the rhizosphere.
3.2 Pesticide removal
According to Figure 5, all the batch reactors planted with macrophytes reduced the mean concentration of the pesticides in the wastewater over the 12 day experimental period. However, there was no statistically significant difference between the planted batch treatments and the control group at the α = 0.05 level. The variation of pesticide removal in the planted reactors and the control is shown in Figure 5.

[Figure 5 shows three panels (L-Cyhalothrin, Endosulfan, Permethrin) plotting mean concentration (ppm) against time after first irrigation (days, 0-12) for Typha, C. papyrus, Phragmites, C. alternifolius and the control.]
Figure 5: Variation of pesticide removal in wastewater in planted treatments and control.
3.2.1 L. Cyhalothrin removal
The variation of pesticide removal in the wastewater in the reactors planted with macrophytes (treatments) and the control is shown in Figure 5. Statistical analysis showed no significant differences in L-cyhalothrin removal between the treatments at α = 0.05. However, over the 12 day experimental period Typha latifolia showed the greatest reduction of L-cyhalothrin in the wastewater, from an initial concentration of 5.132 ppm to 1.184 (±0.147) ppm, followed by Cyperus papyrus, Cyperus alternifolius and Phragmites mauritianus with mean concentrations of 2.116 (±0.290) ppm, 2.285 (±0.186) ppm and 2.437 (±0.186) ppm respectively. Meanwhile, the mean concentration in the control group was 3.093 (±0.126) ppm. These results demonstrate that the planted batch treatments removed L-cyhalothrin from the wastewater better than the control. Macrophytes can increase pollutant removal, including of pesticides, either directly through uptake or indirectly through enhanced rhizosphere degradation [32]. The reduction in the control group could be explained by the lipophilic nature of the pesticide: L-cyhalothrin is a pyrethroid insecticide, and its molecules rapidly dissipate from the water column and are strongly adsorbed onto particulates and other aquatic organisms. Reference [26] observed that L-cyhalothrin residues in water decrease rapidly if suspended matter and/or aquatic organisms (algae, macrophytes or aquatic animals) are present. The better performance of the planted batch treatments is influenced by the macrophytes, which serve as sites for adsorption, absorption and degradation of the pesticide.
3.2.2 Endosulfan removal
The differences observed in the removal trends (Figure 5) were not statistically significant at the α = 0.05 level between the planted batch reactors, and likewise no statistically significant difference was observed between the planted batch reactors and the control group. However, the analysis of the wastewater over the 12 days showed that the initial concentration of endosulfan (5.180 ppm) decreased with time (Figure 5). Among the planted batch treatments, Cyperus papyrus and Typha latifolia showed the greatest reduction of endosulfan in the wastewater, to mean levels of 1.742 (±0.171) ppm and 1.954 (±0.265) ppm respectively, followed by Cyperus alternifolius and Phragmites mauritianus, where the mean concentration levels dropped to 2.349 (±0.383) ppm and 2.349 (±0.358) ppm respectively. Meanwhile, the mean concentration of endosulfan in the control group dropped to 3.475 (±0.131) ppm. The results demonstrated that slightly better endosulfan removal was achieved by the planted batch reactors compared to the unplanted control group, indicating that macrophytes may influence endosulfan removal through several phytoremediation mechanisms such as plant uptake, phytodegradation, and sorption through the root system (rhizosphere). Pesticides that are sorbed are more likely to remain in the root zone, where they may be available for plant uptake and microbial or chemical degradation. The decrease in the control group may be influenced by sorption and bioremediation mechanisms due to the presence of biofilm, gravel and organic matter in the reactor system [4].
3.2.3 Permethrin removal
There were no statistically significant differences in permethrin removal between the planted batch reactors at the α = 0.05 level during the 12 day operation of the system, and likewise no statistically significant difference between the planted batch reactors and the unplanted control group. However, the analytical results showed that the level of permethrin in the wastewater decreased with time. Among the planted batch reactors, Cyperus papyrus, Typha latifolia and Cyperus alternifolius had the higher removal ability, reducing permethrin from an initial concentration of 5.187 ppm to 1.037 (±0.005) ppm, 1.338 (±0.151) ppm and 1.500 (±0.330) ppm respectively, followed by Phragmites mauritianus, where the mean concentration dropped to 1.865 (±0.196) ppm. The least removal was observed in the control group, with a reduction of permethrin to a mean concentration of 2.728 (±0.076) ppm. The decrease in pesticide in the control group can be explained by adsorption to substrates such as gravel and other particulates in the reactor. The macrophytes in the planted batch reactors showed a higher removal capability for permethrin in the wastewater, a phenomenon that may be influenced by several mechanisms such as phytodegradation, rhizofiltration, or uptake by plants, as explained by [5; 17; 45].
3.3 Kinetics of agrochemical removal
The change in mean concentrations of the selected agrochemicals in the batch experiments was described by first-order reaction kinetics, mathematically expressed as:

C = C₀ e^(−kt)

ln(C/C₀) = −kt ------------------------------ (Eq. 1)

where C is the concentration of the respective agrochemical (mg/l) at time t (d), C₀ is the initial concentration (mg/l) and k is the first-order rate constant (d⁻¹). A graph of ln(C/C₀) versus time was produced and the slope −k determined. The value of k was used to assess the removal of the agrochemicals in the batch treatments planted with the different types of macrophytes relative to the control; a higher removal rate constant implies a faster reduction of the concentration of the respective agrochemical. A minimal sketch of this fitting procedure is given below.
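As a minimal sketch of this fitting procedure, assuming illustrative (not measured) concentrations, k can be recovered as the negative slope of the linear fit of ln(C/C₀) against t:

```python
import numpy as np

# Illustrative concentration series (ppm) at the study's sampling days.
t = np.array([0, 1, 4, 8, 12, 16])           # days
C = np.array([3.5, 1.9, 1.1, 0.55, 0.28, 0.14])

# Eq. 1: ln(C/C0) = -k t, so k is the negative slope of the linear fit.
y = np.log(C / C[0])
slope, intercept = np.polyfit(t, y, 1)
k = -slope
r2 = np.corrcoef(t, y)[0, 1] ** 2
print(f"k = {k:.3f} per day, R^2 = {r2:.3f}")
```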
3.3.1 Kinetics of heavy metal removal
When ln(C/C₀) was plotted against t, linear relationships were obtained and the rate constant k was obtained as the negative slope of the line. All samples showed a linear fit with R² ≥ 0.9 (Figure 6). The magnitude of the k values was greater for the planted batch reactors than for the control, and the rate of removal varied between the planted batch reactors for the different heavy metals. Among the planted batch reactors, Cyperus papyrus had the greater removal of Cu and Fe, with k values of 0.3385 d⁻¹ and 0.1679 d⁻¹ respectively, and Typha latifolia had the greater removal of Mn and Zn, with k values of 0.4197 d⁻¹ and 0.4423 d⁻¹ respectively. The findings also revealed that macrophytes differ in their affinity towards different heavy metals. The rates of removal of the heavy metals were much higher than those of the pesticides.



Figure 6: Determination of first-order kinetic constant (k) for heavy metal removal. [Four panels of ln(C/C0) versus time (d), one per metal, each with linear fits for Typha, C. papyrus, Phragmites, C. alternifolius and the control. Fits recovered from the plot residue, in legend order:

Mn - Typha: y = -0.419x - 0.661 (R² = 0.926); C. papyrus: y = -0.247x - 0.951 (R² = 0.944); Phragmites: y = -0.165x - 0.473 (R² = 0.962); C. alternifolius: y = -0.211x - 0.717 (R² = 0.968); Control: y = -0.041x - 0.462 (R² = 0.942).
Fe - Typha: y = -0.155x - 0.828 (R² = 0.987); C. papyrus: y = -0.167x - 1.180 (R² = 0.990); Phragmites: y = -0.124x - 0.692 (R² = 0.981); C. alternifolius: y = -0.133x - 0.885 (R² = 0.985); Control: y = -0.087x - 0.442 (R² = 0.952).
Cu - Typha: y = -0.207x - 1.332 (R² = 0.987); C. papyrus: y = -0.338x - 1.135 (R² = 0.929); Phragmites: y = -0.125x - 1.185 (R² = 0.998); C. alternifolius: y = -0.153x - 1.423 (R² = 0.996); Control: y = -0.062x - 0.994 (R² = 0.975).
Zn - Typha: y = -0.442x - 1.143 (R² = 0.973); C. papyrus: y = -0.301x - 1.218 (R² = 0.939); Phragmites: y = -0.146x - 1.108 (R² = 0.994); C. alternifolius: y = -0.203x - 1.229 (R² = 0.970); Control: y = -0.062x - 0.891 (R² = 0.991).]

3.3.2 Kinetics of pesticide removal

Figure 7: Determination of first-order kinetic constant (k) for pesticide removal. [Three panels of ln(C/C0) versus time (days), one per pesticide, each with linear fits for Typha, C. papyrus, Phragmites, C. alternifolius and the control. Fits recovered from the plot residue, in legend order:

L-Cyhalothrin - Typha: y = -0.116x - 0.015 (R² = 0.984); C. papyrus: y = -0.088x - 0.079 (R² = 0.958); Phragmites: y = -0.072x - 0.016 (R² = 0.995); C. alternifolius: y = -0.101x + 0.007 (R² = 0.992); Control: y = -0.048x + 0.015 (R² = 0.995).
Endosulfan - Typha: y = -0.070x - 0.078 (R² = 0.975); C. papyrus: y = -0.086x - 0.009 (R² = 0.983); Phragmites: y = -0.055x + 0.006 (R² = 0.991); C. alternifolius: y = -0.062x - 0.033 (R² = 0.974); Control: y = -0.034x + 0.026 (R² = 0.985).
Permethrin - Typha: y = -0.107x - 0.052 (R² = 0.994); C. papyrus: y = -0.114x - 0.177 (R² = 0.983); Phragmites: y = -0.080x - 0.026 (R² = 0.988); C. alternifolius: y = -0.102x + 0.030 (R² = 0.973); Control: y = -0.049x - 0.009 (R² = 0.954).]

The results for the kinetic parameters of the pesticides (Figure 7) indicated that all samples showed a linear fit with R² ≥ 0.9. The
magnitude of the k values was greater for the planted batch treatments than for the control. Furthermore, the rate of removal varied between
the planted batch reactors for the different types of pesticides. Among the planted batch treatments, Cyperus papyrus showed the
highest k values for both endosulfan and permethrin removal, at 0.086 d^-1 and 0.114 d^-1 respectively. Likewise, Typha
latifolia had the highest k value, 0.116 d^-1, for the removal of L-Cyhalothrin.

Acknowledgement
The authors gratefully acknowledge the following for their technical assistance: Mr. Joseph Malulu of the Tanzania Pesticide Research
Institute (TPRI) for the pesticide analyses, and Mr. John Bomani of the Southern and Eastern African Mineral Centre (SEAMIC) for the heavy
metal analyses. This study was supported by a scholarship from the Tanzania Commission for Science and Technology (COSTECH).

4. Conclusion and Recommendation
Water pollution by agrochemicals from agricultural runoff is a serious environmental problem in many parts of the world, including
Tanzania. Agrochemicals are not easily degraded and thus require a preventative approach for a successful outcome. In this study,
the influence of four aquatic macrophytes, Cyperus papyrus, Typha latifolia, Cyperus alternifolius and Phragmites mauritianus,
on the phytoremediation of agrochemicals in wastewater was investigated. The results revealed that planted systems work better than
unplanted systems in the removal of agrochemical residues from wastewater, supporting their suitability for use in
phytoremediation of agrochemicals. Furthermore, the study demonstrated that plant type influences the removal of
agrochemicals: in this study, Cyperus papyrus and Typha latifolia showed the higher removal capability for most agrochemicals,
followed by Cyperus alternifolius and Phragmites mauritianus. The findings also showed that the rates of removal of heavy metals
were much higher than those of the pesticides; therefore, in designing wastewater treatment systems for agricultural and industrial lands,
pesticide removal should be used to size the systems. It is recommended that the experiment conducted in this research be up-scaled
to treat actual wastewater from agricultural industries, to establish long-term performance under various
environmental conditions such as organic loading and velocity.

REFERENCES:
[1] Aisien, F.A., Faleye, O., Aisien, E.T. (2010). Phytoremediation of heavy metals in aqueous solutions. Leonardo Journal of Sciences, 17: 37-46.
[2] Akerblom, M. (1995). Guidelines for environmental monitoring of pesticide residues for the SADC Region. SADC/ELMS, Monitoring Techniques Series.
[3] APHA (1998). Standard methods for the examination of water and wastewater, 18th Edition. American Public Health Association, Washington, DC.
[4] Braeckevelt, M., Rokadia, H., Imfeld, G., Stelzer, N., Paschke, H., Kuschk, P., Kastner, M., Richnow, H.H., Weber, S. (2007). Assessment of in situ biodegradation of monochlorobenzene in contaminated groundwater treated in a constructed wetland. Environ. Pollut. 148: 428-437.
[5] Brix, H. (1994). Functions of macrophytes in constructed wetlands. Wat. Sci. Tech. 29: 71-78.
[6] Bwire, K.M., Njau, K.N., Minja, R.J. (2011). Use of vetiver grass constructed wetland for treatment of leachate. Water Sci. Technol. 63(5): 924-930.
[7] Carvalho, P.F. (2006). Agriculture, pesticides, food security and food safety. Environmental Science and Policy, 9(7-8): 685-692.
[8] Chang, S.W., Lee, S.J., and Je, C.H. (2005). Phytoremediation of atrazine by poplar trees: toxicity, uptake, and transformation. J. Environ. Sci. Health, 40: 801-811.

[9] Cunningham, S.D., Shann, J.R., Crowley, D.E., and Anderson, T.A. (1997). Phytoremediation of contaminated water and soil. In: Kruger, E.L., Anderson, T.A., and Coats, J.R. (eds.), Phytoremediation of Soil and Water Contaminants, American Chemical Society Symposium Series 664, Washington, D.C., pp. 12-17.
[10] Dhir, B. (2010). Use of aquatic plants in removing heavy metals from wastewater. Int. J. Environ. Eng. 2: 185-201.
[11] Dipu, S., Anju, A.K., Salom Gnana Thanga, V. (2011). Phytoremediation of dairy effluent by constructed wetland technology. Environmentalist, 31(3): 263-268.
[12] EPA (2000). A Citizen's Guide to Phytoremediation. EPA 542-F-98-011. United States Environmental Protection Agency, p. 6. Available at: http://www.bugsatwork.com/XYCLONYX/EPA_GUIDES/PHYTO.PDF
[13] Garbisu, C., Alkorta, I. (2001). Phytoextraction: a cost-effective plant-based technology for the removal of metals from the environment. Bioresour. Technol. 77(3): 229-236.
[14] Hellar, H., Kishimba, M.A. (2005). Pesticide residues in water from TPC sugarcane plantations and environs, Kilimanjaro region, Tanzania. Tanz. J. Sci., 31: 13-22.
[15] Italiya, J.G., Shah, M.J. (2013). Phytoremediation: an ecological solution to heavy metal polluted water and evaluation of plant removal ability. The International Journal of Engineering and Science (IJES), 2(6): 26-36.
[16] Jayaweera, M.W., Kasturiarachchi, J.C., Kularatne, R.K.A. and Wijeyekoon, S.L.J. (2008). Contribution of water hyacinth (Eichhornia crassipes (Mart.) Solms) grown under different nutrient conditions to Fe-removal mechanisms in constructed wetlands. Journal of Environmental Management, 87: 450-460.
[17] Kadlec, R.H. and Wallace, S.D. (2008). Treatment Wetlands, 2nd edition. Taylor & Francis Group, Boca Raton, FL, USA.
[18] Kadlec, R.H., Knight, R.L. (1996). Treatment Wetlands. CRC Press, Lewis Publishers, Boca Raton, Florida, USA, p. 893.
[19] Kadlec, R.H., Knight, R.L., Vymazal, J., Brix, H., Cooper, P., Haberl, R. (2000). Constructed wetlands for pollution control: processes, performance, design and operation. IWA Scientific and Technical Report No. 8. IWA Publishing, London, UK.
[20] Khan, F.I., Hussain, T., Hejazi, R. (2004). An overview and analysis of site remediation technologies. Journal of Environmental Management, 71: 95-122.
[21] Khan, S., Ahmad, I., Shah, M.T., Rehman, S. and Khaliq, A. (2009). Use of constructed wetland for the removal of heavy metals from industrial wastewater. Journal of Environmental Management, 90: 3451-3457.
[22] Kihampa, C., Mato, R.R., and Mohamed, H. (2010). Residues of organochlorinated pesticides in soil from tomato fields, Ngarenanyuki, Tanzania. J. Appl. Sci. Environ. Manage., 14(3): 37-40.
[23] Kovacic, D.A., Twait, R.M., Wallace, M.P. and Bowling, J.M. (2006). Use of created wetlands to improve water quality in the Midwest - Lake Bloomington case study. Eco. Eng., 28(3): 258-270.
[24] Lema, E., Machunda, R., Njau, K.N. (2014a). Agrochemicals use in horticulture industry in Tanzania and their potential impact to water resources. Int. J. Biol. Chem. Sci. 8(2): 831-842.
[25] Lema, E., Machunda, R., Njau, K.N. (2014b, in press). Assessment of agrochemical residues in wastewater from selected horticultural farms in Arusha, Tanzania.
[26] Lim, J., Kang, H.M., Kim, L.H., Ko, S.O. (2008). Removal of heavy metals by sawdust adsorption: equilibrium and kinetic studies. Environ. Eng. Res. 13(2): 79-84.
[27] Li-Ming He, Troiano, J., Wang, A. and Goh, K. (2008). Environmental chemistry, ecotoxicity and fate of lambda-cyhalothrin. Reviews of Environmental Contamination and Toxicology. Springer.
[28] Lone, M.I., Zhen-Li, H., Stoffella, P.J., Xiao, Y. (2008). Phytoremediation of heavy metal polluted soils and water: progresses and perspectives. Journal of Zhejiang University Sci. 9: 210-220.
[29] Maine, M.A., Sun, N., Hadad, H., Sanchez, G. and Bonetto, C. (2009). Influence of vegetation on the removal of heavy metals and nutrients in a constructed wetland. Journal of Environmental Management, 90: 355-363.
[30] Matagi, S.V., Swai, D. and Mugabe, R. (1998). A review of heavy metal removal mechanisms in wetlands. Afr. J. Trop. Hydrobiol. Fish. 8: 23-35.
[31] Mirlean, N., Roisenberg, A., Chies, J.O. (2007). Metal contamination of vineyard soils in wet subtropics (southern Brazil). Environmental Pollution, 149: 10-17.
[32] Moore, M.T., Cooper, C.M., Smith, S., Cullum, R.F., Knight, S.S., Locke, M.A., Bennett, E.R. (2009). Mitigation of two pyrethroid insecticides in a Mississippi Delta constructed wetland. Environmental Pollution, 157: 250-256.
[33] MoW, Ministry of Water (2012). Guidelines for Water Resources Monitoring and Pollution Control. The United Republic of Tanzania.
[34] Mwegoha, W.J.S. (2008). The use of phytoremediation technology for abatement of soil and ground water pollution in Tanzania: opportunities and challenges. J. Sustain. Dev. Afr. 10(1): 140-156.
[35] Nyomora, A.M.S., Njau, K.N. and Mligo, L. (2012). Establishment and growth of vetiver grass exposed to landfill leachate. J. Solid Waste Technology and Management, 38(2): 82-92.
[36] Ravena, O. (2001). Ecological monitoring for water body management. In: Timmerman, J.G. (Ed.), Proceedings of the international workshop on information for sustainable water management (25-28 Sept 2000), pp. 157-167.
[37] Roongtanakiat, N., Tangruangkiat, S. and Meesat, R. (2007). Utilization of vetiver grass (Vetiveria zizanioides) for removal of heavy metals from industrial wastewaters. Science Asia, 33: 397-403.

[38] Sasmaz, A., Obek, E. and Hasar, H. (2008). The accumulation of heavy metals in Typha latifolia L. grown in a stream carrying secondary effluent. Ecological Engineering, 33: 278-284.
[39] Sheoran, A.S., Sheoran, V. (2006). Heavy metal removal mechanism of acid mine drainage in wetlands: a critical review. Miner. Eng. 19: 105-116.
[40] Siegel, S., and Lee, M. (2004). Validated multi-residue method for extraction and analysis of trace-level pesticides in surface water. Agilent Technologies, Inc., USA.
[41] Soltan, M.E. and Rashed, M.N. (2003). Laboratory study on the survival of water hyacinth under several conditions of heavy metal concentrations. Adv. Environ. Res., 7: 321-334.
[42] Stefani, G.D., Tocchetto, D., Salvato, M. and Borin, M. (2011). Performance of a floating treatment wetland for in-stream water amelioration in NE Italy. Hydrobiologia, 674: 157-167.
[43] Susarla, S., Medina, V.F., McCutchon, S.C. (2002). Phytoremediation: an ecological solution to organic chemical contamination. Ecological Engineering, 18: 647-658.
[44] Tome, F.V., Rodriguez, P.B., and Lozano, J.C. (2008). Elimination of natural uranium and Ra-226 from contaminated waters by rhizofiltration using Helianthus annuus L. Science of the Total Environment, 393: 351-357.
[45] USEPA (2000). A Handbook of Constructed Wetlands: A Guide to Creating Wetlands for Agricultural Wastewater, Domestic Wastewater, Coal Mine Drainage, and Stormwater in the Mid-Atlantic Region. Volume 1: General Considerations. United States Environmental Protection Agency, Washington, DC, USA.
[46] Zarcinas, B.A., Ishak, C.F., McLaughlin, M.J., Cozens, G. (2004). Heavy metals in soils and crops in Southeast Asia. Environ. Geochem. Health, 26(3): 343-357.
[47] Zhang, X., Xia, H., Li, Z., Zhang, P., and Gao, B. (2010). Potential of four forage grasses in remediation of Cd and Zn contaminated soils. Bioresour. Technol., 101: 2063-2066.


















Implementation of Decimation Filter for Hearing Aid Application
Prof. Suraj R. Gaikwad, Er. Shruti S. Kshirsagar and Dr. Sagar R. Gaikwad
Electronics Engineering Department, D.M.I.E.T.R. Wardha
email: surajrgaikwad@gmail.com
Mob. No: +91982389441

Abstract - A hearing aid is a small electronic device, worn in or behind the ear, for people with hearing loss. A hearing aid can
help people hear better in both quiet and noisy situations: it makes sounds louder so that a person with hearing loss can listen,
communicate, and participate more fully in daily activities. In this paper, we implement a digital filter for the hearing aid
application. The implemented filter is based on a multirate approach in which a high-sampling-rate signal is decimated to a low-sampling-rate
signal. The proposed decimation filter is designed and implemented using the Xilinx System Generator and Matlab Simulink.


Keywords - Digital Filter, CIC filter, FIR filter, Half band filter and Oversampling Concept.

INTRODUCTION

Filters are a basic component of all signal processing and telecommunication systems. The primary functions of a filter are one or
more of the following: (a) to confine a signal into a prescribed frequency band or channel; (b) to decompose a signal into two or more
sub-band signals for sub-band signal processing; (c) to modify the frequency spectrum of a signal; (d) to model the input-output relation
of a system such as voice production, musical instruments, telephone line echo, and room acoustics [2].

Hearing aids are primarily meant for improving hearing and speech comprehension. Digital hearing aids score over their analog
counterparts because they provide flexible gain besides facilitating feedback reduction and noise elimination.
Recent advances in DSP and microelectronics have led to the development of superior digital hearing aids [6]. Many researchers have
investigated algorithms suitable for the hearing aid application, which demands low noise, feedback cancellation, echo cancellation,
etc.; however, the toughest challenge is the implementation [8].

DIGITAL FILTER

A digital filter uses a digital processor to perform numerical calculations on sampled values of the signal. The processor may be a
general-purpose computer such as a PC, or a specialized DSP (Digital Signal Processor) chip [3]. The analog input signal must first be
sampled and digitized using an ADC (analog-to-digital converter). The resulting binary numbers, representing successive sampled
values of the input signal, are transferred to the processor, which carries out numerical calculations on them. These calculations
typically involve multiplying the input values by constants and adding the products together [7]. If necessary, the results of these
calculations, which now represent sampled values of the filtered signal, are output through a DAC (digital-to-analog converter) to
convert the signal back to analog form. In a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or
current. Figure 1 shows the basic setup of such a system.

Figure 1: Basic set-up of a digital filter
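As a minimal illustration of the multiply-and-add operation just described (not part of the paper's design; the coefficients below are arbitrary example values), a direct-form FIR filter can be sketched as:

```python
def fir_filter(x, h):
    """Direct-form FIR filter: each output sample is a weighted sum of the
    current and previous input samples, y[n] = sum_k h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]  # multiply by a constant, accumulate
        y.append(acc)
    return y

# Example: arbitrary 4-tap moving-average coefficients.
print(fir_filter([1, 2, 3, 4, 5], [0.25, 0.25, 0.25, 0.25]))
```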
CIC FILTER

In 1981, E. B. Hogenauer introduced an efficient way of performing decimation and interpolation. Hogenauer devised a flexible,
multiplier-free filter suitable for hardware implementation that can also handle arbitrary and large rate changes. These are known as
cascaded integrator-comb (CIC) filters [14].

The simplest CIC filter is composed of a comb stage and an integrator stage. The block diagram of a three-stage CIC filter is shown in
Figure 2.

Fig. 2 (a): Three-stage decimating CIC filter

Fig. 2 (b): Three-stage interpolating CIC filter
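The sketch below (an illustrative reading of the CIC structure, not the authors' Xilinx implementation) shows an N-stage CIC decimator: N cascaded integrators at the high rate, a downsampler by R, then N cascaded combs at the low rate, using only additions and subtractions:

```python
import numpy as np

def cic_decimate(x, R=16, N=4, M=1):
    """N-stage CIC decimator: integrators -> downsample by R -> combs.
    Multiplier-free, as in Hogenauer's design; M is the differential delay."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # integrator stages (running sums)
        y = np.cumsum(y)
    y = y[R - 1::R]                    # decimate by R
    for _ in range(N):                 # comb stages: y[n] - y[n-M]
        y = y - np.concatenate((np.zeros(M, dtype=np.int64), y[:-M]))
    return y / float((R * M) ** N)     # normalize by the DC gain (R*M)^N

# Example: decimating a constant input returns (after the transient) the constant.
print(cic_decimate(np.ones(256), R=16, N=4)[-3:])
```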

FIR FILTER

In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite-length input) is
of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have
internal feedback and may continue to respond indefinitely (usually decaying) [12]. The impulse response of an Nth-order discrete-time
FIR filter (i.e., with a Kronecker delta impulse input) lasts for N + 1 samples, and then settles to zero. The non-recursive nature of
the FIR filter offers the opportunity to create implementation schemes which significantly improve the overall efficiency of the decimator.

We have designed and implemented a conventional comb-FIR-FIR decimation filter. FIR filters offer great control over filter shaping
and linear-phase performance with waveform retention over the passband.

OVERSAMPLING CONCEPT

In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than the Nyquist
frequency. Theoretically a bandwidth-limited signal can be perfectly reconstructed if sampled at or above the Nyquist frequency.
Oversampling improves resolution, reduces noise and helps avoid aliasing and phase distortion by relaxing anti-aliasing
filter performance requirements [3].
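For the rates used later in this paper (a simple check, assuming the 1.28 MHz input and 20 kHz output rates given in the next section):

```python
fs_in, fs_out = 1.28e6, 20e3
print(fs_in / fs_out)   # overall decimation factor: 64.0
print(16 * 2 * 2)       # realized as CIC (16x) followed by two FIR stages (2x each)
```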

IMPLEMENTATION OF CIC-FIR-FIR DECIMATION FILTER STRUCTURE

The incoming oversampled signal at the rate of 1.28 MHz has to be down-sampled to the rate of 20 kHz, an overall decimation factor
of 64 (16 in the CIC stage, then 2 in each of the two FIR stages, as the stage sampling rates in Tables 2 and 3 imply). We have chosen a
passband frequency of 4 kHz because the human ear is most sensitive to sounds within the range up to 4 kHz. Figure 3 shows the
proposed decimation filter structure using the CIC-FIR-FIR cascade.

Fig. 3: Simulink model of CIC-FIR-FIR decimation filter

This Simulink model of the CIC-FIR-FIR decimation filter is designed using Matlab Simulink and the Xilinx System Generator. In this
design, the incoming 1.28 MHz sample stream is first down-sampled by the Xilinx CIC filter and then by two Xilinx DAFIR
filters. These FIR filters are based on the Distributed Arithmetic principle, which results in less hardware and lower power
consumption compared to other decimation filter implementations.
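Under that staging (and with stand-in filters: the boxcar CIC model, the remez-designed taps and the tap counts below are illustrative assumptions, not the paper's Xilinx DAFIR coefficients), the whole chain can be sketched with SciPy:

```python
import numpy as np
from scipy import signal

fs = 1.28e6                          # input sampling rate (Hz)
t = np.arange(int(fs * 0.01)) / fs   # 10 ms of a 1 kHz test tone
x = np.sin(2 * np.pi * 1e3 * t)

# Stage 1: CIC-like decimation by 16 (modeled here as a boxcar FIR + downsample).
h_cic = np.ones(16) / 16.0
y = signal.upfirdn(h_cic, x, up=1, down=16)        # 1.28 MHz -> 80 kHz

# Stage 2: compensation FIR, decimate by 2 (pass 20 kHz, stop 35 kHz at 80 kHz).
h1 = signal.remez(33, [0, 20e3, 35e3, 40e3], [1, 0], fs=80e3)
y = signal.upfirdn(h1, y, up=1, down=2)            # 80 kHz -> 40 kHz

# Stage 3: corrector FIR, decimate by 2 (pass 4 kHz, stop 15 kHz at 40 kHz).
h2 = signal.remez(33, [0, 4e3, 15e3, 20e3], [1, 0], fs=40e3)
y = signal.upfirdn(h2, y, up=1, down=2)            # 40 kHz -> 20 kHz

print(len(x), "->", len(y), "samples (decimation by ~64)")
```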
The overall specification of the CIC filter is given in Table 1.

No. of stages (N): 4
Sampling frequency (Fs): 1.28 MHz
Decimation factor (R): 16
Bit gain (G): 65536
No. of output bits (Bout): 32
Filter gain (Gf): 1
Scale factor (S): 1

Table 1: Specification of the CIC filter
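As a cross-check on Table 1, the bit gain is consistent with the standard CIC register-growth relations (shown below; the differential delay M = 1 and the 16-bit input width are inferences, not stated in the paper):

```latex
% Standard CIC growth (Hogenauer), with differential delay M = 1:
\[
  G = (RM)^{N} = (16 \cdot 1)^{4} = 65536, \qquad
  B_{\mathrm{out}} = N \log_2(RM) + B_{\mathrm{in}} = 16 + B_{\mathrm{in}}
\]
```

With a 16-bit input this gives the 32 output bits listed in the table.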


Fig. 4: Magnitude response of the 4-stage CIC filter

Figure 4 shows the magnitude response of the 4-stage CIC filter, in which the obtained attenuation is about 48 dB. This
magnitude response is plotted with N = 4, R = 16 and M = 1.
FIRST FIR FILTER DESIGN
Considering the application requirements, either FIR or IIR filter structures could be used to meet the design specifications. FIR
filters offer great control over the filter shaping and linear-phase performance with waveform retention over the passband. Due to
its all-zero structure, the FIR filter has the linear phase response necessary for an audio application, but at the expense of a high filter
order. An IIR filter can be designed with a much smaller order than the FIR filter, at the expense of nonlinear phase; it is very difficult
to design a linear-phase IIR filter. Thus, we have designed an FIR filter as the compensation filter. The specification of this FIR filter
is given in Table 2.
Sampling frequency (Fs): 80 kHz
Passband frequency (Fpass): 20 kHz
Stopband frequency (Fstop): 35 kHz
Normalized transition width (Δf): 0.1875
Passband attenuation (Apass): 1 dB
Stopband attenuation (Astop): 85 dB
Filter length (N): 12

Table 2: Specification of the first FIR filter
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

459 www.ijergs.org



Fig. 5: Magnitude response of the first FIR filter

Figure 5 shows the magnitude response of the first FIR filter, in which the obtained stopband attenuation is about 85 dB. This
magnitude response is plotted with Fpass = 20 kHz and Fstop = 35 kHz.
SECOND FIR FILTER DESIGN
An additional FIR filter is designed to push out-of-band undesired signals further down. An FIR filter is used in the last stage instead
of a shaping filter for lower power consumption, because a shaping filter has more taps. The second FIR filter is used as a corrector
filter with a passband of 4 kHz, because the human ear is most sensitive to sounds up to 4 kHz. From the frequency
response of the second FIR filter it can be seen that a stopband attenuation of more than 100 dB is obtained, which is suitable for this
corrector filter. The specification of the second filter is given in Table 3.
Sampling frequency (Fs): 40 kHz
Passband frequency (Fpass): 4 kHz
Stopband frequency (Fstop): 15 kHz
Normalized transition width (Δf): 0.275
Passband attenuation (Apass): 1 dB
Stopband attenuation (Astop): 100 dB
Filter length (N): 8

Table 3: Specification of the second FIR filter


Fig. 6: Magnitude response of the second FIR filter

Figure 6 shows the magnitude response of the second FIR filter, in which the obtained stopband attenuation is more than 100 dB.
This magnitude response is plotted with Fpass = 4 kHz and Fstop = 15 kHz.
CIC-HALF BAND FIR-FIR DECIMATION FILTER STRUCTURE
This decimation filter is implemented using a CIC-Half band FIR-FIR cascade, and its operation is very similar to that of the
CIC-FIR-FIR filter. The incoming oversampled signal at the rate of 1.28 MHz has to be down-sampled to the rate of 20 kHz, with a
chosen passband frequency of 4 kHz, again because the human ear is most sensitive to sounds up to 4 kHz.
A half-band IIR filter can have fewer multipliers than an FIR filter for the same sharp cutoff specification. An IIR elliptic half-band
filter implemented as a parallel connection of two all-pass branches is an efficient solution; the main disadvantage of elliptic
IIR filters is their very nonlinear phase response [9]. To overcome the phase distortion, one can use optimization to design an IIR filter
with an approximately linear phase response, or apply double filtering with the block processing technique for real-time
processing. For the appropriate use of digital filter design software in half-band filter design, it is necessary to calculate the exact
relations between the filter design parameters in advance; an accurate method exists for the FIR half-band filter.

We have designed a CIC-Half band FIR-FIR decimation filter using a Matlab Simulink model and the Xilinx System Generator, for
the same specification as the CIC-FIR-FIR decimation filter; the resulting Simulink model is shown in Figure 7.

Fig. 7: Simulink model of CIC-Half band FIR-FIR Decimation filter
This Simulink model of the CIC-Half band FIR-FIR decimation filter is designed using Matlab Simulink and the Xilinx System Generator.
In this design, the incoming 1.28 MHz sample stream is first down-sampled by the Xilinx CIC filter and then by two Xilinx
DAFIR filters; in this case, the first DAFIR filter is configured as a half-band FIR filter. These FIR filters are based on the Distributed
Arithmetic principle, which results in less hardware and lower power consumption compared to other decimation filter implementations.
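A defining property of a half-band FIR filter, and the reason it saves hardware, is that nearly half of its coefficients are exactly zero. The sketch below (illustrative only; not the paper's Xilinx DAFIR configuration) designs one and checks that property:

```python
import numpy as np
from scipy import signal

# A half-band lowpass FIR: cutoff at fs/4, i.e. 0.5 in Nyquist units.
numtaps = 31
h = signal.firwin(numtaps, 0.5)

# Half-band property: taps at even offsets from the center are (numerically)
# zero, so they need no multipliers in a hardware implementation.
center = numtaps // 2
even_offsets = h[center + 2::2]
print(np.max(np.abs(even_offsets)))   # ~1e-17, effectively zero
```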


Fig. 8: Magnitude response of the half-band FIR filter

Figure 8 shows the magnitude response of the half-band FIR filter, in which the obtained stopband attenuation is more than 50 dB.
RESULT

Decimation Filter Architecture | Number of Taps | Number of Slices | Number of Flip-Flops | LUTs | IOBs
CIC-FIR-FIR Filter | 20 | 2644 | 4769 | 3561 | 32
CIC-Half band FIR-FIR Filter | 14 | 2548 | 4729 | 3394 | 32

Table 4: Comparison between decimation filter architectures
Table 4 shows the FPGA resources used for the CIC-FIR-FIR and CIC-Half band FIR-FIR filter designs. The CIC-Half band FIR-FIR
filter requires fewer taps thanks to the half-band filter, and it also uses fewer slices, flip-flops and LUTs than the CIC-FIR-FIR filter.
Thus the area used and the power consumption are lower for the CIC-Half band FIR-FIR design. Hence we conclude that the
designed CIC-Half band FIR-FIR decimation filter is a hardware-saving structure.
CONCLUSION

The decimation filter is designed for an oversampled input for the audio application. The CIC-FIR-FIR and CIC-Half band FIR-FIR
filters are designed and compared in terms of storage requirement, area used and power consumption for the same specifications. It is
observed that the CIC-Half band FIR-FIR filter requires less storage for filter coefficients, less area and less power consumption than
the CIC-FIR-FIR filter. Hence, the CIC-Half band FIR-FIR filter is more efficient than the CIC-FIR-FIR filter.

REFERENCES

[1] L.C. Loong and N.C. Kyun, "Design and Development of a Multirate Filters in Software Defined Radio Environment", International Journal of Engineering and Technology, Vol. 5, No. 2, 2008.
[2] Suraj R. Gaikwad and Gopal S. Gawande, "Implementation of Efficient Multirate Filter Structure for Decimation", International Journal of Current Engineering and Technology, Vol. 4, No. 2, April 2014.
[3] Fredric J. Harris and Michael Rice, "Multirate Digital Filters for Symbol Timing Synchronization in Software Defined Radios", IEEE Journal on Selected Areas in Communications, vol. 19, no. 12, December 2001.
[4] Ronald E. Crochiere and Lawrence R. Rabiner, "Further Considerations in the Design of Decimators and Interpolators", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-24, No. 4, August 1976.
[5] Suraj R. Gaikwad and Gopal S. Gawande, "Design and Implementation of Efficient FIR Filter Structures using Xilinx System Generator", International Journal of Scientific Research and Management, Volume 2, Issue 3, March 2014.
[6] University of Newcastle upon Tyne, "Multirate Signal Processing", EEE305, EEE801 Part A.
[7] Ljiljana Milic, Tapio Saramaki and Robert Bregovic, "Multirate Filters: An Overview", IEEE, 1-4244-0387, 2006.
[8] L.D. Milic and M.D. Lutovac, "Design multirate filtering", Idea Group Publishing, pp. 105-142, 2002.
[9] S.K. Mitra, "Digital Signal Processing: A Computer based approach", The McGraw-Hill Companies, 2005.
[10] Yonghao Wang and Joshua Reiss, "Time domain performance of decimation filter architectures for high resolution sigma delta analogue to digital conversion", Audio Engineering Society Convention Paper 8648, presented at the 132nd Convention, April 2012.
[11] Kester, "Mixed-signal and DSP Design Techniques", Norwood, MA: Analog Devices, Ch. 3, pp. 16-17, 2000.
[12] Ljiljana D. Milic, "Efficient Multirate Filtering", Software & Systems Design, 2009.
[13] Damjanovic, S., Milic, L. and Saramaki, T., "Frequency transformations in two-band wavelet IIR filter banks", Proceedings of the IEEE Region 8 International Conference on Computer as a Tool, EUROCON 2005.
[14] Hogenauer, E., "An economical class of digital filters for decimation and interpolation", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 29, No. 2, pp. 155-162, 1981.
[15] N.J. Fliege, "Multirate digital signal processing", New York: John Wiley & Sons, 1994.
[16] P.P. Vaidyanathan, "Multirate systems and filter banks", Englewood Cliffs, NJ: Prentice Hall, 1993.
[17] A.I. Russel, "Efficient rational sampling rate alteration using IIR filters", IEEE Signal Processing Letters, vol. 7, pp. 6-7, Jan. 2000.
[18] M.D. Lutovac and L.D. Milic, "Approximate linear phase multiplierless IIR half-band filter", IEEE Signal Processing Letters, vol. 7, pp. 52-53, March 2000.
[19] Matthew P. Donadio, "CIC Filter Introduction", m.p.donadio@ieee.org, 18 July 2000.
[20] Fredric J. Harris and Michael Rice, "Multirate Digital Filters for Symbol Timing Synchronization in Software Defined Radios", IEEE Journal on Selected Areas in Communications, vol. 19, no. 12, December 2001.


Identification and Classification of Cloud-dense Areas in Infrared Weather Imagery using Intensity Contours
D. Poobathy (1), Dr. R. Manicka Chezian (2)
(1) Research Scholar, Dr. Mahalingam Centre for Research and Development, NGM College, Pollachi, India; poobathy.d@gmail.com
(2) Associate Professor, Dr. Mahalingam Centre for Research and Development, NGM College, Pollachi, India; chezian_r@yahoo.co.in
Abstract - An alternative method to detect cloud density over geographic locations in satellite infrared images, for the prediction
and nowcasting of weather and precipitation, has been derived and tested. Two different images captured by the INSAT and METEOSAT
satellites were used for the analysis. The images cover different ranges and periods of time, showing the geographical area of
India and its nearest subcontinents. The techniques applied for the cloud study are density slicing and image contouring; the core idea
behind the technique is the use of the various intensity levels of gray in monochromic images. Three basic steps are involved in this task:
grayscale conversion, plotting contour lines, and providing labels. Two contouring techniques were applied to carry out the tests. After
the experiments, the cloud-dense areas were identified and classified into five categories, indicated by five colors: a circle-like white
region covered by red indicates that a cyclonic circulation has formed at that location, while a white region bounded by blue curves
indicates no cloud over that particular surface.
Keywords - Cloud Intensity, Density Slicing, Image Contouring, Infrared Vision, Weather Image
1. INTRODUCTION
Weather is the state of the atmosphere at a place and time as concerns rain, wind, temperature, etc.; from a meteorological perspective,
most weather occurrences happen in the troposphere [1], just under the stratosphere. Meteorological conditions change primarily because
temperature and moisture differ from one location to another. The long-range differences (various times of a year) make
climates or seasons. These differences can occur due to the sun angle at any particular location, which differs by latitude from the
tropics. Weather changes make a crucial impact on the lives of living beings by way of rain, heat waves, storms, cyclones,
typhoons, hurricanes, snowfall, etc.
Weather is predicted by approaches such as numeric (mathematical) models, radiosondes and weather satellite images. Of
these methods, the numeric model is the most traditional forecasting technique; nowadays meteorological departments use image-processing
systems for weather prediction. Satellites have been launched particularly for capturing the earth's surface to detect cloud formation,
moisture density, and convective clouds. The idea behind weather imaging is maps, which started to be used in the mid nineteenth century
to formulate a theory on storm systems. Isotherm maps show the temperature gradients, which can help to locate weather fronts [2]. At
present, weather satellites are used to monitor the current state of the atmosphere. These satellites are polar (equator) orbiting,
spinning at the speed of the Earth and appearing to stay at the same location. They send real-time captured image data to the ground
stations and observe the surface using various channels of the electromagnetic spectrum. The visible and infrared are the two most widely
used channels. The drawback of the visible channel is that it cannot be applied to areas in night hours; however, that is not an
obstacle for infrared vision.

2. WEATHER IMAGES

2.1 Infrared images
The infrared (thermal) images are recorded by sensors called scanning radiometers, used to calculate cloud heights and types, locate
ocean surface features, and measure land and surface water temperatures. Infrared satellite imagery can be used effectively for tropical
cyclones with a visible eye pattern, using the Dvorak technique, where the difference between the temperature of the warm eye and the
surrounding cold cloud tops can be used to determine the cyclone's intensity (colder cloud tops generally indicate a more intense
storm) [3]. Infrared pictures show ocean tides or cyclones and map currents such as the Gulf Stream, which are valuable to the shipping
industry. Farmers and anglers are interested in knowing land and water temperatures to protect their crops against frost or increase their
catch from the sea; even an El Nino phenomenon can be spotted. Coloring and contouring techniques are used to convert gray-shaded
infrared images to color for easier identification of the preferred information.
TABLE 1: Visible and Infrared spectrum properties

Spectrum | Frequency (Hz) | Wavelength (µm)
Visible | 10^15 | 0.6 µm - 1.6 µm
Infrared | 10^13 - 10^14 | 3.9 µm - 13.4 µm

Table 1 indicates the common spectrum bands used by most geostationary-orbit satellites to produce images of the Earth's surface and
atmosphere. These spectra have a medium travel distance but are the most suitable for manual processing.
Visible spectrum: clouds are covered only during the daytime; not suitable for night vision.

Infrared spectrum: 3.9 µm - 7.3 µm (water vapour); 8.7 µm, 13.4 µm (thermal imaging) [4].

2.2 Cloud density
The weather system is highly complex because it includes numerous elements; among them, cloud is a very important factor. Clouds
are composed primarily of small water droplets and, if it is cold enough, ice crystals. Cloud appears with various shapes, different gray
scales and unclear boundaries in remote sensing images [5]. Cloud consists of different layers. The formation of such layer clouds is
primarily due to local meteorological conditions, in which increasingly moist air cools until it becomes sufficiently supersaturated with
water vapour to allow condensation on submicron-diameter atmospheric particles [6]. Cirriform, waveform, cumuliform and stratiform
are the various kinds of clouds based on density. Cirriform clouds are very wispy and feathery looking; waveform clouds occur in sheets
or patches with wavy, rounded masses or rolls; cumuliform clouds are usually puffy in appearance, similar to large cotton balls; and
stratiform clouds are horizontal, layered clouds that stretch out across the sky like a blanket [7]. The basic classification implies three
elementary categories, i.e. high, middle and low clouds. Thus, it is possible to classify clouds based upon their density level, which
would be an accurate basis for precipitation nowcasting. The volume of cloud present in the atmosphere at a given location is
calculated in terms of mass per cubic metre (m³); the same term is applied for finding the mass of air. A low-intensity cloud
may be fog or lead to drizzling; medium-density clouds provide scattered rainfall up to 10 mm; and high-density clouds may be a
severe cyclone (hurricane, typhoon) pouring more than 25 mm of rainfall.

3. METHODOLOGY
The weather images are analyzed for both forecasting and nowcasting predictions, and applied to statistical study. Considerations
of flood, drought, cyclone and monsoon are made using collections of satellite imagery. These image datasets enable
meteorologists and weather analysts to make decisions easily.
Density slicing and image contouring are closely related methods known in different dimensions. Here, a single method (contouring)
is used to explore the infrared image and map different color codes to the variability in density. Density slicing is the process of
pixel classification based on intensity, while contouring is the process that groups the classified pixel categories and draws curves or
plots streams of points.


3.1 Density Slicing
The term density slicing belongs to false color image processing: colors are allotted to pixels according to their gray values, based on
the intensity levels, i.e. gray-level detail is replaced with specified color detail. Here, each gray level is assigned an equalizing color
pattern. Image noise can also be reduced by using density slicing. As a result of slicing, the image can be segmented, or contoured,
into slices of parallel gray-level pixels. In weather image processing, satellite imagery is analyzed to enhance the information
gathered from an individual brightness band. This is done by dividing the range of brightnesses in a single band into intervals, then
allocating each interval to a color [8].

g(x_i, y_i) = { 1, if f(x_i, y_i) < T_1
                0, if f(x_i, y_i) >= T_1        (1)

where i is the pixel location and n the maximum intensity level. At each pixel, the gray-level detail is divided into categories for color
allocation. The slicing process starts with the f(x_i, y_i) >= T_1 condition, which indicates pixel data = 0 = white; the peak-intensity
pixel data = 1 = black. In between, the other white-to-black levels are categorized in equal intervals.


Figure 1: Density slicing graph with five color levels

Figure 1 shows the graph describing how an infrared image is transformed into a color-contoured map. The y-axis denotes
intensity and the x-axis indicates pixel data. Black pixels take the lowest (zero) value; the highest intensity level, represented by one,
corresponds to white pixels; in between, 0.5 may represent a 100% gray color. Five different color curves are plotted on the graph,
separating black to white pixels equally.
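A minimal sketch of this kind of five-level density slicing (illustrative only; the equal-interval level boundaries follow the scheme described above, and the array data is hypothetical rather than from the paper's images):

```python
import numpy as np

def density_slice(gray, n_levels=5):
    """Classify each pixel of a grayscale image (values in [0, 255])
    into one of n_levels equal-width intensity slices (0 .. n_levels-1)."""
    edges = np.linspace(0, 255, n_levels + 1)
    # np.digitize maps each pixel to the interval it falls in.
    return np.clip(np.digitize(gray, edges[1:-1]), 0, n_levels - 1)

# Hypothetical 3x3 "image": slice indices replace the raw gray values.
img = np.array([[0, 60, 120], [140, 200, 255], [30, 90, 180]])
print(density_slice(img))
```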

3.2 Image Contouring
Edges of a particular surface in the image are connected to make a region boundary; this connecting activity is called a contour. Contours
may be plotted over the boundaries or density variations on the image itself [9]. A contour may be open or closed. An open contour
may be part of a region boundary that does not project a full boundary. Closed contours correspond to region boundaries, and the pixels
in the region may be found by a filling algorithm. Here, open contours are produced by single-loop plots, whereas closed contouring uses
plots looped more than once. Edge detection is the ideology most relevant to contouring, because both find regional
boundaries; in addition, contours provide extra detail about intensity. The Canny edge detector is applied for such operations [10].
A contour is represented as a list of edges or by a curve, i.e. each curve is segregated by the density at the particular location where the
curve is plotted [11]. These contours are the outlines given by:

C = (s_0, s_1, ..., s_{n-1})        (2)

where C is a contour vector of length n, and

|C| = ( Σ_{i=0}^{n-1} s_i² )^{1/2}        (3)

where |C| is the length of the contour and s_i is a vector.

The image contours follow density slicing, i.e. each contour line is plotted based on the density. The lines, drawn as curves, are
classified into different colors for the different intensities in the image.



Figure 2: 3-D graph for the infrared image with different density levels

Figure 2 shows a three-dimensional view of the contoured layers of the infrared image meteosat.jpg. The x and z axes denote the pixel
map, and the y axis represents the density of those pixels. The maximum intensity level is set as 250 and the minimum level as 50. The
infrared image is categorized into five layers according to their gray-level details. Pixels of equal value are connected by plotting curves
between them; then, to separate the layers, five different colors are assigned to them.

4. PROCEDURE
The cloud density identification process consists of four tasks; in some cases it needs one more task, converting an RGB (Red Green
Blue) infrared image to a grayscale monochromic image [12]. Figure 3 illustrates the entire procedure of the image analysis. Satellite
infrared images are taken for contour analysis as the first step of the process; the second step is needed in circumstances where an RGB
image is used for the study. When color thermal images are used for density slicing or contouring, the image must be converted to
grayscale, because at the pixel-analysis stage the image matrix should be two-dimensional. The third stage is to plot contour lines based
on intensity, and it is possible to provide annotation based on the contour plots or curves. The final steps are the identification and
classification of cloud with reference to the contour color layers.





















Figure 3: The flow diagram for the cloud classification process: Satellite Infrared Image -> RGB to Grayscale Conversion -> Intensity Contouring of Image -> Identification of Cloud -> Classification of Cloud.

TABLE 2: Contour colors and custom ranges

Color | Range
(swatch 1) | 250
(swatch 2) | 200
(swatch 3) | 150
(swatch 4) | 100
(swatch 5) | 50

(The color column appeared as swatches in the original; Section 5 indicates the 250 level is drawn in red and the lowest level in blue.)

Table 2 shows the colors assigned to the contour lines and the corresponding numerical ranges. These ranges are applied to segregate
the curves for analyzing cloud variations in the imagery. The colors and ranges given in the table are fully user defined; the
numbers bear no direct relevance to rainfall. The ranges denote the lowest-intensity cloud to the highest in the atmosphere.
Two satellite infrared images, captured by the two different satellites INSAT (Indian National Satellite System) and METEOSAT
(METEOrology SATellite), were taken for experimentation from the public-access websites of the IMD (Indian Meteorological Department)
and EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites). Both images are infrared thermal
pictures, named insat.jpg and meteosat.jpg; the first is grayscale by default, the second is color thermal imagery. The
images are labeled A and B respectively and differ in size, capture date-time, capturing satellite and coverage
area. The test images were loaded into the Matlab application for the experimentation.
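An equivalent of this workflow can be sketched outside Matlab as well; the snippet below is illustrative, with a synthetic array standing in for insat.jpg/meteosat.jpg and a standard luminance formula assumed for the RGB-to-grayscale step, plotting contours at the five levels of Table 2:

```python
import numpy as np
import matplotlib.pyplot as plt

def to_gray(rgb):
    """RGB -> grayscale using the common luminance weights (assumed here)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# Synthetic stand-in for a satellite IR frame: a bright blob on a dark field.
yy, xx = np.mgrid[0:200, 0:300]
gray = 255 * np.exp(-((xx - 180) ** 2 + (yy - 90) ** 2) / 2000.0)

levels = [50, 100, 150, 200, 250]          # the Table 2 ranges
cs = plt.contour(gray, levels=levels)      # one colored curve per level
plt.clabel(cs, fmt="%d")                   # annotate curves with their level
plt.title("Intensity contours at five density levels")
plt.savefig("contours.png")
```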

5. RESULTS AND DISCUSSIONS
After carrying out the tests, the images underwent grayscale conversion and were then converted into bi-color images sliced by intensity
pattern; a new map was created that illustrates an outline of the given image with a variety of color curves (contours), and finally the
high-density clouds were noted with the label of range 250. The contouring was done in two ways: single contouring, and contouring
with fifty loops.
When single contouring of image A was done, the resulting map showed 5 heavy-cloud areas detected on the INSAT image,
represented by the customized range 250 marked on the map (Fig. 4-c) in red. Fig. 4-b makes the results much easier to read, as it
draws closed contours: a white region surrounded by dark red shows a depression, low-pressure area or cyclone, while a closed
region surrounded by dark blue allows us to conclude that high pressure, or clear sky without any clouds, is present. Image B required
one more step than A, the grayscale conversion, and was then contoured on the map with five colored curves. Fig. 5-c shows two
well-marked cloud-dense areas at the locations (x = 600, y = 200) and (x = 600, y = 100). The tiny cloud-dense locations were detected
in Fig. 5-d with open contour labels; it shows that twenty-four high-density (250) clouds were spotted on the map.

Figure 4: (a) INSAT infrared image; (b) map with 50 contour loops; (c) single-contoured map with high cloud-dense areas annotated.

Figure 5: (a) EUMETSAT color infrared image; (b) grayscale-converted infrared image; (c) image after 50 contours; (d) single-contoured and cloud-level labeled map.

6. CONCLUSION
Satellite images are analyzed for weather predictions. This paper discusses an alternative way of precipitation nowcasting by analyzing
cloud density, using infrared satellite images; two different images from two satellites were taken for the experiment.
The technique behind this analysis is image contouring, a part of density slicing. Image contouring was done in two ways, open and
closed. From the experiments we conclude that, by using the closed-contour technique, it is possible to identify well-marked low pressures,
cyclones and depressions, while single contours predict cloud-dense locations and also annotate cloud density levels on the
image. With these techniques the clouds were identified and classified into five categories. In future, this concept can be
elaborated for the purpose of weather forecasting and automated rainfall annotations for specified sub-areas represented on the
imagery.

REFERENCES:
[1] Glossary of Meteorology. "Hydrosphere" (http://amsglossary.allenpress.com/), retrieved on 27 June 2008.
[2] DataStreme Atmosphere (2008-04-28). "Air Temperature Patterns". American Meteorological Society. Archived from the original on 2008-05-11; retrieved 2010-02-07.
[3] Chris Landsea. "Subject: H1) What is the Dvorak technique and how is it used?", Hurricane Research Division, retrieved on January 3, 2009.
[4] EUMETSAT Meteosat Second Generation (MSG) Spectrum.
[5] Kai Liu, Zheng Kou, "The Study on the Segmentation of Remote Sensing Cloud Imagery", 3rd International Conference on Multimedia Technology (ICMT), pp. 1372-1379, 2013.
[6] R. Giles Harrison and Maarten H. P. Ambaum, "Electrical signature in polar night cloud base variations", Environmental Research Letters, IOP Publishing, p. 7, 5 March 2013.
[7] Lei Liu et al., "Cloud Classification Based on Structure Features of Infrared Images", Journal of Atmospheric and Oceanic Technology, pp. 410-417, March 2011.
[8] J. B. Campbell, "Introduction to Remote Sensing", 3rd edition, Taylor & Francis, p. 153, 2002.
[9] Amal Dev Parakkat et al., "A Graph based Geometric Approach to Contour Extraction from Noisy Binary Images", Computer-Aided Design & Applications, 11(a), pp. 1-12, 2014.
[10] Poobathy, D., and R. Manicka Chezian, "Edge Detection Operators: Peak Signal to Noise Ratio Based Comparison", International Journal of Image, Graphics and Signal Processing (IJIGSP), 6, no. 10 (2014): 55.
[11] Nello Zuech, Richard K. Miller, "Machine Vision for Industrial Robots", Springer Science & Business Media, pp. 91-97, 31 August 1989.
[12] Nishad PM and Dr. R. Manicka Chezian, "Various colour spaces and colour space conversion algorithms", Journal of Global Research in Computer Science, Volume 4, No. 1, pp. 44-48, January 2013.





















Synchronization and Time Slot-Based Method for Vampire Attacks Detection in Wireless Sensor Networks
(I) N. Keerthikaa, MCA; (II) K. Devika, M.Sc., MCA., M.Phil.
(I) Research Scholar, Bharathiar University, Coimbatore; (II) Assistant Professor, CS
(I, II) Dept. of Computer Science, Maharaja Co-Education Arts and Science College, Perundurai, Erode 638052.
(I) Email id: keerthinagarajan@gmail.com; (II) Email id: devika_tarun@yahoo.co.in

ABSTRACT: Ad hoc Wireless Sensor Networks (WSNs) promise exciting new applications in the near future, such as ubiquitous on-demand
computing power, continuous connectivity, and instantly deployable communication for military and first responders. Such
networks already monitor environmental conditions, factory performance, and troop deployment, to name a few applications. As
WSNs become more and more crucial to the everyday functioning of people and organizations, availability faults become less
tolerable: lack of availability can make the difference between business as usual and lost productivity. Ad hoc low-power wireless
networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily
on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing
protocol layer, which permanently disable networks by quickly draining nodes' battery power. These Vampire attacks are not
specific to any particular protocol, but rather rely on the properties of many popular classes of routing protocols. It is found that all
examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as
few as one malicious insider sending only protocol-compliant messages. The carousel attack and the stretch attack are the possible
attack scenarios. To avoid such attacks, an attestation-based forwarding scheme and loose source routing are proposed. The thesis
develops these concepts using the Content and Presence Multicast Protocol (CPMP) with the FPD (Future Peak Detection) and RFPD
(Randomized Future Peak Detection) algorithms, which nodes use to send updates to their neighbors; the updates contain the relative
time of their sender's next transmission. To address the energy efficiency problem, the algorithms synchronize the
transmission times of all the nodes in the system. Transmission synchronization presents energy-saving opportunities through dynamic
power management of the network interface; that is, nodes can switch off their wireless interfaces between transmissions. However,
in uncontrolled ad hoc environments, a single malicious user can easily disrupt network stability and synchronization, affecting either
the nodes' power savings or their ability to receive updates from their neighbors.
Keywords: Carousel attack, Stretch attack, Future Peak Detection, Content and Presence Multicast Protocol, Randomized Future Peak
Detection, Sensor.

1. INTRODUCTION
1.1. Ad Hoc Wireless Sensor Network
Ad-hoc wireless sensor networks (WSNs) promise exciting new applications in the near future, such as ubiquitous on-demand
computing power, continuous connectivity, and instantly deployable communication for military and first responders. Such
networks already monitor environmental conditions, factory performance, and troop deployment, to name a few applications. As
WSNs become more and more crucial to the everyday functioning of people and organizations, availability faults become less
tolerable: lack of availability can make the difference between business as usual and lost productivity, power outages, environmental
disasters, and even lost lives. Thus high availability of these networks is a critical property, and should hold even under malicious
conditions. Due to their ad hoc organization, wireless ad hoc networks are particularly vulnerable to denial of service (DoS) attacks,
and a great deal of research has been done to enhance survivability. While these schemes can prevent attacks on the short-term
availability of a network, they do not address attacks that affect long-term availability: the most permanent denial of service attack
is to entirely deplete nodes' batteries.


These attacks are distinct from previously studied DoS, reduction of quality (RoQ), and routing infrastructure attacks, as they
do not disrupt immediate availability, but rather work over time to entirely disable a network. While some of the individual attacks are
simple, and power-draining and resource exhaustion attacks have been discussed before, prior work has been mostly confined to other
levels of the protocol stack, e.g., medium access control (MAC) or application layers; to our knowledge there is little discussion,
and no thorough analysis or mitigation, of routing-layer resource exhaustion attacks.

Vampire attacks are not protocol-specific, in that they do not rely on design properties or implementation faults of particular
routing protocols, but rather exploit general properties of protocol classes such as link-state, distance-vector, source routing, and
geographic and beacon routing. Neither do these attacks rely on flooding the network with large amounts of data; they instead try to
transmit as little data as possible to achieve the largest energy drain, preventing a rate-limiting solution. Since Vampires use protocol-compliant
messages, these attacks are very difficult to detect and prevent.

1.2. Denial of Service Attack

An adversary can inject malicious information, alter legitimate routing setup messages, or otherwise prevent the routing
protocol from functioning correctly. For example, an attacker can forge messages to convince legitimate nodes to route packets
away from the correct destination. The Vampire attack is one of the resource depletion attacks; resource depletion attacks target the
nodes' battery life. Vampire attacks affect any protocol and exploit the properties of routing protocol classes such as
source routing, distance-vector and link-state, and geographic and beacon routing.

The Dynamic Source Routing (DSR) protocol is a stateless protocol: nodes do not store or maintain any routing information. The
source node specifies the entire route to a destination within the packet header, so intermediaries do not make
independent forwarding decisions, relying instead on the route specified by the source. An adversary composes packets with
knowingly established routing loops, sending packets in circles; this targets source routing protocols by exploiting the limited
verification of message headers at forwarding nodes, allowing a single packet to repeatedly traverse the same set of nodes. This
is called the carousel attack.
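To make the energy cost of the carousel attack concrete, the toy sketch below (purely illustrative; the node names and the unit-energy-per-hop model are assumptions, not from the paper) compares an honest source route with one that embeds a loop:

```python
from collections import Counter

def forwarding_energy(route):
    """Each hop costs one unit of transmit energy at the forwarding node;
    a looped source route charges the same nodes repeatedly."""
    per_node = Counter(route[:-1])   # every node except the destination forwards
    return sum(per_node.values()), per_node

honest = ["S", "A", "B", "C", "D"]                        # S -> D, 4 forwards
carousel = ["S", "A", "B", "A", "B", "A", "B", "C", "D"]  # loop A-B repeated

for name, route in [("honest", honest), ("carousel", carousel)]:
    total, per_node = forwarding_energy(route)
    print(f"{name}: {total} forwards, per node {dict(per_node)}")
```

The packet still reaches D in both cases, but in the looped route the intermediate nodes A and B are each charged several times for a single protocol-compliant message, which is the energy drain the carousel attack achieves.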

1.3. Contributions
This thesis makes three primary contributions. First, it thoroughly evaluates the vulnerabilities of existing protocols to
routing-layer battery depletion attacks. We observe that security measures to prevent Vampire attacks are orthogonal to those
used to protect routing infrastructure, so existing secure routing protocols do not protect against Vampire attacks. Existing
work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires
do not disrupt or alter discovered paths, instead using existing valid network paths and protocol-compliant messages.

Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot
optimize out malicious action. Second, it shows simulation results quantifying the performance of several representative
protocols in the presence of a single Vampire (insider adversary). Third, it modifies an existing sensor network routing protocol to
provably bound the damage from Vampire attacks during packet forwarding.


Vampire attacks are a new class of resource consumption attacks that use routing protocols to permanently disable ad hoc
wireless sensor networks by depleting nodes' battery power. These attacks do not depend on particular protocols or implementations,
but rather expose vulnerabilities in a number of popular protocol classes. We show a number of proof-of-concept attacks
against representative examples of existing routing protocols using a small number of weak adversaries, and measure their attack
success on a randomly generated topology of 30 nodes.

The proposed routing protocol provably bounds damage from Vampire attacks by verifying that
packets consistently make progress toward their destinations. Damage bounds and defenses for topology discovery, as well as
handling of mobile networks, are derived.

2. PROBLEM FORMULATION
2.1. Problem Definition
The first challenge in addressing Vampire attacks is defining them. DoS attacks in wired networks are frequently characterized by
amplification: an adversary can amplify the resources it spends on the attack, e.g., use 1 minute of its own CPU time to cause the victim
to use 10 minutes. However, consider the process of routing a packet in any multi-hop network: a source composes a packet and transmits it to
the next hop toward the destination, which transmits it further, until the destination is reached, consuming resources not only at the
source node but also at every node the message moves through.

If we consider the cumulative energy of an entire network, amplification attacks are always possible, given that an
adversary can compose and send messages which are processed by each node along the message path. So the act of sending a
message is in itself an act of amplification, leading to resource exhaustion, as long as the aggregate cost of routing a message (at the
intermediate nodes) is lower than the cost to the source to compose and transmit it. We must therefore drop amplification as our definition of
maliciousness and instead focus on the cumulative energy consumption increase that a malicious node can cause while sending the
same number of messages as an honest node.
We define a Vampire attack as the composition and transmission of a message that causes more energy to be consumed
by the network than if an honest node transmitted a message of identical size to the same destination, although using different packet
headers. We measure the strength of the attack by the ratio of network energy used in the malicious case to the energy used in the
benign case, i.e., the ratio of network-wide power utilization with malicious nodes present to energy usage with only honest nodes,
when the number and size of packets sent remains constant. Safety from Vampire attacks implies that this ratio is 1. Energy used by
malicious nodes is not considered, since they can always unilaterally drain their own batteries.
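
A minimal Python sketch of this attack-strength metric, assuming per-node energy totals are available, e.g., from a simulator log; the dictionary layout and names are illustrative only.

    def attack_strength(energy_malicious, energy_benign, adversary_ids):
        """Ratio of network-wide energy with the adversary present to energy
        in the all-honest run, excluding the adversary's own consumption."""
        spent = sum(e for node, e in energy_malicious.items()
                    if node not in adversary_ids)
        baseline = sum(e for node, e in energy_benign.items()
                       if node not in adversary_ids)
        return spent / baseline

    # Safety from Vampire attacks corresponds to a ratio of 1.0.
    print(attack_strength({"a": 4.0, "b": 4.0, "m": 9.9},
                          {"a": 1.0, "b": 1.0, "m": 1.0},
                          adversary_ids={"m"}))  # 4.0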

The stretch attack is more challenging to prevent; its success rests on the forwarding nodes not checking the route for
optimality. If we call the no-optimization case strict source routing, since the route is followed exactly as specified in the header,
we can define loose source routing, where intermediate nodes may replace part or all of the route in the packet header if they know of a better
route to the destination. This makes it necessary for nodes to discover and cache optimal routes to at least some fraction of other
nodes, partially defeating the as-needed discovery advantage.
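
A sketch of the loose source routing idea in Python, under the assumption that each node keeps a cache of known routes keyed by (current node, destination); all names are illustrative, not part of any specified protocol.

    def loose_forward(route, position, cache):
        """Replace the remainder of a source route when a shorter cached
        path to the destination is known (loose source routing sketch)."""
        here, dest = route[position], route[-1]
        remaining = route[position:]
        better = cache.get((here, dest))
        if better is not None and len(better) < len(remaining):
            # splice in the cached path; it must itself start at 'here'
            return route[:position] + better
        return route

    # Example: node 'C' knows a direct path to 'F', cutting out the stretch.
    print(loose_forward(["A", "B", "C", "D", "E", "F"], 2,
                        {("C", "F"): ["C", "F"]}))  # ['A', 'B', 'C', 'F']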
2.2. Main Objective
1. To extend the generic algorithm and implement the Weight Based Synchronization algorithm to find the winner slot to store
the packet data.

2. To extend the Weight Based Synchronization algorithm and implement the Future Peak Detection algorithm to avoid the
inflation attack which is made by sending false maximum weight among the nodes.
3. To extend the Future Peak Detection algorithm and implement the Randomized Future Peak Detection algorithm to
synchronize all the neighbor nodes by using all the slots.

2.3. System Methodology
This thesis presents a series of increasingly damaging Vampire attacks, evaluates the vulnerability of several
example protocols, and suggests how to improve resilience. In source routing protocols, we show how a malicious packet source can
specify paths through the network which are far longer than optimal, wasting energy at intermediate nodes that forward the packet
based on the included source route.

2.3.1 Carousel Attack and Stretch Attack
In routing schemes where forwarding decisions are made independently by each node (as opposed to specified by the
source), we show how directional antenna and wormhole attacks can be used to deliver packets to multiple remote network
positions, forcing packet processing at nodes that would not normally receive that packet at all, and thus increasing network-wide
energy expenditure. Lastly, we show how an adversary can target not only packet forwarding but also the route and topology
discovery phases: if discovery messages are flooded, an adversary can, for the cost of a single packet, consume energy at every node
in the network.

Fig 1.1 Malicious Route: Carousel Attack on Source Routing

In this first attack, an adversary composes packets with purposely introduced routing loops. We call it the carousel
attack, since it sends packets in circles, as shown in Fig. 1.1. It targets source routing protocols by exploiting the limited verification of
message headers at forwarding nodes, allowing a single packet to repeatedly traverse the same set of nodes. Brief mentions of this
attack can be found in other literature, but no intuition for defense or any evaluation is provided.


In the second attack, also targeting source routing, an adversary constructs artificially long routes, potentially traversing
every node in the network. We call this the stretch attack, since it increases packet path lengths, causing packets to be processed by a
number of nodes that is independent of the hop count along the shortest path between the adversary and the packet destination.


Fig. 1.2 Malicious Route: Stretch Attack on Source Routing

An example is illustrated in Fig. 1.2. Results show that in a randomly generated topology, a single attacker can use a carousel
attack to increase energy consumption by as much as a factor of 4, while stretch attacks increase energy usage by up to an order of
magnitude, depending on the position of the malicious node. The impact of these attacks can be further increased by combining them,
increasing the number of adversarial nodes in the network, or simply sending more packets. Although in networks that do not employ
authentication, or only use end-to-end authentication, adversaries are free to replace routes in any overheard packets, we assume that
only messages originated by adversaries may have maliciously composed routes.

2.3.2 CPMP Overview

The Content and Presence Multicast Protocol (CPMP) is designed to support social content consumption experiences. CPMP
provides a framework for periodically communicating information about what content is currently being consumed and what content
is being sought for future consumption at each participating node.

The goal is to make the protocol efficient and scalable while including features intended to support synchronization of
presence message transmissions. CPMP messages are transmitted periodically over IP multicast to inform nearby devices of updated
content presence information. CPMP headers have the format (CPMP; device identifier; TX), where the TX field
specifies the number of seconds in which to expect a new CPMP message from the node specified by the device identifier.
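
Assuming a semicolon-separated plain-text encoding of the three fields named above (the prose does not fix a wire format, so this is only one plausible reading), a Python sketch of the header handling might look like the following.

    def encode_cpmp(device_id, tx_seconds):
        """Build a CPMP presence header: (CPMP; device identifier; TX)."""
        return f"CPMP;{device_id};{tx_seconds}"

    def decode_cpmp(header):
        """Parse a CPMP header, returning (device identifier, TX seconds)."""
        tag, device_id, tx = header.split(";")
        if tag != "CPMP":
            raise ValueError("not a CPMP header")
        return device_id, int(tx)

    # A receiver learns to expect node-17's next update in 30 seconds.
    print(decode_cpmp(encode_cpmp("node-17", 30)))  # ('node-17', 30)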
2.3.3 WBS: Weight Based Synchronization Algorithm

We first describe an algorithm that uses the size of synchronization clusters as a catalyst for synchronization; the algorithm
is called WBS (weight-based synchronization). As mentioned previously, at the end of each active interval, a node uses the slotArray
structure to decide its next transmission time. The slotArray structure has s entries, one for each slot of the next (sleep) interval.


WBS requires each node to locally maintain a variable monitoring the size of the synchronization cluster which contains
the node. This variable is called the weight of the node/cluster. Initially, the weight of each node is 1. Each node includes its weight in
all its CPMP updates. Naturally, nodes cannot maintain globally accurate weights; instead, each node uses only local
knowledge, extracted from packets received from neighbors, to update the value of this variable.

1.  Object implementation WBS extends GENERIC;
2.  maxW : int;   # max weight over active interval
3.  weight : int; # weight advertised in CPMP packets
4.  Operation initState()
5.    for (i := 0; i < s; i++) do
6.      slotArray[i] := new pkt[]; od
7.  end
8.  Operation setTX()
      # compute the maxW value
9.    maxW := 0;
10.   for (i := 0; i < s; i++) do
11.     for (j := 0; j < slotArray[i].size(); j++) do
12.       if (slotArray[i][j].weight > maxW) then
13.         winnerSlot := i;
14.         maxW := slotArray[i][j].weight; fi
15.   od od
      # determine new TX and weight values
16.   if (winnerSlot != nextSendCPMP % ta) then
17.     TX := winnerSlot;
18.     nextSendCPMP := tcurr + TX;
19.     weight := maxW + 1;
20.   else
21.     weight := maxW;
22.   fi
23. end
24. Operation processPackets(tcurr : int)
25.   pktList := inq.getAllPackets(slotLen);
26.   for (i := 0; i < pktList.size(); i++) do
27.     index := ((tcurr + pktList[i].TX) mod ta) / ts;
28.     slotArray[index].add(pktList[i]);
29.   od
30. end
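
For concreteness, here is a hedged Python sketch of the WBS slot-selection step above; the packet binning, the field names, and the (weight, TX) pair representation are assumptions for illustration, not the paper's implementation.

    class WBSNode:
        """Sketch of the WBS slot-selection step. Each entry of slot_array
        holds (sender_weight, sender_tx) pairs already binned per slot of
        the next sleep interval, as processPackets does above."""

        def __init__(self, num_slots):
            self.s = num_slots
            self.weight = 1                          # every node starts alone
            self.slot_array = [[] for _ in range(num_slots)]

        def set_tx(self, current_slot):
            # find the slot advertising the largest cluster weight
            max_w, winner = 0, current_slot
            for i, pkts in enumerate(self.slot_array):
                for sender_weight, _tx in pkts:
                    if sender_weight > max_w:
                        max_w, winner = sender_weight, i
            if winner != current_slot:
                self.weight = max_w + 1              # join the heavier cluster
            else:
                self.weight = max_w                  # already aligned with it
            self.slot_array = [[] for _ in range(self.s)]  # reset for next interval
            return winner                            # slot of the next transmission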

2.3.4 Future Peak Detection Algorithm


The future peak detection (FPD) algorithm is proposed to address the inflation attack. Instead of relying on subjective information
(the weight value contained in CPMP updates), FPD allows nodes to build a local approximation of this metric using only objective
information derived from observation: the times at which updates are received. FPD works by counting the number of packets that are stored in
each slot of the current active interval.

1.  Object implementation FPD extends WBS;
2.  maxC : int; # max nr. of packets per slot
3.  Operation setTX()
      # compute the maxC value
4.    maxC := 0;
5.    for (i := 0; i < s; i++) do
6.      if (slotArray[i].size() > maxC) then
7.        maxC := slotArray[i].size();
8.        winnerSlot := i; fi
9.    od
      # update the TX value
10.   if (winnerSlot != nextSendCPMP % ta) then
11.     TX := winnerSlot;
12.     nextSendCPMP := tcurr + TX;
13.   fi
14. end

3. SYSTEM DESIGN
3.1 Module Design
The thesis contains the following modules.

1. Network Nodes Creation
Node details are added using this module. Each node record contains details such as the node id, periodic update
information, the next transmission time, and the time during which it can be in sleeping mode. In addition, neighbor node details
are also added, in which the start node id, end node id, and distance are saved.
2. Carousel Attack
a. Scenario
In this module, the path is created such that the intermediate nodes modify the path information. One of the nodes is set as
malicious, and it alters the path information such that the packet travels again through the partially visited path, so a cycle/loop occurs
in the transmission.
b. Prevention
In this module, after the loop occurs, an attestation-based scheme is applied so that the packet carries a signature
which prevents the loop in the transmission.


3. Stretch Attack
a. Scenario
In this module, the path is created such that the intermediate nodes modify the path information. One of the nodes is set as
malicious, and it alters the path information such that it creates a new multi-hop partial path to reach its neighbor, so that the packet's
hop distance to the destination is increased.
b. Prevention
In this module, after the stretched path scenario is detected, loose source routing is applied so that all the nodes are made to
find the alternate shortest available path to reach the destination. Thus the stretch attack is prevented.

4. Transmission Schedule Fixing
Here we use the Content and Presence Multicast Protocol (CPMP), in which nodes send updates to their neighbors. The
updates contain the relative time of their sender's next transmission.

5. Node Synchronization
In this module the Future Peak Detection (FPD) algorithm is proposed. Nodes running FPD use CPMP updates to sync with their
largest set of already synced neighbors, counting the number of packets received within a given interval and setting the node's next
transmission to be in sync with the slot where most packets have been received. While lightweight and efficient, FPD's greedy
strategy clusters the network: nodes reach a stable state without being synchronized with all their neighbors.
We address this issue using randomization. The Randomized Future Peak Detection (RFPD) algorithm is similar to FPD but
uses a weighted probabilistic strategy to decide a node's next transmission time based on the packets received, as sketched below.
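
A minimal Python sketch of such a weighted probabilistic choice, assuming the node has already binned received packets into per-slot counts; the exact weighting used by RFPD is not specified here, so proportional weights are an assumption.

    import random

    def rfpd_next_slot(slot_counts):
        """Randomized FPD sketch: choose the next transmission slot with
        probability proportional to the number of packets observed in each
        slot, instead of greedily picking the fullest slot as FPD does."""
        total = sum(slot_counts)
        if total == 0:
            return random.randrange(len(slot_counts))  # no information yet
        return random.choices(range(len(slot_counts)),
                              weights=slot_counts, k=1)[0]

    print(rfpd_next_slot([0, 3, 1, 6]))  # slot 3 is most likely, not certain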

6. Suspicious Node Detection Based On Neighbor Nodes Behavior During Previous Transmissions
This module uses the future peak detection algorithm to address the inflation attack. Instead of relying on subjective
information (the weight value contained in CPMP updates), FPD allows nodes to build a local approximation of this metric using only
objective information derived from observation: the times at which updates are received. FPD works by counting the number of packets that are
stored in each slot of the current active interval. Note that each packet received during the current active interval is stored in the slot
corresponding to the packet sender's next transmission time. RFPD nodes do not propagate information (e.g., cluster sizes), thus
preventing nodes from spreading inaccurate data, and so suspicious nodes are avoided.


4. RESULT AND DISCUSSION
4.1. Experimental Analysis for the Proposed System
The following Table 4.1 shows the experimental results for the existing system: the energy usage under the carousel and
stretch attacks in sensor node detection.




Table 4.1 Energy Usage with Various Attacks for Existing System

S. No Carousel Attack Stretch Attack
1 0.03 0.06
2 0.07 0.14
3 0.12 0.21
4 0.23 0.27
5 0.25 0.34
6 0.29 0.39
7 0.33 0.42
8 0.45 0.51
9 0.53 0.62
10 0.63 0.69



The following Fig 4.1 plots the experimental results for the existing system: the energy usage under the carousel and
stretch attacks in sensor node detection.


Fig 4.1 Energy Usage with Various Attacks for Existing System

The following Table 4.2 shows the experimental results for the proposed system: the energy usage under the carousel and
stretch attacks in sensor node detection.

[Chart: fraction of total nodes vs. fraction of node energy consumed, for the carousel and stretch attacks (existing system).]

Table 4.2 Energy Usage with Various Attacks for Proposed System

S. No Carousel attack Stretch attack
1 0.08 0.09
2 0.16 0.18
3 0.25 0.28
4 0.34 0.36
5 0.42 0.46
6 0.52 0.54
7 0.61 0.63
8 0.73 0.76
9 0.79 0.81
10 0.83 0.85


The following Fig 4.2 plots the experimental results for the proposed system: the energy usage under the carousel and
stretch attacks in sensor node detection.





The following Table 4.3 presents the performance analysis for the existing and proposed systems: the sets of
synchronized nodes, the total number of nodes sending packets, and the average performance percentages in sensor
node detection.


Fig 4.2 Energy Usage with Various Attacks for Proposed System
[Chart: fraction of total nodes vs. fraction of node energy consumed, for the carousel and stretch attacks (proposed system).]
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

481 www.ijergs.org

Table 4.3 Experimental Performances for Existing and Proposed Systems

Set of Secure Synchronization Nodes      Total Number of Nodes   Existing System (%)   Proposed System (%)
{N1, N3, N6, N11}                        4                       52.34                 55.22
{N2, N3, N4, N12, N13}                   5                       63.33                 65.21
{N2, N5, N141, N8, N14, N12}             6                       74.12                 75.33
{N1, N8, N5, N13, N42, N5, N13}          7                       83.11                 84.67
{N1, N5, N3, N8, N12, N16, N20}          8                       83.11                 84.78
{N7, N8, N10, N12, N20}                  5                       66.44                 68.36
{N1, N3, N6, N11}                        4                       57.33                 60.11
{N17, N28, N20, N2, N1}                  5                       67.22                 70.36
{N5, N8, N12, N15, N20}                  5                       69.22                 72.45
{N4, N15, N13, N18, N12, N1}             6                       78.22                 80.22

The following Fig 4.3 plots the performance analysis for the existing and proposed systems: the sets of synchronized nodes,
the total number of nodes sending packets, and the average performance percentages in sensor node detection.

Fig 4.3 Experimental Performances for Existing and Proposed Systems
[Chart: performance (%) of the existing and proposed systems, 0 to 100 on the y-axis, over the ten sets of secure synchronization nodes on the x-axis.]


5. CONCLUSION AND FUTURE ENHANCEMENTS

The central question addressed is how to effectively exploit secondary user cooperation when the conventional cooperation
method becomes inefficient. FLEC, a flexible channel cooperation design, is proposed to allow SUs to customize the use of leased
resources in order to maximize performance.

The problem of synchronizing the periodic transmissions of nodes in ad hoc networks, in order to enable battery lifetime
extensions without missing neighbors' updates, is studied. Several solutions are proposed that are lightweight and scalable, but
vulnerable to attacks.

The generic algorithm is extended to use transmission stability as a metric for synchronization. The implementation and
simulations show that the protocols are computationally inexpensive, provide significant battery savings, are scalable, and efficiently
defend against attacks.

The application works well for the given tasks in a Windows environment. Any node with the .NET Framework installed can execute the
application. The underlying mechanism can be extended to any/all kinds of web servers, and even to other platforms such as Linux and
Solaris.

The system eliminates the difficulties in the existing system. It is developed in a user-friendly manner. The system is very
fast, and any transaction can be viewed or retaken at any level.


An Enhanced Source Anonymity Method Framework for Sensor Networks with Replica
Detection Using Hypothesis Testing

K. Poongodi MCA. (1), S. Nivas MCA., M.Phil., Ph.D. (2)
(1) Research Scholar, Bharathiar University, Coimbatore; (2) Head of the Dept. of CS
(1, 2) Dept. of Computer Science, Maharaja Co-Education Arts and Science College, Perundurai, Erode 638052
Email: poongodi.kandhasamy@gmail.com, nivasmaharaja@gmail.com

ABSTRACT: In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is,
unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source
anonymity problem, this problem has emerged as an important topic in the security of wireless sensor networks, with a variety of
techniques based on different adversarial assumptions being proposed. This thesis presents a new framework for modeling, analyzing,
and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of
interval indistinguishability and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps
source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. The thesis shows how mapping
source anonymity to binary hypothesis testing with nuisance parameters leads to converting the problem of exposing private source
information into searching for an appropriate data transformation that removes or minimizes the effect of the nuisance information. By
doing so, it transforms the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory
to be incorporated into the study of anonymous sensor networks. In addition, to mitigate the limitations of previous schemes, the thesis
proposes a zone-based node compromise detection scheme in sensor networks. The main idea of the proposed scheme is to use
sequential hypothesis testing to detect suspect regions in which compromised nodes are likely placed. A fast and effective mobile
replica node detection scheme is proposed using the Sequential Probability Ratio Test.
Keywords: Source Anonymity, Evaluating Anonymity, Sensor Network, Hypothesis Testing, Replica, Traffic, Sequential Probability
Ratio Test.
1. INTRODUCTION
1.1 Sensor Networks
Sensor networks are deployed to sense, monitor, and report events of interest in a wide range of applications including, but
not limited to, military, health care, and animal tracking. In many applications, such monitoring networks consist of energy-
constrained nodes that are expected to operate over an extended period of time, making energy-efficient monitoring an important
feature for unattended networks. In such scenarios, nodes are designed to transmit information only when a relevant event is detected
(i.e., event-triggered transmission).

Consequently, given the location of an event-triggered node, the location of a real event reported by the node can be
approximated within the node's sensing range. In the example depicted in Fig. 1.1, the locations of the combat vehicle at different
time intervals can be revealed to an adversary observing nodes' transmissions. There are three parameters that can be associated with
an event detected and reported by a sensor node: the description of the event, the time of the event, and the location of the event.



Figure 1.1 A sensor network deployed in a battlefield. Only nodes in close proximity to the combat vehicle are broadcasting
information, while other nodes are in sleep mode.

When sensor networks are deployed in untrustworthy environments, protecting the privacy of the three parameters that can be
attributed to an event-triggered transmission becomes an important security feature in the design of wireless sensor networks. While
transmitting the description of a sensed event in a private manner can be achieved via encryption primitives, hiding the timing and
spatial information of reported events cannot be achieved via cryptographic means.

Encrypting a message before transmission, for instance, can hide the contents of the message from unauthorized observers, but
the mere existence of the ciphertext is indicative of information transmission. The source anonymity problem in wireless sensor
networks is the problem of studying techniques that provide time and location privacy for events reported by sensor nodes.

1.2 Source Anonymity Problem

In the existing literature, the source anonymity problem has been addressed under two different types of adversaries, namely,
local and global adversaries. A local adversary is defined to be an adversary having limited mobility and partial view of the network
traffic. Routing-based techniques have been shown to be effective in hiding the locations of reported events against local adversaries.

A global adversary is defined to be an adversary with the ability to monitor the traffic of the entire network (e.g., coordinating
adversaries spatially distributed over the network). Against global adversaries, routing-based techniques are known to be ineffective in
concealing location information in event-triggered transmission. This is due to the fact that, since a global adversary has a full spatial
view of the network, it can immediately detect the origin and time of the event-triggered transmission.

The first step toward achieving source anonymity for sensor networks in the presence of global adversaries is to refrain from
event-triggered transmissions. To do that, nodes are required to transmit fake messages even if there is no detection of events of
interest. When a real event occurs, its report can be embedded within the transmissions of fake messages. Thus, given an individual
transmission, an observer cannot determine whether it is fake or real with a probability significantly higher than 1/2, assuming
messages are encrypted.
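
To make the embedding concrete, the following Python sketch simulates one plausible variant with exponentially distributed inter-transmission times; the distribution choice and all names are assumptions for illustration only, not the scheme analyzed in this thesis.

    import random

    def schedule_transmissions(real_event_times, mean_interval, horizon):
        """Sketch: a node transmits (encrypted, fixed-size) messages with
        exponentially distributed gaps; a pending real event simply replaces
        the payload of the next scheduled transmission, so an observer sees
        the same traffic pattern either way."""
        pending = sorted(real_event_times)
        t, log = 0.0, []
        while t < horizon:
            t += random.expovariate(1.0 / mean_interval)
            if pending and pending[0] <= t:
                log.append((t, "real"))   # real report rides a scheduled slot
                pending.pop(0)
            else:
                log.append((t, "fake"))
        return log

    print(schedule_transmissions([5.0, 22.0], mean_interval=10.0, horizon=60.0))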



Figure 1.2. Different approaches for embedding the report of real events within a series of fake transmissions: (a)
shows the prespecified distribution of fake transmissions, (b) illustrates how real events are transmitted as soon as they are
detected, (c) illustrates how nodes report real events instead of the next scheduled fake message.

1.3 Probabilistic Distribution

In the above approach, there is an implicit assumption of the use of a probabilistic distribution to schedule the transmission of
fake messages. However, the arrival distribution of real events is, in general, time-variant and unknown a priori. If nodes report real
events as soon as they are detected (independently of the distribution of fake transmissions), then given knowledge of the fake
transmission distribution, statistical analysis can be used to identify outliers (real transmissions) with a probability higher than 1/2, as
illustrated in Fig. 1.2(b). In other words, transmitting real events as soon as they are detected does not provide source anonymity against
statistical adversaries analyzing a series of fake and real transmissions.

One way to mitigate the above statistical analysis is illustrated in Fig. 1.2(c). As opposed to transmitting real events as they
occur, they can be transmitted instead of the next scheduled fake one. For example, consider programming sensor nodes to
deterministically transmit a fake message every minute. If a real event occurs within a minute from the last transmission, its report
must be delayed until exactly 1 minute has elapsed.
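
The half-minute average delay quoted below follows from the event arrival time being uniform within the one-minute period; a short derivation (our own, added for clarity) in LaTeX:

    % Expected reporting delay under a deterministic schedule of period \tau,
    % assuming a real event arrives uniformly at random within the period:
    \mathbb{E}[\mathrm{delay}] \;=\; \int_{0}^{\tau} \frac{\tau - t}{\tau}\, dt
    \;=\; \frac{\tau}{2}
    \qquad \text{(for } \tau = 1 \text{ min, the mean delay is } 30 \text{ s).}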


This approach, however, introduces additional delay before a real event is reported (in the above example, the average delay
of transmitting real events is half a minute). When real events carry time-sensitive information, such delays might be unacceptable.
Reducing the delay of transmitting real events by adopting a more frequent scheduling algorithm is impractical for most sensor
network applications, since sensor nodes are battery powered and, in many applications, non-rechargeable. Therefore, a frequent
transmission schedule will drastically reduce the desired lifetime of the sensor network.

1.4 Main Contributions
The main contributions of this thesis are:
- The notion of interval indistinguishability is introduced, and it is illustrated how the problem of statistical source
anonymity (SSA) can be mapped to the problem of interval indistinguishability.
- A quantitative measure is proposed to evaluate statistical source anonymity in sensor networks.
- The problem of breaching source anonymity is mapped to the statistical problem of binary hypothesis testing with
nuisance parameters.
- The significance of mapping the problem at hand to a well-studied problem is demonstrated in uncovering hidden
vulnerabilities. In particular, realizing that the SSA problem can be mapped to hypothesis testing with nuisance
parameters implies that breaching source anonymity can be converted to finding an appropriate data transformation that
removes the nuisance information.
- Existing solutions are analyzed under the proposed model. By finding a transformation of the observed data, the problem is
converted from analyzing real-valued samples to binary codes, and a possible anonymity breach is identified in the
current solutions to the SSA problem.

2. PROBLEM FORMULATION

2.1 Main Objectives

- To detect multiple adversaries using the same identity
- To localize the hacker node
- Cluster-based victim node detection in the network
- To detect the presence of spoofing attacks
- To determine the number of attackers
- To localize multiple adversaries and eliminate them
- To create a mobile node network
- To make mobile movement (random walk) within a given speed
- To update location information to neighbors
- To update the location information of all nodes at the base station
- To simulate a replica node attack
- To make the base station identify the mobile replication attack




2.2 Specific Objectives

- A fast and effective mobile replica node detection scheme is proposed using the Sequential Probability Ratio Test.
- To tackle the problem of spoofing attacks in mobile sensor networks.
- The scheme detects mobile replicas in an efficient and robust manner at the cost of reasonable overheads.
- To determine the number of attackers when multiple adversaries masquerade as the same node identity.

2.3. System Methodology
2.3.1. Proposed Framework for SSA
In this section, we introduce our source anonymity model for wireless sensor networks. Intuitively, anonymity should be
measured by the amount of information about the occurrence time and location of reported events that an adversary can extract by
monitoring the sensor network. The challenge, however, is to come up with an appropriate model that captures all possible sources of
information leakage and a proper way of quantifying anonymity in different systems.

2.3.2. Statistical Goodness of Fit Tests and the SSA Problem
2.3.2.1. SSA Solutions Based on Statistical Goodness of Fit Tests
The statistical goodness of fit of observed data describes how well the data fits a given statistical model. Measures of
goodness of fit typically summarize the discrepancy between observed values and the values expected under the statistical model in
question. Such measures can be used, for example, to test for normality of residuals, to test whether two samples are drawn from
identical distributions, or to test whether outcome frequencies follow a specified distribution.

2.3.2.2. Statistical Goodness of Fit under Interval Indistinguishability
In this section, we analyze statistical goodness-of-fit-based solutions under the proposed model of interval
indistinguishability. As before, let Xi be the random variable representing the time between the i-th and the (i+1)-st transmissions, and let
the desired mean of these random variables be μ; i.e., E[Xi] = μ for all i (since the Xi's are i.i.d.). We now examine two intervals, a
fake interval and a real one.
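
As one concrete instance of such a test, the following Python sketch applies a Kolmogorov-Smirnov goodness-of-fit check to observed inter-transmission times; the exponential choice for the designed distribution of the Xi's is our assumption for illustration, not a detail fixed by the thesis.

    from scipy import stats

    def fits_design_distribution(intervals, mu, alpha=0.05):
        """Kolmogorov-Smirnov goodness-of-fit check of observed
        inter-transmission times against an assumed exponential design
        distribution with mean mu; returns True when the sample is
        statistically consistent with the design at level alpha."""
        _, p_value = stats.kstest(intervals, "expon", args=(0, mu))
        return p_value > alpha

    # An interval whose gaps pass the test is indistinguishable, by this
    # test alone, from a purely fake interval drawn from the design law.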

2.3.3 Sequential Probability Ratio Test

The enhanced Sequential Probability Ratio Test (SPRT) is a statistical hypothesis test. The SPRT has been proven to be
the best mechanism, in terms of the average number of observations required to reach a decision, among all sequential and non-
sequential test processes. The SPRT can be thought of as a one-dimensional random walk with lower and upper limits. Before the random
walk starts, the null and alternate hypotheses are defined in such a way that the null hypothesis is associated with the lower limit and the alternate
hypothesis with the upper limit. The random walk starts from a point between the two limits and moves toward the lower or upper
limit in accordance with each observation.

Algorithm process for enhanced SPRT:
DECLARATION: n = 0, wn = 0
INPUT: location information L and time information T
OUTPUT: accept the hypothesis H0 or H1

curr_loc = L
curr_time = T
if n > 0 then
    compute T0(n) and T1(n)
    compute speed v from curr_loc and prev_loc, curr_time and prev_time
    if v > Vmax then
        wn = wn + 1
    end if
    if wn >= T1(n) then
        accept the hypothesis H1 and terminate the test
    end if
    if wn <= T0(n) then
        initialize n and wn to 0, accept the hypothesis H0, and return
    end if
end if
n = n + 1
prev_loc = curr_loc
prev_time = curr_time
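
For illustration, the thresholds T0(n) and T1(n) can be derived from Wald's classical SPRT for Bernoulli observations; the Python sketch below folds them into a log-likelihood ratio test. The concrete violation probabilities and error rates are placeholders, not values from the thesis.

    import math

    def sprt_step(n, w, lam0=0.1, lam1=0.9, alpha=0.01, beta=0.01):
        """One decision step of Wald's SPRT after n observed speed samples,
        w of which exceeded Vmax. lam0/lam1 are the assumed per-sample
        violation probabilities under H0 (benign) and H1 (replica);
        alpha/beta are the target false positive/negative rates."""
        llr = (w * math.log(lam1 / lam0)
               + (n - w) * math.log((1 - lam1) / (1 - lam0)))
        if llr >= math.log((1 - beta) / alpha):
            return "accept H1: node replicated"
        if llr <= math.log(beta / (1 - alpha)):
            return "accept H0: node benign, restart test"
        return "continue sampling"

    print(sprt_step(n=4, w=4))  # repeated violations push toward H1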

Algorithm Steps
1) Create a network of n nodes and save the information in the database table.
2) Draw the network with the available node information.
3) A random walk procedure is worked out so that each node's mobility is simulated by moving its location by up to n
pixels (the given speed) in the x and y directions. For example, if the speed is given as 10 units, then a random
value below 10 is chosen, and the node is moved in the x or y direction. This is carried out for all nodes. For the simulation, the
timer is set to 5 seconds, so every 5 seconds all the nodes are moved within the given speed, horizontally or
vertically.
4) The nodes send their locations to their neighbor nodes. A node is treated as a neighbor of another if it is within the
given pixel units. For example, if the unit is given as 50, then a node at x value 150 and another node at x value 180 are
treated as neighbor nodes. The same applies to the y axis. So when two nodes fall inside a rectangular area of
50 units per side, they are treated as neighbor nodes.
5) The nodes update their location information once every 10 seconds. During the animation, arrow lines are drawn
from all nodes to the base station, which is located at the bottom-left corner of the drawing space
in the form.
6) Replica Attack: When a button is clicked, one node is chosen randomly to behave as the attacker node and another is
chosen randomly to behave as the affected node. The attacker node, though it sends its own current location information,
sends the id of the affected node. So the base station receives updates with two ids in a single update round. Now, the base
station needs to identify which node is correct and which is the attacker.
7) If two nodes send the same id, then the base station collects the previous location information for that id. One of the
entries will have a wrong previous location. At the same time, the neighbor nodes' location data is also used: the
affected node's neighbors report the correct location of the suspected id, whereas the attacker node's neighbors report a
wrong location, and so the attacker node is identified.
8) Then the node is revoked from the network.

Techniques to Detect Compromised Nodes In Zones

Reputation-based trust management schemes do not stop compromised nodes from performing malicious activities in the network. Also,
the existing schemes based on software attestation require each sensor to be attested periodically, because it cannot be predicted when
an attacker compromises sensors. The periodic attestation of individual nodes incurs large computation and
communication overhead.

To mitigate the limitations of both approaches, a zone-based node compromise detection scheme is proposed, which
facilitates node compromise detection and revocation by leveraging zone trust information. Specifically, the network is divided into
a set of zones; trust is established per zone, and untrustworthy zones are detected in accordance with their zone trust values.

Once a zone is determined to be untrustworthy, the network operator attests the software modules of all sensors in the
untrustworthy zone, and detects and revokes compromised nodes in that zone.

A straightforward approach for untrustworthy zone detection is to declare a zone untrustworthy on a single piece of
evidence that its trust value is less than a predefined threshold. However, this approach does not consider zone trust measurement
error. Due to errors in the zone trust measurement, a trustworthy (resp. untrustworthy) zone could be detected as
untrustworthy (resp. trustworthy).

To minimize these false positives and negatives, a decision needs to be made with multiple pieces of evidence rather than a
single one. To meet this need, the Sequential Probability Ratio Test (SPRT) is used, a statistical decision process
that makes a decision with multiple pieces of evidence. The SPRT is beneficial in the sense that it reaches a decision with a
small number of pieces of evidence while achieving low false positive and negative rates. The SPRT can be thought of as a one-
dimensional random walk with lower and upper limits.

The SPRT is well suited for tackling the compromised node detection problem in the sense that a random
walk with two limits can be constructed in such a way that each step is determined by the trust value of a zone; the lower and
upper limits are properly configured to be associated with the excess and shortfall of a predefined trust threshold, respectively.
Protocol Operation
The proposed protocol to find the compromised zones proceeds in three phases:


1) Phase I:
Zone Discovery and Trust Aggregator Selection: After deployment, every sensor node u finds out its location and determines
the zone to which it belongs. This zone is called the home zone; from u's point of view, other zones are called foreign zones.
Node u discovers every other node residing in the same zone. After the zone discovery process, the Trust Aggregator (TA) is selected
in a round-robin manner. Specifically, the time domain of a zone is partitioned into time slots. An initial duty time slot is assigned to
each node u in the zone according to the ascending order of the nodes' IDs. Each node u then acts as trust aggregator every S time
slots starting from its initial duty time slot, where S is the number of nodes residing in the zone.

2) Phase II:
Trust Formation and Forwarding: For each time slot Ti, each node u in zone Z computes a neighborhood trust that is defined
in accordance with the difference between the probability distributions of the information generated by u and the information sent to u
by u's neighboring nodes in zone Z.

3) Phase III:
Detection and Revocation: Upon receiving a zone-trust report from a TA in zone Z, the base station verifies the authenticity
of the TA's report with the secret key shared between the TA and itself, and discards the report if it is not authentic. The base station also
maintains a record per TA associating each TA's ID with its home zone. This prevents compromised TAs from claiming multiple
home zones.

3. SYSTEM DESIGN

3.1. Module Description
The following modules are present in the thesis
Real interval identification using interval indistinguishability
Fake interval
Mobile node network creation.
Mobile movement (random walk) within given speed.
Update location information to its neighbors.
Base station updates location information of all nodes.
Replicate node.
Base station identifies the mobile replication attack.

1. Real Interval Identification Using Interval Indistinguishability
In this module, sender node C chooses two intervals IR and IF, in which IR is a real interval and IF is a fake one. C draws a
bit b in {0, 1} uniformly at random and sets IR = Ib and IF = Ib*, where b* denotes the binary complement of b. C gives Ib and Ib* to
receiver A. A performs any statistical test of her choice on the two intervals and outputs a bit b'. If b' = b, A wins the game.
2. Fake Interval

In the absence of real events, nodes are programmed to transmit fake messages according to a pre-specified probability
distribution, and they maintain a sliding window of inter-transmission times. When a real event occurs, it is transmitted as soon as
possible under the condition that the samples in the sliding window maintain the designed distribution.

3. Mobile Node Network Creation
In this module, a form is generated which contains a text box to get the node id, and the id is saved into the Nodes table. During
network creation, the nodes are displayed with their ids at random X and Y positions. The base station node need not be displayed, as it
programmatically listens for and updates the location information of all the nodes while they are in movement.

4. Mobile Movement (Random Walk) Within Given Speed
In this module, all the nodes roam in arbitrary directions; a walk is performed by incrementing the x-axis or y-axis position (or both) at a
movement by any number of pixels within the specified maximum limit. In a practical situation, the nodes move within their
physical capabilities. For the sake of convenience, if a node reaches the picture box limit, it moves in the opposite direction, so that
all nodes roam within the rectangular boundary of the picture box control.

5. Update Location Information to Its Neighbors
In this module, all the nodes calculate their neighbor nodes using their transmission range (specified in n units, common
to all nodes; that is, all the sensor nodes have homogeneous transmission ranges). Each node then gives its location information,
i.e., its position, to all of its neighbors. This occurs for all the nodes at regular intervals. A timer control is provided, and time is
considered in a global aspect, with all the nodes sharing the same time values.

6. Base Station Updates Location Information Of All Nodes
In this module, the base station collects the location information from all nodes. This occurs for all the nodes at regular
intervals. It is assumed that no two nodes are in the same location, since each node's purpose is to serve a specific area individually.

7. Replicate Node
In this module, a node updates its location information to the base station using the id of one of the remaining nodes; that is, it
replicates some other node. As a result, at a given time, both nodes send location information under the same id to the base station,
of which one is true and the other is false.

8. Base Station Identifies the Mobile Replication Attack
This module presents the details of the technique to detect replica node attacks in mobile sensor networks. In static sensor
networks, a sensor node is regarded as being replicated if it is placed in more than one location. If nodes are moving around the
network, however, this technique does not work, because a benign mobile node would be treated as a replica due to its continuous
change in location.

The base station computes the speed from every two consecutive claims of a mobile node and performs the SPRT by
considering speed as an observed sample. Each time the mobile node's speed exceeds (respectively, remains below) Vmax, the
random walk is pushed toward the upper (respectively, lower) limit, leading the base station to accept the alternate
(respectively, null) hypothesis that the mobile node has (respectively, has not) been replicated. Once the base station decides that a
mobile node has been replicated, it revokes the replica nodes from the network.


4. RESULT AND DISCUSSION

Given the training data collected during the offline training phase, we can further improve the performance of determining
the number of spoofing attackers. In addition, several statistical methods are available to detect the number of attackers, such as
System Evolution, and the characteristics of these methods can be combined to achieve a higher detection rate.

In this section, we explore using hypothesis testing to classify the number of spoofing attackers. The advantage of using
hypothesis testing is that it can combine the intermediate results (i.e., features) from different statistical methods to build a model,
based on training data, that accurately predicts the number of attackers.

Hit Rate (%)    Hit Rate, Existing System (%)    Hit Rate, Proposed System (%)
75 80 90
75 80 75
85 90 85
80 90 90
75 70 75
65 75 80

The training data set can be obtained through regular network monitoring activities. Given a training set of instance-label
pairs, the support vector machine requires the solution of a constrained optimization problem.

[Chart: error rate for the existing system vs. the proposed system over five test runs.]

[Chart: hit rate (%) for the existing and proposed systems with 2, 3, and 4 attackers.]

5. CONCLUSION AND FUTURE ENHANCEMENTS
This thesis proposed to use received signal strength (RSS)-based spatial correlation, a physical property associated with each
wireless device that is hard to falsify and not reliant on cryptography, as the basis for detecting spoofing attacks in wireless networks.
It provided theoretical analysis of using the spatial correlation of RSS inherited from wireless nodes for attack detection. It derived the
test statistic based on cluster analysis of RSS readings. The approach can both detect the presence of attacks and determine the
number of adversaries spoofing the same node identity, so that any number of attackers can be localized and eliminated.
In addition, a zone-based node compromise detection scheme is proposed using the Chronological Likelihood Fraction Test
(CLFT). Furthermore, several possible attacks against the proposed scheme are described, and countermeasures against
these attacks are proposed. The scheme is evaluated in simulation under various scenarios. The experimental results show that the scheme quickly
detects untrustworthy zones with a small number of zone-trust reports.


The Effect of Engine Temperature on Multi Cylinder SI Engine
Performance with Gasoline as a Fuel

Sunil Choudhary (1), A.C. Tiwari (2), Ajay Vardhan (3), Arvind Kaushal (4)
(1, 2, 3) University Institute of Technology-RGPV, Bhopal (M.P.) 462036
(4) IGEC, Sagar (M.P.) 470004
Email (corresponding author): a_v1986@rediffmail.com
Contact No. 07566476384

Abstract- This country is among the tropical countries where the temperature varies over a very wide range. Looking at
this widely varying temperature range, it is very difficult to say which temperature is best suited as an operating condition for engines
and gives the best performance level as far as thermal efficiency and brake power are concerned. This work tries to investigate the
best option for running the S.I. engine. The development of engines, with the complexity of in-cylinder processes, requires modern
development tools to exploit the full potential in order to reduce fuel consumption. A three-cylinder, four-stroke, petrol carburetor
Maruti 800 engine connected to an eddy-current type dynamometer for loading was adopted to study engine power. The performance
results that are reported include brake power and specific fuel consumption (SFC) as a function of engine temperature, i.e., 50, 60, 70,
80, and 90 °C, with varying engine speed of 1500, 1800, 2100 and 2400 rpm. Increasing the temperature can have the
multiple advantages of reducing the specific fuel consumption, while on the other hand a low head temperature has a good impact in
reducing the thermal stress of the top portion, reducing the chance of knocking and pre-ignition, and increasing the volumetric efficiency. It
is an indisputable conclusion that lower-speed engines and large-capacity engines, which are usually of low-speed design, are more efficient
than high-speed engines.

Keywords- Thermal Efficiency, S.I. Engine, Fuel, Engine Temperature, Four Stroke, Eddy Current, RPM
INTRODUCTION
There are two types of internal combustion engines: the spark ignition (SI) and the compression ignition (CI). Both have their merits. The SI engine is a rather simple product and hence has a lower first cost. The problem with the SI engine is its poor part-load efficiency, due to large losses during gas exchange and low combustion and thermodynamic efficiency.
Increasing the liner temperature can have the multiple advantages of reducing the specific fuel consumption, while on the other hand a low head temperature has a good impact in reducing the thermal stress of the top portion, reducing the chance of knocking and pre-ignition, and increasing the volumetric efficiency.
The experimental study was carried out on a three-cylinder, four-stroke, petrol carburetor, water-cooled Maruti 800 engine connected to an eddy-current dynamometer for loading. The objective of this project is to examine the engine performance parameters, specific fuel consumption (SFC) and brake power (BP), with engine temperature varying at 50, 60, 70, 80 and 90 °C, at engine speeds of 1800,
2100 and 2400 rpm, with respect to engine loads of 6, 9, 12 and 15 kg.
The results are shown by various graphs i.e. between engine temperature and specific fuel consumption, engine temperature and brake
power, engine speed and specific fuel consumption, engine speed and brake power, engine load and specific fuel consumption, engine
load and brake power.
There are two types of engine cooling systems used for heat transfer from the engine block and head: liquid cooling and air cooling. With a liquid coolant, the heat is removed through internal cooling channels within the engine block. Liquid systems are much quieter than air systems, since the cooling channels absorb the sounds from the combustion process. However, liquid systems are subject to freezing, corrosion and leakage problems that do not exist in air systems.
The performance of the engine-cooling system has steadily improved as the power output and power density of internal combustion engines have gradually increased. With greater emphasis placed on improving fuel economy and lowering emissions from modern IC engines, engine downsizing and raising power density have been the favoured options. Through this route, modern engines can attain power outputs similar to larger conventional engines with reduced frictional losses.


EXPERIMENTAL DETAILS
The experiment was conducted on a three-cylinder, four-stroke, petrol carburetor Maruti 800 engine connected to an eddy-current dynamometer for loading. The performance results reported include brake power (B.P.) and specific fuel consumption (SFC) as a function of engine temperature, i.e. 50, 60, 70, 80 and 90 °C. The tests were conducted to study the effect of engine temperature on SFC and B.P. at varying engine speeds of 1500, 1800, 2100 and 2400 rpm, with loads of 6, 9, 12 and 15 kg.
Engine temperature was controlled by controlling the cooling water flow rate, which was measured manually with a rotameter. The values of the engine performance parameters were obtained directly from the "Engine Soft" software.
A test matrix was created to record the engine performance parameters, but the main focus was on the specific fuel consumption and brake power of the engine at engine speeds of 1500, 1800, 2100 and 2400 rpm, engine loads of 6, 9, 12 and 15 kg, and engine temperatures of 50, 60, 70, 80 and 90 °C.
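For orientation, the reduction from bench readings to B.P. and SFC can be sketched in a few lines of Python. This is only a minimal sketch, not the authors' "Engine Soft" tool: the dynamometer arm length and the fuel-flow figure are assumed illustrative values, not data from the paper.

```python
import math

ARM_LENGTH_M = 0.185  # assumed dynamometer arm length (m), not from the paper
G = 9.81              # acceleration due to gravity (m/s^2)

def brake_power_kw(load_kg: float, speed_rpm: float) -> float:
    """BP = 2*pi*N*T/60 with torque T = W*g*r (N*m), reported in kW."""
    torque_nm = load_kg * G * ARM_LENGTH_M
    return 2 * math.pi * speed_rpm * torque_nm / 60 / 1000

def sfc_g_per_kwh(fuel_flow_kg_h: float, bp_kw: float) -> float:
    """Specific fuel consumption in g/kWh."""
    return fuel_flow_kg_h * 1000 / bp_kw

# Example test-matrix point: 12 kg load at 2400 rpm with an assumed
# fuel flow of 3.4 kg/h.
bp = brake_power_kw(12, 2400)
print(f"BP = {bp:.2f} kW, SFC = {sfc_g_per_kwh(3.4, bp):.0f} g/kWh")
```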

RESULTS & DISCUSSIONS
An experiment was conducted on a three-cylinder, four-stroke, petrol carburetor Maruti 800 engine connected to an eddy-current dynamometer for loading. The performance results reported include brake power (B.P.) and specific fuel consumption (SFC) as a function of engine temperature, i.e. 50, 60, 70, 80 and 90 °C.
The following graphs were obtained for the various engine performance parameters:
i. The effect of engine temperature on specific fuel consumption with varying engine speed.
ii. The effect of engine temperature on brake power with varying engine speed.
iii. The effect of engine speed on specific fuel consumption with varying engine temperature.
iv. The effect of engine speed on brake power with varying engine temperature.
v. The effect of engine load on specific fuel consumption with varying engine temperature.
vi. The effect of engine load on brake power with varying engine temperature.

FIGURES





Fig. 1 Effect of engine temperature on specific fuel consumption with varying engine speed and at 6 Kg Engine load







Fig 2 Effect of engine temperature on specific fuel consumption with varying engine speed and at 9 Kg Engine load






Fig 3 Effect of engine temperature on specific fuel consumption with varying engine speed and at 12 Kg Engine load






Fig 4 Effect of engine temperature on specific fuel consumption with varying engine speed and at 15 Kg Engine load






Fig 5 Effect of engine temperature on brake power with varying engine speed and at 6 Kg Engine Load







Fig 6 Effect of engine temperature on brake power with varying engine speed and at 9 Kg Engine Load






Fig 7 Effect of engine temperature on brake power with varying engine speed and at 12 Kg Engine Load






Fig 8 Effect of engine temperature on brake power with varying engine speed and at 15 Kg Engine Load






Fig 9 The Effect of engine speed on specific fuel consumption with varying engine temperature at 6 kg Engine Load








Fig 10 The Effect of engine speed on specific fuel consumption with varying engine temperature at 9 kg Engine Load






Fig 11 The Effect of engine speed on specific fuel consumption with varying engine temperature at 12 kg Engine Load






Fig 12 The Effect of engine speed on specific fuel consumption with varying engine temperature at 15 kg Engine Load






Fig 13 The Effect of engine speed on brake power with varying engine temperature at 6 kg Engine Load







Fig 14 The Effect of engine speed on brake power with varying engine temperature at 9 kg Engine Load







Fig 15 The Effect of engine speed on brake power with varying engine temperature at 12 kg Engine Load




Fig 16 The Effect of engine speed on brake power with varying engine temperature at 15 kg Engine Load






Fig 17 The Effect of engine load on specific fuel consumption with varying engine temperature and at 1500 rpm Engine speed







Fig 18 The Effect of engine load on specific fuel consumption with varying engine temperature and at 1800 rpm Engine speed





Fig 19 The Effect of engine load on specific fuel consumption with varying engine temperature and at 2100 rpm Engine speed






Fig 20 The Effect of engine load on specific fuel consumption with varying engine temperature and at 2400 rpm Engine speed






Fig 21 The Effect of engine load on brake power with varying engine temperature and at 1500 rpm Engine speed






Fig 22 The Effect of engine load on brake power with varying engine temperature and at 1800 rpm Engine speed









Fig 23 The Effect of engine load on brake power with varying engine temperature and at 2100 rpm Engine speed






Fig 24 The Effect of engine load on brake power with varying engine temperature and at 2400 rpm Engine speed
CONCLUSION
It is concluded that increasing the engine temperature brings some fall in specific fuel consumption while brake power remains unaffected, whereas increasing the engine speed brings some decrease in specific fuel consumption and an increase in brake power. Performance is also affected by the load applied to the engine. The best result obtained in this study was a specific fuel consumption of 522 g/kWh and a brake power of 6.51 kW at 70 °C engine temperature and 2400 rpm engine speed with a 12 kg engine load.
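As a quick arithmetic cross-check of this optimum point, the reported SFC and brake power together imply an absolute fuel flow of about 3.4 kg/h:

```python
# Fuel flow implied by the reported optimum: SFC * BP, converted g/h -> kg/h.
sfc_g_per_kwh = 522   # g/kWh, reported optimum
bp_kw = 6.51          # kW, reported brake power
fuel_flow_kg_h = sfc_g_per_kwh * bp_kw / 1000
print(f"Implied fuel flow: {fuel_flow_kg_h:.2f} kg/h")  # ~3.40 kg/h
```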
REFERENCES:
[1] Robinson, K., Campbell, N., Hawley, J. and Tilley, D., "A Review of Precision Engine Cooling," SAE paper 1999-01-0578, 1999.
[2] Borman, G. and Nishiwaki, K., "Internal Combustion Engine Heat Transfer," Prog. Energy Combustion Sci., 13, pp. 1-46, 1987.
[3] Li, "Piston Thermal Deformation and Friction Considerations," SAE paper 820086, 1982.
[4] Bruckner, M., Gruenbacher, E., Aberer, D., Re, L.D. and Tschreiter, F., "Predictive Thermal Management of Combustion Engines," pp. 2778-2783, Oct. 2006.
[5] Shayler, P., Christian, S. and Ma, T., "A Model for the Investigation of Temperature, Heat Flow and Friction Characteristics During Engine Warm-up," SAE paper 931153, 1993.
[6] Bradley, D., Kalghatgi, G.T., Golombok, M. and Yeo, J., "Heat Release Rates Due to Autoignition, and Their Relationship to Knock Intensity in Spark Ignition Engines," Twenty-Sixth Symposium (International) on Combustion / The Combustion Institute, pp. 2653-2660, 1996.
[7] Kirloskar, C.S., Chandrasekhar, S.B. and Narayan Rao, N.N., "The AV-1 Series 3 Differentially Cooled Semi-Adiabatic Diesel Engine Below 10 kW," SAE paper no. 790644, 1979.
[8] Kobayashi, H., Yoshimura, K. and Hirayama, T., "A Study on Dual Circuit Cooling for Higher Compression Ratio," IMechE 427/84, SAE paper no. 841294, 1984.
[9] Willumeit, H.P., Steinberg, P., Scheibner, B. and Lee, W., "New Temperature Control Criteria for More Efficient Gasoline Engines," SAE paper no. 841292, 1984.

[10] Finlay, I.C., Tugwell, W., Biddulph, T.W. and Marshall, R.A., "The Influence of Coolant Temperature on the Performance of a Four-Cylinder 1100 cc Engine Employing Dual Circuit Cooling," SAE paper no. 880263.
[11] Kubozuka, T., Ogawa, N., Hirano, Y. and Hayashi, Y., "The Development of an Engine Evaporative Cooling System," SAE paper no. 870033, 1987.
[12] Guillemot, P., Gatellier, B. and Rouveirolles, P., "The Influence of Coolant Temperature on Unburned Hydrocarbon Emissions from Spark Ignition Engines," SAE paper no. 941962, 1994.

























Experimental Determination of the Rotor Speed of a Vertical Shaft
Centrifugal Nut Cracking Machine

P. O. Ajewole
Department of Agricultural and Bio-Environmental Engineering, The Federal Polytechnic, P.M.B. 5351, Ado-Ekiti, Nigeria
Email: peterajewole@gmail.com, Mobile: +2347068675164


ABSTRACT: The impinging velocity that gives the maximum cracking efficiency of a vertical shaft centrifugal palm nut cracking machine was determined in this study. A nut cracking energy instrument, which consists of a hammering mass falling vertically onto palm nuts placed on a base, was used to determine the potential energy required to crack the nuts. This energy was equated to the kinetic energy required to crack the nuts in a centrifugal palm nut cracking machine, from which the average impinging velocity was determined. An experiment was carried out to generate cracking energy data for both the Dura and Tenera varieties of palm nuts available in the study area. For the Dura type, the highest percentage of fully cracked (FC) nuts and an average impinging velocity of 32.50 m/s were obtained when the height of the hammering mass was set at 0.15 m, 0.25 m and 0.30 m for nuts sorted into diameter ranges of d < 15 mm, 15 mm ≤ d ≤ 18 mm, and 18 mm ≤ d ≤ 22 mm respectively. For the Tenera type, the highest percentage of fully cracked (FC) nuts and an average impinging velocity of 39.56 m/s were obtained when the height of the hammering mass was set at 0.09 m, 0.10 m and 0.13 m for nuts sorted into diameter ranges of 9 mm ≤ d ≤ 13 mm, 13 mm ≤ d ≤ 15 mm, and 15 mm ≤ d ≤ 20 mm respectively. An overall average impinging velocity of 36 m/s was obtained for both varieties of palm nut, and this was used in the design and construction of a centrifugal palm nut cracker. The results of testing the cracker showed that it has a cracking efficiency of 98.75% and a kernel extraction efficiency of 63.4%.

KEYWORDS: Impinging velocity, palm nut, cracking, Dura, Tenera, oil palm, efficiency, cracker

INTRODUCTION
Palm kernel is an important part of oil palm produce, obtained by cracking palm nuts and separating the kernels from the shell [1]. Due to the global demand for palm kernel and its by-products, effort has been geared towards improved methods of palm kernel extraction. In Nigeria, the national milling capacity for palm kernel stood at about 23% of potential fresh fruit production in 1991 [3].

There are three varieties of oil palm as reported by Jimoh [6], namely dura, tenera and pisifera. Dura is characterized by a thin mesocarp and a thick endocarp (shell), with a generally large kernel. The dura type is genetically homozygous and dominant for shell; it is denoted by DD. Tenera possesses a thin mesocarp and a thin endocarp with a large kernel. This is a dual-purpose palm for the production of mesocarp oil and kernel. It is genetically heterozygous, denoted by Dd, and it is also the highest oil-yielding variety for both mesocarp and palm kernel oils, as reported by Opeke [11]. Pisifera possesses a thick mesocarp with very little oil content, no endocarp (shell-less) and a small kernel; the female flowers are often sterile, which results in bunch failure. It is genetically homozygous, recessive for shell, and is denoted by dd [7]. Badmus [2] stated that the typical African dura is about 8-20 mm in length and has a fairly uniform shell thickness of about 2 mm. The tenera is about 7-15 mm in length with a shell thickness of 1.2 mm. Since pisifera was not readily available in the country for commercial purposes, it was replaced with the local palm nut that is common in Africa, which is about 15-40 mm in length with a shell thickness of 2.2 mm; it is characterized by a very hard, brittle shell with a small, heavy kernel. The kernel is an edible endosperm covered by a reddish-brown to black testa. The kernel fits tightly into the shell and varies in shape and size depending on the shape and size of the nut [8].

The palm kernel processing industry is very popular in third-world countries because of the dependency of many companies on palm kernel and palm oil as raw materials [5, 13]. Modern crackers are of two types: the hammer-impact and the centrifugal-impact types. The hammer-impact type, as reported by Koya [9], breaks or cracks the nut by impact when the hammer falls on the nut, while the centrifugal-impact nut cracker uses centrifugal action to break the nut. The centrifugal nut cracker comes in two models based on shaft orientation: the vertical and the horizontal shaft cracker. The horizontal shaft centrifugal nut cracker has been used in large, medium and small scale palm kernel recovery plants imported into Nigeria. The nut is fed into the hopper and falls into the housing, where a plate attached to the rotor is rotating [10].

Ndukwu and Asoegwu [10] identified three major factors affecting the efficiency of centrifugal nut crackers: shaft speed, moisture content and feed rate. With higher cracker speed they obtained higher cracking efficiency but a higher kernel breakage ratio. The authors, in previous research, had observed that the centrifugal cracker can work at high efficiency if operated at an optimum speed. Obiakor and Babatunde [12] reported that centrifugal nut crackers are characterized by significant kernel breakage, but Ofei (2007) stated that centrifugal nut crackers are widely used in Nigeria due to their high productivity. Badmus [2] also reported

that vertical shaft centrifugal nut crackers give a better cracking effect because of the large diameter of the rotor, taking into consideration the aerodynamic properties of the nuts. If the gap between the rotor and the cracker ring is made extra wide (say 100 mm), the nuts will be brought into the optimum aerodynamic position (that is, thick end foremost) and will then strike the cracker drum at the most favourable angle [2].

Badmus [2] further reported that Tenera nuts, having a distinct rounded head and a tapering fibre-covered tail, are able to take up a 'head-foremost' position in the extra distance covered and so are cleanly cracked on hitting the plate. If the fibre-covered tail hits the plate, the nut may not crack. Secondly, in the confined space of a vertical nut cracker, the Tenera nuts may be deflected by rebounding shell fragments. Thirdly, the larger the distance between the cracker rotor and the cracker ring, the more obtuse the angle at which the nut strikes the ring, and there is then less likelihood of the nut glancing off the ring surface. The speed of nut crackers varies from 800 to 2500 revolutions per minute according to the diameter of the rotor [4]. The above phenomenon is explained with the diagram in Figure 1.

[Figure 1: The motion of tenera nuts in the cracking drum, showing the rotor radii r and R, the nut path A-B-D, and the velocity components V_p, V_Q and V_T at exit.]
The nut enters the cracker rotor at A, traverses the spiral route ABD and leaves the rotor at point D. The radial outgoing speed, $V_p$, is given by:

$V_p = \omega\sqrt{R^2 - r^2}$ ... (1)

Assuming that the friction of the nut with the rotor is negligible, the peripheral speed of the rotor, $V_Q$, is given by:

$V_Q = \omega R$ ... (2)

Since $r$ is very small compared to $R$, $V_p$ is approximately equal to $V_Q$, and the angle between $V_Q$ and $V_T$ is less than 45°.

$V_T^2 = V_p^2 + V_Q^2$ ... (3)

Hence,

$V_T = \sqrt{V_p^2 + V_Q^2}$ ... (4)
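A minimal sketch of equations (1)-(4) in Python; the rotor dimensions and speed in the example are assumed illustrative values, not the machine's design data:

```python
import math

def nut_exit_velocity(omega_rad_s: float, R: float, r: float) -> float:
    """Resultant nut exit speed V_T per equations (1)-(4)."""
    v_p = omega_rad_s * math.sqrt(R**2 - r**2)  # radial outgoing speed, eq. (1)
    v_q = omega_rad_s * R                       # peripheral speed, eq. (2)
    return math.sqrt(v_p**2 + v_q**2)           # resultant speed, eq. (4)

# Example: an assumed rotor of radius R = 0.15 m with feed radius r = 0.02 m
# turning at 2300 rpm (within the 800-2500 rpm range quoted above [4]).
omega = 2300 * 2 * math.pi / 60
print(f"V_T = {nut_exit_velocity(omega, 0.15, 0.02):.1f} m/s")
```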

METHODOLOGY

Nut Cracking Energy Instrument
The average impinging velocity required to crack palm nuts in palm nut crackers was determined experimentally for both Dura and Tenera varieties of palm nuts using the Nut Cracking Energy Instrument, which consists of a 1.34 kg iron block (hammer) that moves along a vertical scale. The hammer was raised to various heights by means of a rope attached to it. The base was made of cast steel of 15 cm thickness. A graduated wooden bar attached to the base was used to measure the height through which the hammer falls to crack the nut placed on the base. The nut absorbs the energy of the falling hammer due to the height (h − d) through which it falls. Thus, the cracking energy is equal to the potential energy (P.E.):

$E = P.E. = Mg(h - d)$ ... (5)

If the nut is cracked by flinging it onto a hard static surface, the wall absorbs the kinetic energy due to the impinging velocity (V) component normal to the surface. Therefore, the cracking energy was equated to the kinetic energy (K.E.) of the palm nut:

$E = K.E. = \frac{1}{2}mV^2$ ... (6)

Assuming that energy losses during cracking are negligible, P.E. = K.E. Therefore,

$\frac{1}{2}mV^2 = Mg(h - d)$ ... (7)

Thus the impinging velocity of the nut is given by:

$V = \left[\frac{2Mg(h - d)}{m}\right]^{1/2}$ ... (8)

where M = mass of the hammer (1.34 kg), g = acceleration due to gravity (9.81 m/s²), h = initial height of the hammer, d = relative diameter of the nut, and m = mass of the nut. Substituting the known values, the impinging velocity is then given by:

$V = 5.1275\left[\frac{h - d}{m}\right]^{1/2}$ ... (9)
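Equation (9) is straightforward to evaluate; a small sketch follows (the nut mass in the example is an assumed illustrative value, not a measured one):

```python
import math

M_HAMMER = 1.34  # hammer mass, kg
G = 9.81         # acceleration due to gravity, m/s^2; sqrt(2*M*g) = 5.1275

def impinging_velocity(h_m: float, d_m: float, nut_mass_kg: float) -> float:
    """V = [2*M*g*(h - d)/m]^(1/2), equations (8)-(9)."""
    return math.sqrt(2 * M_HAMMER * G * (h_m - d_m) / nut_mass_kg)

# Example: a 15 mm nut cracked from a 0.15 m drop; the 4 g nut mass is assumed.
print(f"V = {impinging_velocity(0.15, 0.015, 0.004):.1f} m/s")
```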

Experimental Procedure
In order to generate the cracking energy data for palm nuts, a large quantity of fresh palm nuts (Dura and Tenera varieties) were sun-dried until a moisture content of 10% was achieved. Since different sizes of nuts are known to require different speeds and energies to crack, the nominal diameters of the nuts were measured with a vernier caliper and both varieties were grouped into the size ranges shown in Table 1.

Table 1: Grouping of the palm nuts into different sizes

    Group    Dura Variety           Tenera Variety
    1        d < 15 mm              9 mm ≤ d ≤ 13 mm
    2        15 mm ≤ d ≤ 18 mm      13 mm ≤ d ≤ 15 mm
    3        18 mm ≤ d ≤ 22 mm      15 mm ≤ d ≤ 20 mm

The mass of each nut was measured using an electronic weighing balance. Each nut was then placed at the centre of the instrument base plate. The hammer was raised to a height indicated on the scale and released to fall on the nut. The observations made were recorded with the following symbolic representations:

FC - Fully Cracked: the shell is broken and the kernel is released from pieces of the shell.
FCW - Fully Cracked with Wound: the kernel is separated from the pieces of the shell but with wounds on it.
NFC - Not Fully Cracked: this comprises half-cracked and uncracked nuts.
SM - Smashed: the kernel is broken along with the shell.
Ten (10) nuts from each size range were tested at each height of the hammer for both varieties. Three (3) or two (2) different heights were used for the different size ranges; 80 nuts were tested for the Dura variety and 80 nuts for the Tenera variety, making a total of 160 nuts. The percentage of fully cracked nuts was calculated for each size range at the different heights, and the impinging velocity corresponding to each height, for each nut in the different size ranges, was calculated using equation (9). For each size range, the height giving the highest cracking efficiency was selected, and the average impinging velocity of the nuts at these selected heights was found; the overall average impinging velocity was then computed. This procedure was carried out for both the Dura and Tenera palm nut varieties, and the average of the two overall average impinging velocities was then found, as sketched below.
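A minimal sketch of this aggregation step; the trial records here are hypothetical placeholders, not the measured data:

```python
from statistics import mean

# Hypothetical trial records: (size_range, drop_height_m, outcomes, velocities)
trials = [
    ("d < 15 mm", 0.15, ["FC"] * 9 + ["NFC"], [34.8] * 10),
    ("d < 15 mm", 0.17, ["FC"] * 8 + ["NFC"] * 2, [38.6] * 10),
]

# Keep, per size range, the drop height with the highest percentage fully cracked
best = {}
for size, height, outcomes, velocities in trials:
    pct_fc = 100 * outcomes.count("FC") / len(outcomes)
    if size not in best or pct_fc > best[size][0]:
        best[size] = (pct_fc, mean(velocities))

# Overall average impinging velocity over the best-cracking heights
overall = mean(v for _, v in best.values())
print(best, f"overall average = {overall:.2f} m/s")
```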



RESULTS AND DISCUSSION
The results of the experiment and the calculated impinging velocities of the observed nuts are presented in Tables 2 and 3.

Table 2: Average impinging velocities obtained for Dura nut variety

    Size Range (mm)   Height of cracking (m)   Percentage fully cracked (%)   Average Impinging Velocity (m/s)
    d < 15            0.15                     90                             34.76
                      0.17                     80                             38.58
                      0.18                     80                             37.61
    15 ≤ d ≤ 18       0.25                     80                             33.33
                      0.26                     60                             35.34
                      0.28                     70                             34.36
    18 ≤ d ≤ 22       0.30                     80                             29.41
                      0.35                     40                             33.39
    Average impinging velocity where the highest percentage of fully cracked nuts was obtained: 32.50


Table 3: Average impinging velocities obtained for Tenera nut variety

    Size Range (mm)   Height of cracking (m)   Percentage fully cracked (%)   Average Impinging Velocity (m/s)
    9 ≤ d ≤ 13        0.08                     60                             50.53
                      0.09                     80                             46.49
    13 ≤ d ≤ 15       0.10                     80                             36.13
                      0.11                     60                             43.58
                      0.12                     50                             49.62
    15 ≤ d ≤ 20       0.12                     60                             38.19
                      0.13                     80                             36.07
                      0.14                     50                             39.06
    Average impinging velocity where the highest percentage of fully cracked nuts was obtained: 39.56

As shown in Tables 2 and 3, for the Dura variety the highest percentages (90%, 80% and 80%) were obtained for the size ranges d < 15 mm, 15 mm ≤ d ≤ 18 mm and 18 mm ≤ d ≤ 22 mm respectively. For the Tenera variety the highest percentages (80%, 80% and 80%) were obtained for the size ranges 9 mm ≤ d ≤ 13 mm, 13 mm ≤ d ≤ 15 mm and 15 mm ≤ d ≤ 20 mm respectively. The overall average of the impinging velocities for both varieties was thus calculated and used in the design and fabrication of a nut cracking machine. The fabricated cracker, on testing, gave a cracking efficiency of 98.75% and a kernel extraction efficiency of 63.4%.

CONCLUSION
The average impinging velocity of the rotor of a centrifugal palm nut cracking machine was determined using a nut cracking energy instrument. Dura and Tenera varieties of palm nut were considered. The overall average impinging velocity for both varieties was found to be 36 m/s, which can be used in the design and fabrication of a vertical shaft centrifugal palm nut cracker.
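Taking the nut's required speed as roughly the rotor's peripheral speed (V ≈ ωR, as in equation (2)), this design velocity converts to a rotor speed as follows; the rotor radius below is an assumed illustrative value, not a figure from the paper:

```python
import math

V_DESIGN = 36.0  # m/s, overall average impinging velocity from this study
R_ROTOR = 0.15   # m, assumed rotor radius (not reported in the paper)

# N (rpm) = V / R * 60 / (2*pi)
rpm = V_DESIGN / R_ROTOR * 60 / (2 * math.pi)
print(f"Required rotor speed ~ {rpm:.0f} rpm")  # ~2292 rpm
```

At this assumed radius the result lies within the 800-2500 rpm range quoted for nut crackers [4].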

REFERENCES:
[1] Babatunde, O.O. and Okoli, J.O., "Investigation into the effect of nut size on the speed needed for cracking palm nut in centrifugal nut cracker," Nigerian Journal of Palm and Oil Seeds, 9(1): 84-88, 1988.
[2] Badmus, G.A., "NIFOR automated small scale oil palm processing equipment," in Proc. 1991 PORIM International Palm Oil Conference, 9-14 September 1991.
[3] FAO, "Trade Year Book," Journal for Food and Agricultural Organization, 56(1): 174-175, 2002.
[4] Hartley, C.W.S., The Oil Palm, 3rd Edition, Longman, London and New York, 806 pp., 2000.
[5] Hartley, C.W.S., The Oil Palm, Longman Group Limited, London, p. 67, 1988.
[6] Jimoh, M.O., "Development and performance evaluation of a motorized palm nut cracker," M.Eng. Thesis, Department of Agricultural Engineering, Federal University of Technology, Akure, Nigeria, 2004.
[7] Jimoh, M.O., "Determination of physical and engineering properties of palm nut," in Proc. Half Yearly Conference of the Nigerian Institute of Food Science and Technology, 21st-22nd July 2011.

[8] Jimoh, M.O. and Olukunle, O.J., "Effect of heat treatment during mechanical cracking using varieties of palm nut," Agric Eng Int: CIGR Journal, Vol. 14, No. 3, September 2012. Open access at http://www.cigrjournal.org
[9] Koya, O.A., "Palm nut cracking under repeated impact load," Journal of Applied Sciences, 6(11): 2471-2475, 2006.
[10] Ndukwu, M.C. and Asoegwu, S.N., "Functional performance of a vertical-shaft centrifugal palm nut cracker," Res. Agr. Eng., Vol. 56, No. 2: 77-83, 2010.
[11] Opeke, K.O., Tree Crops, Lagos: Johnson Publishers Limited, pp. 27-61, 1997.
[12] Obiakor, S.I. and Babatunde, O.O., "Development and testing of the NCAM centrifugal palm nut cracker," AGRIMECH Res. Inform. Bull. (NCAM), Ilorin, Nigeria, 1999.
[13] Oke, P.K., "Development and performance evaluation of indigenous palm kernel dual processing machine," Journal of Engineering and Applied Sciences, 2(4): 701-705, Medwell Journals, 2007.
























Impact of Information Technology on Higher Institutions of Learning: A Case
Study of Njala University

Gegbe B., Gbenjeh M.M.
Email: bgegbe@njala.edu

ABSTRACT: Njala University, as a higher institution of learning, remains at the forefront of academic challenges in the educational process in Sierra Leone, and for the past five years information technology has been offered in various disciplines. One needs to know the impact of such programmes on students: what is the likely impact of information technology on Njala University? The targeted population for the study comprised students of Njala University. Priority was placed on students in the School of Technology and schools that offer courses or programmes dealing with information technology. Out of the targeted population, one hundred and fifty students were randomly selected and interviewed from the Njala and Bo campuses. Forty-five percent (45%) of the students agreed that they normally bring a laptop to class while fifty-five percent (55%) did not. Eleven percent (11%) of the students were unskilled in Microsoft Word, fifty-nine percent (59%) were skilled and thirty percent (30%) were very skilled. Sixty-four percent (64%) were skilled in Microsoft Excel and thirty-six percent (36%) were very skilled. Fifty-six percent (56%) of the students were skilled in the use of firewalls and antivirus software while forty-eight percent (48%) were very skilled. Nearly fifty percent (50%) of the students at Njala University spent 1 to 2 hours each week on classroom activities and studying using an electronic device; seventeen percent (17%) spent 3 to 5 hours; one percent (1%) 11 to 15 hours; and three percent (3%) 16 to 20 hours. On using library resources to complete a course assignment, thirty-one percent (31%) spent 1 to 2 hours each week; nineteen percent (19%) 3 to 5 hours; thirty-five percent (35%) 6 to 10 hours; four percent (4%) 11 to 15 hours; and four percent (4%) 16 to 20 hours.
Keywords: Impact, Information Technology, Institution, Learning
ACKNOWLEDGMENT
I owe a debt of gratitude to God Almighty through Jesus for giving me knowledge, wisdom and understanding throughout my academic pursuit.
My sincere thanks go to Miss Marian Johnson, who worked assiduously as a typist to see this work through to completion. I am particularly grateful to my wife for her architectural role in my academic activities. Thanks and appreciation go to my mother and late father, who nurtured me to the level I am at today.
INTRODUCTION
BACKGROUND
The combination of education and technology has been considered the main key to modern human progress. Education feeds technology, which in turn forms the basis of education. It is therefore evident that information technology has brought changes to the methods, purposes and perceived potential of education.
Being able to access large databases of information fundamentally changes education, since learners can now be creators and
collaborators in the access and construction of discourses of information. Due to their technological literacy, young people can derive
cultural capital from their understanding of modern information technology, and thereby have input into educational changes. The
same technology also facilitates the rapid exchanges of information by researchers on specific topics, so that the speed of the
distribution of information is greatly increased. The increased access to huge amounts of data means students need help selecting,
evaluating and analyzing information, and they need to learn how to determine the currency, validity and veracity of the information
itself. All of these changes in learning have implications for teaching practice as well.

Sierra Leone, like other developing sub-Saharan countries, needs Information Technology (IT) as a prerequisite for sustained infrastructure development. For example, India and China have embraced and used information technology to their advantage, and their strategic roles in science have put them on the G-20 list of emerging market countries. Sierra Leone must take advantage of development opportunities in IT initiatives such as those offered by the World Bank or the U.S. Agency for International Development (dot.com Alliance), etc.
An evaluation of Sierra Leone's development goals, including but not limited to the Poverty Reduction Strategy Paper (PRSP), the National Commission for Privatization Act 2002 (NCP), and the Truth and Reconciliation Commission (TRC) recommendations, should ultimately help determine the requirements for establishing Information Technology standards. These standards must then be evaluated to see whether they meet international standards.
The highest level of change occurring in relation to information technology and education is in the way teaching is increasingly seen as occurring via the medium of technology, rather than utilizing technology as an additional extra in the classroom. Information technology particularly impacts course content and teaching methodology, as well as the recruitment and training of teaching staff. Information technology requires teachers to learn new sets of skills. Utilizing computer technology improves the educational experience of students, not so much because of the media itself, but because software programs require teachers to think laterally and systematically, and to produce better teaching materials.
SIGNIFICANCE OF THE STUDY
While education in the past was centred on teaching and learning, information technology has changed the aims of education. Education is now increasingly perceived as the process of creating, perceiving, integrating, transmitting and applying knowledge. Perceptions of knowledge itself have also changed: whereas knowledge could once have been perceived as unchanging, it is now perceived as revisionary, creative, personal and pluralistic. The future of education is not predetermined by modern information technology; rather, this future will hinge prominently on how we construct (and construe) the place of technology in the education process. We are moving from 'just-in-case' education to 'just-for-you' education, where education is targeted to meet the needs of individual students.
Information technology frees educational institutions from the constraints of space and time, and enables the delivery of education services anywhere, anytime. We can therefore foresee a future where physical libraries are replaced by digital libraries available to anyone, and where scholars cease to be located around a geographical focus and instead become increasingly located around a specialization, while being physically located anywhere in the world. We could also imagine a day when modern technology will enable students in a given location to access the best teachers in a given field and to interact with them, whether live or via video.
While various authors have differed in their opinions of the degree, desirability and destiny of these changes, they all agree that change processes have certainly been underway. However, the process of change is far from over.
STATEMENT OF THE PROBLEM
Njala University as an institution remains at the forefront of academic challenges in the educational process in Sierra Leone, and for the past five years information technology has been offered in various disciplines. One needs to know the impact of such programmes on students. So what is the likely impact of information technology on Njala University? What is the strategic importance of information technology to the University? The mission of colleges and universities as creators and consumers of valuable

knowledge and information can no doubt be greatly improved if information technology is strategically and proactively embraced in support of the institution's mission. If we are reactive, information technology may have the opposite, disruptive effect.
According to Vision 2025 in Sierra Leone, there is a need for the restoration and promotion of positive aspects of our national culture, and for the development of a science and technology base that keeps pace with the advances taking place in the rest of the world, with Njala University being no exception.
RESEARCH QUESTIONS
The following questions address the gravity of the problem:
- What types of electronic devices are used by students?
- What technology is used in structured courses or programmes?
- How can students' experience with Information Technology be better understood?
AIM: The aim of this research is to uncover the impact of students' Information Technology use in higher education.
OBJECTIVES: The specific objectives of this research are to:
- Identify the use of electronic devices by students at Njala University
- Evaluate the use of technology in the teaching of courses at various levels at Njala University
- Better understand students' experience with Information Technology at Njala University
LIMITATIONS OF THE RESEARCH
A sample is used rather than the entire population. The research is also limited to Njala University, as an academic institution, because of constraints of time and money.
RESEARCH METHODOLOGY
RESEARCH DESIGN
This section deals with the strategies and procedures used in the collection of data. The study is a descriptive and analytical case study which seeks to investigate the impact of students' information technology use and skills in higher education, with Njala University as the case study. The research also has an ex post facto design, in the sense that the researcher does not have direct control over the independent variables because their manifestations have already occurred or because they are inherently not manipulable. It is a descriptive survey in that questionnaires were prepared and administered to students of Njala University.
RESEARCH POPULATION
The target population for the study comprises students of Njala University. Priority was placed on students in the School of Technology and schools that offer courses or programmes dealing with information technology.
The following campuses were considered:
Njala University - Njala campus
Njala University - Bo campus (Towama)

SAMPLING PROCEDURE AND SAMPLE
Out of the targeted population, one hundred and fifty students were randomly selected from the Njala and Bo campuses using a fully structured questionnaire.
DATA COLLECTED AND PROCEDURES
The information for this research was based on the selected objective: to determine the impact of students' information technology use and skills in higher education. It was collected from the questionnaires administered. The questionnaire was fully structured, with instructions to ensure proper filling.
DATA ANALYSES AND RESULTS
Statistical tools such as SPSS were used, and analysis of variance (ANOVA) was used for some of the tests.
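For illustration only, a comparable one-way ANOVA can be run in Python with scipy; the study itself used SPSS, and the three groups of weekly study hours below are invented example data:

```python
from scipy import stats

# Invented example data: weekly study hours for three student groups
group_a = [2, 3, 2, 4, 3, 5]
group_b = [1, 2, 2, 3, 1, 2]
group_c = [4, 5, 3, 4, 6, 5]

# One-way ANOVA: tests whether the group means differ significantly
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```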
PRESENTATION AND ANALYSIS OF DATA
GENDER DISTRIBUTION OF RESPONDENTS

Figure 1.1: Bar Chart of Gender Distribution of Respondents

QUESTIONNAIRE ADMINISTERED
One hundred and fifty (150) questionnaires were administered to students of Njala University: eighty (80) for the Njala campus and seventy (70) for the Bo campus, but only one hundred and two (102) questionnaires were retrieved from both campuses.
GENDER DISTRIBUTION OF RESPONDENTS
MALE: 61 males responded, which gives 70% as the valid percentage of students who responded to the questionnaire, as clearly indicated in Fig 1.1 above.

FEMALE: 26 females responded, which gives 30% as the valid percentage of female students who responded to the questionnaire. 15 students did not indicate their sex, representing 15% of the returned questionnaires, which could not be used for the gender analysis.
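The "valid percentage" arithmetic simply excludes the non-responses from the denominator; a minimal sketch:

```python
# 102 questionnaires were returned; 15 respondents did not indicate their sex.
responses = {"male": 61, "female": 26, "missing": 15}

valid = responses["male"] + responses["female"]  # 87 usable answers
print(f"male:   {100 * responses['male'] / valid:.0f}% of valid")        # ~70%
print(f"female: {100 * responses['female'] / valid:.0f}% of valid")      # ~30%
print(f"missing: {100 * responses['missing'] / 102:.0f}% of returned")   # ~15%
```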
AGE DISTRIBUTION OF RESPONDENTS

Figure 1. 2 Bar Chart of Age of Respondents

AGE OF RESPONDENTS
18 years and below: 15% of those who responded to the questionnaire were aged 18 or below.
Between 21 and 29 years: 79% of the respondents fell within the 21-29 age range.
30 years and above: 6% of those who responded to the questionnaire were 30 years of age or above.








DISTRIBUTION OF RESPONDENTS WHO USUALLY BRING LAPTOPS TO CLASSES
Figure 1.3: Bar Chart of Respondents Who Normally Bring Laptop to Class


RESPONDENTS WHO USUALLY BRING LAPTOPS TO CLASS
45% of the students stated that they usually bring laptops to class, while 55% did not.
Figure 1. 3: Bar Chart of Cumulative Grade Points of Respondents



GRADE POINTS: 49% of the respondents had cumulative grade points of 3.0-3.24, while 37% had 3.25-3.49, 29% had 3.50-3.74 and 22% had 3.75-4.0.
SENIOR OR FRESHER STUDENTS RESPONDENT

Figure 1.4:Bar Chart of Senior or Fresher Students

SENIOR OR FRESHER STUDENTS
7% of the respondents were senior students while 3.3% were freshers.

RESPONDENTS WHO RESIDE EITHER ON CAMPUS OR OFF CAMPUS
Figure 1.5: Bar Charts of Respondents who reside either on Campus or off Campus



STUDENTS WHO RESIDED ON CAMPUS OR OFF CAMPUS
According to the bar chart above, 58% of the respondents resided on campus while 42% resided off campus.

Figure 1.6: Bar Chart for Knowledge of Computer Technology and Standard Software Application - MS-Word


MICROSOFT WORD
From the above table and chart, 11% of the students were unskilled in the use of Microsoft Word, 59% were skilled and 30% were very skilled, respectively.

Figure 1.7: Bar chart for Knowledge of Computer Technology and Standard Application Software in Excel


MICROSOFT EXCEL: 64% were skilled in Microsoft Excel and 36% were very skilled respectively.

Figure 1.8: Bar Chart for Computer Technology and Standard Application Software in MS-Power Point

MICROSOFT POWERPOINT: 14% were unskilled in PowerPoint, 27% were skilled, while 59% were very skilled in MS-PowerPoint.

Figure 1.9: Bar Chart for Computer Technology and Standard Application Software - Photoshop & Flash


PHOTOSHOP & FLASH: 13% of the students were unskilled in Photoshop and Flash while 69% were skilled and 19% were very
skilled respectively.

Figure1.10: Bar Chart for Computer Technology and Standard Application Software -Creating Editing Video


CREATING AND EDITING VIDEO: 73% of the students were skilled while 27% were very skilled in creating and editing Video.




Figure1.11 Bar Chart for Computer Technology and Standard Application Software - Creating Web Page

CREATING WEB PAGE: 38% of the respondents were skilled in creating Web Page while 63% were very skilled in creating Web
Page.
Figure1.13: BAR CHART FOR COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE - COURSE MANAGEMENT

COURSE MANAGEMENT SYSTEMS: 31% of the students were unskilled in the use of course management systems, while 46% were skilled and 23% were very skilled, respectively.





Figure1.12 Bar Chart for Computer Technology and Standard Application Software - Online Library Resource


ONLINE LIBRARY RESOURCES: 15% of the students were unskilled in the use of online library resources while 85% were skilled.


Figure1.13: BAR CHART FOR COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE -COMPUTER OPERATING
SYSTEM

COMPUTER OPERATING SYSTEM: 69% were skilled in the use of Computer Operating System and 31% were very skilled in
the use of Computer Operating System.


Figure1.14: BAR CHART FOR COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE - COMPUTER MAINTENANCE

COMPUTER MAINTENANCE: 57% were skilled in computer maintenance and 43% were very skilled, respectively.
Figure 1.15: BAR CHART FOR COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE - FIREWALLS & ANTIVIRUS SOFTWARE

FIREWALLS & ANTIVIRUS SOFTWARE: 56% of the students were skilled in the use of firewalls and antivirus software, while 48% were very skilled.


HOURS SPENT ON ACTIVITIES EACH WEEK USING ELECTRONIC DEVICES
Figure 1.16: BAR CHART OF HOURS SPENT EACH WEEK ON CLASSROOM ACTIVITY AND STUDYING USING AN ELECTRONIC DEVICE


CLASSROOM ACTIVITY AND STUDYING USING ELECTRONIC DEVICES
From the above chart, 50% of the students at Njala University spent 1 to 2 hours each week on classroom activities and studying using an electronic device; 17% spent 3 to 5 hours; 1% 11 to 15 hours; and 3% 16 to 20 hours, respectively.


Figure 1.19: BAR CHART OF HOURS SPENT EACH WEEK USING LIBRARY RESOURCES TO COMPLETE ASSIGNMENTS



USING LIBRARY RESOURCES TO COMPLETE A COURSE ASSIGNMENT
On using library resources to complete a course assignment, 31% spent 1 to 2 hours each week; 19% 3 to 5 hours; 35% 6 to 10 hours; 4% 11 to 15 hours; and 4% 16 to 20 hours, respectively.
Figure 1.20: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY SURFING THE INTERNET FOR INFORMATION

SURFING THE INTERNET FOR INFORMATION
5% do not use the internet each week for information; 20% spent less than an hour; 7% 1 to 2 hours; 15% 11 to 15 hours; 7% 16 to 20 hours; and 7% more than 20 hours, respectively.
Figure 1.21: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY WRITING DOCUMENTS FOR COURSE WORK


WRITING DOCUMENTS FOR COURSE WORK
78% of the respondents spent 1 to 2 hours per week writing documents for course work, while 17% spent 3 to 5 hours and 6% 11 to 15 hours, respectively.
Figure 1.22: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY CREATING, READING AND SENDING E-MAIL

CREATING, READING AND SENDING E-MAIL
26% of the students spent less than one hour creating, reading and sending e-mails in a week; 9% spent 1 to 2 hours; 13% 3 to 5 hours; 9% 6 to 10 hours; 17% 11 to 15 hours; 13% 16 to 20 hours; and 13% more than 20 hours in a week.


Figure 1.23: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY CREATING, READING AND SENDING INSTANT MESSAGES

CREATING, READING AND SENDING INSTANT MESSAGES
20% spent less than one hour creating, reading and sending instant messages in a week; 33% 1 to 2 hours; 20% 3 to 5 hours; 7% 11 to 15 hours; and 20% more than 20 hours in a week.

Figure 1.24: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY WRITING DOCUMENT FOR PLEASURE



WRITING DOCUMENTS FOR PLEASURE
25% of the students spent less than an hour per week writing documents for pleasure, while 42% spent 1 to 2 hours, 25% 3 to 5 hours and 8% 11 to 15 hours, respectively.

Figure1.25: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY PLAYING COMPUTER GAME


PLAYING COMPUTER GAMES
30% of the students spent less than an hour per week playing computer games; 20% spent 6 to 10 hours and 10% 11 to 15 hours a week.
Figure 1.26: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY DOWNLOADING AND LISTENING TO MUSIC OR DVDs



DOWNLOADING AND LISTENING TO MUSIC OR DVDs
75% of the students spent less than one hour downloading and listening to music or DVDs, and 25% spent 11 to 15 hours a week.
Figure 1.27: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY SURFING THE INTERNET FOR PLEASURE

SURFING THE INTERNET FOR PLEASURE
18% spent less than an hour per week surfing the internet for pleasure; 30% 1 to 2 hours; 29% 3 to 5 hours; 6% 11 to 15 hours; and 18% more than 20 hours in a week.

Figure 1.28: BAR CHART OF HOURS SPENT EACH WEEK ON ACTIVITY BY ONLINE SHOPPING

ONLINE SHOPPING
41% spent less than an hour on online shopping in a week; 43% 3 to 5 hours; and 14% 11 to 15 hours, respectively.

TREND ANALYSIS OF SKILL LEVEL USING COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE

Figure 1.29: TREND ANALYSIS OF SKILL LEVEL USING COMPUTER TECHNOLOGY AND STANDARD APPLICATION SOFTWARE

[Chart: percentage (0-90%) of students at the unskilled, skilled and very skilled levels across the surveyed applications]

TREND ANALYSIS
59% of the students were skilled in using computer technology and its applications, while 37% were very skilled and 15% remained unskilled.
SUMMARY, CONCLUSION AND RECOMMENDATION
SUMMARY OF FINDINGS
Nearly forty-five percent (45%) of the students agreed that they normally bring a laptop to class while fifty-five percent (55%) did not; forty-five percent (45%) of the students had cumulative grade points of 3.0-3.24, while thirty-seven percent (37%) had 3.25-3.49, twenty-nine percent (29%) had 3.50-3.74 and twenty-two percent (22%) had 3.75-4.0.

Eleven percent (11%) of the students were unskilled in Microsoft Word, fifty-nine percent (59%) were skilled and thirty percent (30%) were very skilled, respectively. Sixty-four percent (64%) were skilled in Microsoft Excel and thirty-six percent (36%) were very skilled, respectively.

Fourteen percent (14%) were unskilled in MS-PowerPoint, twenty-seven percent (27%) were skilled, while fifty-nine percent (59%) were very skilled. Thirteen percent (13%) of the students were unskilled in Photoshop and Flash, while sixty-nine percent (69%) were skilled and nineteen percent (19%) were very skilled, respectively. Seventy-three percent (73%) of the students were skilled and twenty-seven percent (27%) very skilled in creating and editing video. Thirty-eight percent (38%) of the respondents were skilled in creating web pages while sixty-three percent (63%) were very skilled. Thirty-one percent (31%) of the students were unskilled in the use of course management systems while forty-six percent (46%) were skilled and twenty-three percent (23%) very skilled, respectively. Sixteen percent (16%) of the students were unskilled in the use of online library resources while eighty-five percent (85%) were skilled. Sixty-nine percent (69%) were skilled and thirty-one percent (31%) very skilled in the use of computer operating systems. Fifty-seven percent (57%) were skilled in computer maintenance and forty-three percent (43%) very skilled, respectively. Fifty-six percent (56%) of the students were skilled in the use of firewalls and antivirus software while forty-eight percent (48%) were very skilled.
Nearly fifty percent (50%) of the students at Njala University spent 1 to 2 hours each week on classroom activities and studying using an electronic device; seventeen percent (17%) 3 to 5 hours; one percent (1%) 11 to 15 hours; and three percent (3%) 16 to 20 hours, respectively. On using library resources to complete a course assignment, thirty-one percent (31%) spent 1 to 2 hours each week; nineteen percent (19%) 3 to 5 hours; thirty-five percent (35%) 6 to 10 hours; four percent (4%) 11 to 15 hours; and four percent (4%) 16 to 20 hours, respectively.

Four percent (4%) did not use the internet each week for information; twenty percent (20%) spent less than an hour; seven percent (7%) 1 to 2 hours; fifteen percent (15%) 11 to 15 hours; seven percent (7%) 16 to 20 hours; and seven percent (7%) more than 20 hours, respectively. Seventy-eight percent (78%) of the students at Njala University spent 1 to 2 hours per week writing documents for course work, while seventeen percent (17%) spent 3 to 5 hours and six percent (6%) 11 to 15 hours, respectively. Twenty-six percent (26%) of the students spent less than one hour creating, reading and sending e-mails in a week; nine percent (9%) spent 1 to 2 hours; thirteen

percent (13%) 3 to 5 hours; nine percent (9%) 6 to 10 hours; seventeen percent (17%) 11 to 15 hours; thirteen percent (13%) 16 to 20 hours; and thirteen percent (13%) more than 20 hours in a week.

Twenty percent (20%) spent less than one hour creating, reading and sending instant messages in a week; thirty-three percent (33%) 1 to 2 hours; twenty percent (20%) 3 to 5 hours; seven percent (7%) 11 to 15 hours; and twenty percent (20%) more than 20 hours in a week. Twenty-five percent (25%) of the students spent less than an hour per week writing documents for pleasure, while forty-two percent (42%) spent 1 to 2 hours, twenty-five percent (25%) 3 to 5 hours and eight percent (8%) 11 to 15 hours, respectively.
Thirty percent (30%) of the students spent less than an hour per week playing computer games; twenty percent (20%) spent 6 to 10 hours and ten percent (10%) 11 to 15 hours a week.
Seventy-five percent (75%) of the students spent less than one hour downloading and listening to music or DVDs, and twenty-five percent (25%) 11 to 15 hours a week.
Nearly eighteen percent (18%) spent less than an hour per week surfing the internet for pleasure; twenty-nine percent (29%) 1 to 2 hours; twenty-nine percent (29%) 3 to 5 hours; six percent (6%) 11 to 15 hours; and eighteen percent (18%) more than 20 hours in a week.

Seventy-eight percent (78%) of the students spent 1 to 2 hours per week writing documents for course work, while seventeen percent (17%) spent 3 to 5 hours and six percent (6%) 11 to 15 hours, respectively. Forty-one percent (41%) spent less than an hour on online shopping in a week; forty-three percent (43%) 3 to 5 hours; and fourteen percent (14%) 11 to 15 hours, respectively.

CONCLUSIONS
Based on the findings the following conclusions are made:
Nearly fifty nine percent (59%) of the students were skilled in using computer technology and its applications

Thirty seven percent (37%) were very skilled in using computer technology and its applications

Fifteen percent (15%) remained to be unskilled in using computer technology and its applications

Fifty percent (50%) of the students at Njala University spent 1 to 2 hours each week on classroom activities and studying using an electronic device; seventeen percent (17%) 3 to 5 hours; one percent (1%) 11 to 15 hours; and three percent (3%) 16 to 20 hours, respectively.

On using library resources to complete a course assignment, thirty-one percent (31%) spent 1 to 2 hours each week; nineteen percent (19%) 3 to 5 hours; thirty-five percent (35%) 6 to 10 hours; four percent (4%) 11 to 15 hours; and four percent (4%) 16 to 20 hours, respectively.

Four percent (4%) did not use the internet each week for information; twenty percent (20%) spent less than an hour; seven percent (7%) 1 to 2 hours; fifteen percent (15%) 11 to 15 hours; seven percent (7%) 16 to 20 hours; and seven percent (7%) more than 20 hours, respectively.

Nearly seventy-eight percent (78%) of the students at Njala University spent 1 to 2 hours per week writing documents for course work, while seventeen percent (17%) spent 3 to 5 hours and six percent (6%) 11 to 15 hours, respectively.

Twenty-six percent (26%) of the students spent less than one hour creating, reading and sending e-mails in a week; nine percent (9%) spent 1 to 2 hours; thirteen percent (13%) 3 to 5 hours; nine percent (9%) 6 to 10 hours; seventeen percent (17%) 11 to 15 hours; thirteen percent (13%) 16 to 20 hours; and thirteen percent (13%) more than 20 hours in a week.

Twenty percent (20%) spent less than one hour creating, reading and sending instant messages in a week; thirty-three percent (33%) 1 to 2 hours; twenty percent (20%) 3 to 5 hours; seven percent (7%) 11 to 15 hours; and twenty percent (20%) more than 20 hours in a week.

Twenty-five percent (25%) of the students spent less than an hour per week writing documents for pleasure, while forty-two percent (42%) spent 1 to 2 hours, twenty-five percent (25%) 3 to 5 hours and eight percent (8%) 11 to 15 hours, respectively.

Thirty percent (30%) of the students spent less than an hour per week playing computer games; twenty percent (20%) spent 6 to 10 hours and ten percent (10%) 11 to 15 hours a week.

Seventy-five percent (75%) of the students spent less than one hour a week downloading and listening to music or DVDs, and twenty-five percent (25%) spent 11 to 15 hours a week.

Eighteen percent (18%) spent less than an hour a week surfing the internet for pleasure; twenty-nine percent (29%) 1 to 2 hours; twenty-nine percent (29%) 3 to 5 hours; six percent (6%) 11 to 15 hours and eighteen percent (18%) more than 20 hours in a week.
Seventy-eight percent (78%) of the students spent 1 to 2 hours per week writing documents for course work, while seventeen percent (17%) spent 3 to 5 hours and six percent (6%) 11 to 15 hours respectively. Forty-one percent (41%) spent less than an hour a week on online shopping; forty-three percent (43%) 3 to 5 hours and fourteen percent (14%) 11 to 15 hours respectively.

This clearly shows that the impact of information technology skills at Njala University was above average and has been progressing over the past years, as evidenced by the amount of time students allocated to using computer applications. Computer technology and its applications affect almost all areas of students' academic activities. Time spent using internet services was relatively low; this may be because students have no free access to internet services, or because the capacity of the service is not enough to accommodate the number of students that have right of access.
RECOMMENDATIONS
Based on the research findings, the following recommendations are made in order for IT to have a more effective impact on students at higher educational institutions:


The school of technology needs to invest in technology so that the systems are always available, usable and secure. This will help to adopt new technologies that can be utilized for greater access to information, improve and automate business processes, and also improve the quality and accessibility of teaching, learning and research.

The university needs to consider technology as a tool that will help students think more strategically, do things more efficiently and wisely, and reduce the overall effort and cost of learning materials (e.g. textbooks) for higher education.

IT commodities, such as networks, telephone, wireless, email, antennas, data centre server support and, in some cases, desktops, should be centrally managed in close coordination with college IT managers. This would allow these managers to focus on college-specific applications that would help faculty adopt new technologies.

Reduce the financial burden on students and provide IT facilities that will accommodate more students.

Schools/faculty need to be informed about the new technologies and current IT services available for their use, as faculty are the change agents. This can be accomplished through the active involvement of the Technology Assisted Curriculum Center (TACC), Media Solutions, Instructional Media Services and other onsite local college support groups.

University authorities should embrace technology as a tool that helps them think more strategically, do things more efficiently and wisely, and reduce the overall cost of higher education.

University authorities should strengthen academic staff. The strength of universities lies in the ability of their staff to apply analytical skills to problems and issues in the world; universities should apply these analytical skills to their own practice, that is, to expanded research.

Stimulate participation in centralized university projects by reducing the demanding administrative requirements and bureaucracy associated with participation in the IT programme.


Power Factor Correction Circuits: Active Filters

Vijaya Vachak¹, Anula Khare¹, Amit Shrivatava¹

¹Electrical & Electronics Engineering Department, Oriental College of Technology, Bhopal, India

vijayavachak@gmail.com

Abstract: The increasing growth in the use of electronic equipment in recent years has resulted in a greater need to ensure that the line current harmonic content of any equipment connected to the ac mains is limited to meet regulatory standards. This requirement is usually satisfied by incorporating some form of Power Factor Correction (PFC) circuit to shape the input phase currents, so that they are sinusoidal in nature and in phase with the input phase voltages. There are multiple solutions in which the line current is sinusoidal. This paper provides a concise review of the most interesting passive and active power factor correction circuits for single-phase, low-power applications. The major advantages and disadvantages are highlighted.

Keywords: Converter, power factor correction, active power factor correction circuit, passive power factor correction circuit.
INTRODUCTION
Power factor is defined as the cosine of the angle between voltage and current in an ac circuit. There is generally a phase difference φ between voltage and current in an ac circuit; cos φ is called the power factor of the circuit. If the circuit is inductive, the current lags behind the voltage and the power factor is referred to as lagging. However, in a capacitive circuit, the current leads the voltage and the power factor is said to be leading.
In a circuit, for an input voltage V and a line current I:
VI cos φ is the active or real power in watts or kW;
VI sin φ is the reactive power in VAR or kVAR;
VI is the apparent power in VA or kVA.
Power factor gives a measure of how effective the real power utilization of the system is. It is a measure of the distortion of the line voltage and the line current and of the phase shift between them:
Power Factor = Real power (average) / Apparent power,
where the apparent power is defined as the product of the rms values of voltage and current.
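For a sinusoidal supply voltage this definition factors into a displacement term cos φ and a distortion term 1/sqrt(1 + THD²); the short sketch below uses that standard textbook relation (which the paper does not state explicitly) to show why a rectifier drawing high-THD current has a poor power factor even at zero phase shift.

```python
import math

# Power factor of a distorted line current under a sinusoidal voltage:
# PF = distortion factor * displacement factor
#    = (1 / sqrt(1 + THD^2)) * cos(phi)
def power_factor(thd, phi_deg):
    k_d = 1.0 / math.sqrt(1.0 + thd ** 2)      # distortion factor
    k_theta = math.cos(math.radians(phi_deg))  # displacement factor
    return k_d * k_theta

# A diode rectifier drawing current with 100% THD and no phase shift:
print(round(power_factor(thd=1.0, phi_deg=0.0), 2))  # 0.71, despite cos(phi) = 1
```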
Improvements in power factor and total harmonic distortion can be achieved by modifying the input stage of the diode rectifier filter capacitor circuit. Passive solutions can be used to achieve this objective for low-power applications. With a filter inductor connected in series with the input circuit, the current conduction angle of the single-phase full-wave rectifier is increased, leading to a higher power factor and lower input current distortion. With smaller values of inductance, these improvements are degraded.
However, the large size and weight of these elements, in addition to their inability to achieve unity power factor or to lower current distortion significantly, make passive power factor correction more suitable at lower power levels. The power factor correction (PFC) technique has been gaining increasing attention in the power electronics field in recent years. For the conventional single-phase diode rectifier, a large electrolytic filter capacitor is used to reduce dc voltage ripple. This capacitor draws pulsating current only when the input ac voltage is greater than the capacitor voltage; thus the THD is high and the power factor is poor. To reduce THD and improve power factor, passive filtering methods and active wave-shaping techniques have been explored. Reducing the input current harmonics to meet the agency standards implies improvement of the power factor as well. Several techniques for power factor correction and harmonic reduction have been reported, and a few of them have gained greater acceptance over the others. Commercial IC manufacturers have introduced control ICs in the market for the more popular techniques. In this paper, the developments in the field of single-phase PFC are reviewed; the hysteresis control method and the average current control method are analysed and simulated using MATLAB/SIMULINK software, and results near unity power factor are obtained.

POWER FACTOR CORRECTION CIRCUITS

The classification of single-phase PFC topologies is shown in Fig. 1. The diode bridge rectifier has a non-sinusoidal line current. This is because most loads require a supply voltage V2 with low ripple, which is obtained by using a correspondingly large capacitance of the output capacitor Cf. Consequently, the conduction intervals of the rectifier diodes are short and the line current consists of narrow pulses with an important harmonic content.
There are several methods to reduce the harmonic content of the line current in single-phase systems.


Figure 1 Classification of single-phase PFC topologies.

Active PFC

Active PFC circuits are based on switch-mode converter techniques and are designed to compensate for distortion as well as displacement of the input current waveform. They tend to be significantly more complex than passive approaches, but this complexity is becoming more manageable with the availability of specialized control ICs for implementing active PFC. Active PFC operates at frequencies higher than the line frequency, so that compensation of both distortion and displacement can occur within the timeframe of each line frequency cycle, resulting in corrected power factors of up to 0.99. Active approaches can be divided into two classes:
- Slow switching topologies
- High frequency topologies

Slow Switching Topologies

The slow switching approach can be thought of as a mix of passive and active techniques, both in complexity and performance. The most common implementation is shown in Figure 2 and includes the line frequency inductor L. The inductor is switched during the operating cycle, so this is considered an active approach, even though it operates at a relatively low frequency, typically twice the line frequency. This is a boost circuit in the sense that the AC zero crossing is sensed and used to close the switch that places the inductor across the AC input.

Consequently, the inductor current ramps up during the initial portion of the AC cycle. At time T1, the switch is opened so that the energy stored in the inductor can freewheel through the diodes to charge the capacitor. This energy transfer occurs from T1 to T2, and the input current drops as a result. From T2 to T3 the input current rises again because the line voltage is larger than the bulk capacitor voltage. From T3 to T4, the current reduces to zero. Consequently, the conduction angle as seen at the input is much longer than that of a non-compensated off-line rectifier, resulting in lower distortion and a power factor of up to 0.95.




Figure 2 Slow Switching Active PFC Circuit

This circuit is much simpler than the high frequency circuit to be discussed next, but has a few shortcomings in addition to its limited maximum power factor. Since the switching activity is usually in the 100 Hz to 500 Hz range, there can be audible noise associated with its operation. Also, a large and heavy line frequency inductor is required.

Advantages and Disadvantages of Slow Switching Active PFC

S.No.  Advantages                       Disadvantages
1.     Simple                           Line frequency components are large and heavy
2.     Cost effective at low power      Cannot completely correct nonlinear loads (95% maximum power factor)
3.     High efficiency (98% typical)    Audible noise
4.     Low EMC due to inductor

High Frequency Topologies

Conceptually, any of the popular basic converter topologies, including the flyback and buck, could be used as a PFC stage. We will focus, however, on the boost topology since it is the most popular implementation. There are several possible control techniques that can be used to implement a boost PFC converter, but the version shown in Figure 3 is a good general representation of the concept and will be used here for illustration.




Figure 3 High Frequency Active PFC Circuit

Almost all present-day boost PFC converters utilize a standard controller chip for the purposes of ease of design, reduced circuit complexity and cost savings. These ICs are available from many of the analog IC suppliers and greatly simplify the process of achieving a reliable high-performance circuit. In order for the converter to achieve power factor correction over the entire range of input line voltages, the converter (in the PFC circuit) must be designed so that the output voltage VOUT is greater than the peak of the input line voltage. Assuming a maximum line voltage of 240 Vrms and allowing for at least a 10% margin results in a nominal VOUT in the vicinity of 380 Vdc. VOUT is regulated via feedback to the operational amplifier U1. The sensed VIN will be in the form of a rectified sine wave, which accurately reflects the instantaneous value of the input AC voltage. This signal is used as an input to the multiplier along with the VOUT error voltage to formulate a voltage that is proportional to the desired current. This signal is then compared with the sensed actual converter current to form the error signal that drives the converter switch Q1. The result is that the input current waveform will track the AC input voltage waveform almost perfectly. By definition, this constitutes a power factor approaching unity. The active boost circuit will correct for deficiencies in both displacement and distortion.
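The 380 Vdc figure quoted above follows directly from this sizing rule; a quick arithmetic check:

```python
import math

# Boost PFC output-voltage sizing: VOUT must exceed the peak of the
# highest expected line voltage, here with the 10% margin quoted above.
v_line_rms_max = 240.0
v_peak = v_line_rms_max * math.sqrt(2)   # ~339 V peak
v_out_min = 1.10 * v_peak                # ~373 V -> a nominal 380 Vdc bus

print(f"peak input = {v_peak:.0f} V, minimum VOUT = {v_out_min:.0f} V")
```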

During operation of the converter, the duty cycle will vary greatly during each half cycle of the input AC waveform. The duty cycle will be longest when the instantaneous value of the AC is near zero and will be very short during the peaks of each half cycle. The voltage stress on the switch Q1 is equal to only VOUT, and the current levels are reasonable, resulting in an economical device selection. Since Q1 is referenced to ground, its control and driver circuits are relatively straightforward and easy to implement. The inductor L1 assists in reducing EMC from the converter and in suppressing some input transients from the power line. It is not large enough in value, however, to be considered as protection from start-up inrush current, which must be provided by other methods.

This circuit, of course, is much more complex than the other PFC techniques we have considered. However, there are some additional benefits to be derived from its use. The topology allows for inclusion of automatic range switching on the AC input at essentially no extra cost. Since this universal input function is now a requirement on the majority of power converters to allow for operation in all countries without any manual settings, this feature helps offset the cost of the additional componentry for the PFC function. Because the circuit operates at high frequencies, typically over 100 kHz, the components, including the inductor L1, tend to be small and light and much more conducive to automated manufacturing. The relatively high output voltage is actually an advantage for the down-converter following the boost stage. The current levels in the silicon and transformer of the down-converter are modest, resulting in lower cost devices. The efficiency of the active boost circuit is very high, approaching 95%. However, it will constitute a second conversion stage in some applications and can somewhat degrade the overall power conversion efficiency compared to a solution without PFC. Considering all the tradeoffs, the active boost is a very good solution for many applications, especially where the power level is high enough that the cost of the extra components is not a big percentage of the total cost.

Advantages and Disadvantages of High Frequency Active PFC

S.No.  Advantages                                    Disadvantages
1.     High power factor (0.99)                      Complexity
2.     Corrects both distortion and displacement     VOUT has to be > peak VIN (~380 Vdc)
3.     Circuit includes input voltage auto-ranging   Cost for low power applications
4.     Regulated VOUT                                Adds a 2nd conversion stage in some cases and decreases efficiency
5.     Small and light components                    No inrush current limiting
6.     Good EMC characteristics
7.     Absorbs some line transients
8.     Design supported by standard controller ICs
9.     Low stresses on switching devices



SINGLE PHASE BOOST CONVERTER TOPOLOGY:
Design of input filters for power factor improvement in buck converters is therefore complex and provides only limited improvement in input current quality. On the other hand, boost-type converters generate a dc voltage that is higher than the input ac voltage. However, the input current in these converters flows through the inductor and therefore can easily be actively wave-shaped with appropriate current-mode control. Moreover, boost converters provide a regulated dc output voltage at unity input power factor and reduced THD of the input ac current. These converters have found widespread use in various applications due to the advantages of high efficiency, high power density and inherent power quality improvement at the ac input and dc output. The preferred power circuit configuration of the single-phase boost converter [5-18] is the most popular and economical PFC converter, consisting of a diode bridge rectifier with a step-up chopper. The single-phase boost converter with unidirectional power flow shown in Figure 4 is realized by cascading a single-phase diode bridge rectifier with a boost chopper topology.


Figure 4 Boost converter with load
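The step-up behaviour relies on the standard ideal boost relation VOUT = VIN / (1 - D) in continuous conduction mode; a small sketch of this textbook relation (not derived in the paper; the numbers are illustrative):

```python
# Ideal boost converter in continuous conduction mode: Vout = Vin / (1 - D).
def boost_vout(v_in, duty):
    assert 0.0 <= duty < 1.0
    return v_in / (1.0 - duty)

# Boosting the crest of rectified 230 Vrms mains (~325 V) to a 380 V bus
# needs D ~ 0.145 at the peak; D approaches 1 near the line zero crossings.
print(round(boost_vout(325.0, 0.145)))  # ~380 V
```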

POWER FACTOR CORRECTION TECHNIQUES


In recent years, single-phase switch-mode AC/DC power converters have been increasingly used in industrial, commercial, residential, aerospace and military environments due to the advantages of high efficiency and smaller size and weight. However, these power converters draw pulsating input current from the utility line; this not only reduces the input power factor of the converters but also injects a significant amount of harmonic current into the utility line. To improve the power quality, various PFC schemes have been proposed, and harmonic norms such as IEC 1000-3-2 have been introduced. With the introduction of these harmonic norms, power supply manufacturers now have to follow them strictly to remedy the signal interference problem. The various methods of power factor correction can be classified as: (1) passive power factor correction techniques, and (2) active power factor correction techniques.
In passive power factor correction techniques, an LC filter is inserted between the AC mains line and the input port of the diode rectifier of the AC/DC converter. This technique is simple and rugged, but it is bulky and heavy, and the power factor cannot be very high [1]. It is therefore no longer adequate for the current harmonic norms; basically, it is applicable for power ratings lower than 25 W, and for higher power ratings it becomes bulky.
In the active power factor correction approach, a switched-mode power supply (SMPS) technique is used to shape the input current in phase with the input voltage; thus, the power factor can reach up to unity. Figure 5 shows the circuit diagram of the basic active power factor correction technique. Since the introduction of the regulation norm IEC 1000-3-2, the active power factor correction technique is nowadays widely used.




Figure 5 Circuit diagram of active filter
The active PFC techniques can be classified as:

(1) PWM power factor correction techniques
(2) Resonant power factor correction techniques
(3) Soft switching power factor correction techniques.

In the PWM power factor correction approach, the power switching device operates in pulse-width-modulation mode. In this technique the switching frequency of the active power switch is constant, but the turn-on and turn-off times are variable.

Different topologies of PWM techniques are as follows:

(1) Buck type
(2) Flyback type
(3) Boost type
(4) Cuk type


Passive PFC

Although most switch-mode power converters now use active PFC techniques, we will give a couple of examples of the simpler passive approach.

Figure 6 shows the input circuitry of a power supply with passive PFC. Note the line-voltage range switch connected to the center tap of the PFC inductor. In the 230 V position (switch open) both halves of the inductor winding are used and the rectifiers function as a full-wave bridge. In the 115 V position (switch closed) only the left half of the inductor and half of the rectifier bridge are used, placing the circuit in half-wave doubler mode. As in the case of the full-wave rectifier with 230 V ac input, this produces 325 V at the output of the rectifier. This 325 Vdc bus is, of course, unregulated and moves up and down with the input line voltage.



Figure 6 Passive PFC in a 250 W PC Power Supply

Advantages and Disadvantages of Passive PFC

S.No.  Advantages                            Disadvantages
1.     Simple                                Line frequency components are large and heavy
2.     Cost effective at low power           Cannot completely correct nonlinear loads
3.     Reliable and rugged                   AC range switching required
4.     Not a source of EMC                   Needs to be re-designed as load characteristics change
5.     Can assist with EMC filtering         Magnetics needed if load is capacitive
6.     Unity power factor for linear loads



DIFFERENT CONTROL TECHNIQUES

There are various types of control schemes for the improvement of power factor with tight output voltage regulation, viz.:

(a) Peak current control method
(b) Average current control method
(c) Borderline current control method
(d) Discontinuous current PWM control method
(e) Hysteresis control method

1: Peak Current Control Method

The switch is turned on at constant frequency by a clock signal, and is turned off when the sum of the positive ramp of the inductor current (i.e. the switch current) and an external ramp (compensating ramp) reaches the sinusoidal current reference. This reference is usually obtained by multiplying a scaled replica of the rectified line voltage vg by the output of the voltage error amplifier, which sets the current reference amplitude. In this way, the reference signal is naturally synchronized with, and always proportional to, the line voltage, which is the condition for obtaining unity power factor when the converter operates in normal conditions.
The objective of the inner loop is to control the state-space averaged inductor current, but in practice the instantaneous peak inductor current is the basis for control. The switch current during the ON time is equal to the inductor current. If the inductor ripple current is small, peak current control is nearly equivalent to average inductor current control. In a conventional switching power supply employing a buck-derived topology, the inductor is in the output; current-mode control is then output current control. On the other hand, in a high power factor pre-regulator using the boost topology, the inductor is in the input. Current-mode control then controls the input current, allowing it to be easily conformed to the desired sinusoidal wave shape.

The peak method of inductor current control functions by comparing the upslope of inductor current (or switch current) to a current
program level set by the outer loop. The comparator turns the power switch off when the instantaneous current reaches the desired
level.

Figure 7 Peak current mode control circuit and its waveforms
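As a rough behavioural illustration of this control law, the sketch below simulates a boost inductor under a constant-frequency clock and a peak comparator (a simplified model, not the paper's MATLAB/SIMULINK simulation; the compensating ramp is omitted and all component values are assumed):

```python
import numpy as np

# Behavioural sketch of peak current-mode control in a boost PFC stage:
# a constant-frequency clock turns the switch on; a comparator turns it
# off when the inductor current hits a sinusoidal reference derived from
# the rectified line. (Compensating ramp omitted; values assumed.)
f_line, f_sw = 50.0, 50e3            # line and switching frequencies
L, v_out, i_ref_pk = 1e-3, 380.0, 5.0
dt = 1.0 / (200 * f_sw)              # simulation time step

t, i_L, switch_on, next_clock = 0.0, 0.0, True, 0.0
peak_currents = []
while t < 1.0 / (2 * f_line):        # one half line cycle
    rect = abs(np.sin(2 * np.pi * f_line * t))
    v_g, i_ref = 325.0 * rect, i_ref_pk * rect
    if t >= next_clock:              # clock: turn the switch on
        switch_on, next_clock = True, next_clock + 1.0 / f_sw
    if switch_on and i_L >= i_ref:   # comparator: peak reached, turn off
        switch_on = False
        peak_currents.append(i_L)
    v_L = v_g if switch_on else v_g - v_out
    i_L = max(i_L + v_L / L * dt, 0.0)   # diode blocks negative current
    t += dt
# peak_currents now traces the sinusoidal reference envelope
```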

2: Average Current Control Method

Here the inductor current is sensed and filtered by a current error amplifier whose output drives a PWM modulator. In this way the inner current loop tends to minimize the error between the average input current and its reference. The latter is obtained in the same way as in peak current control. The converter works in CICM, so the same considerations made for peak current control apply.

3: Borderline Control Method

In this control approach the switch on-time is held constant during the line cycle and the switch is turned on when the inductor current falls to zero, so that the converter operates at the boundary between Continuous and Discontinuous Inductor Current Mode (CICM-DICM).

4: Discontinuous Current PWM Control Method

With this approach, the internal current loop is completely eliminated, so that the switch is operated at constant on-time and frequency. With the converter working in discontinuous conduction mode (DCM), this control technique allows unity power factor when used with converter topologies like the flyback and Cuk converters. Instead, with the boost PFC this technique causes some harmonic distortion in the line current.
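Method (e), hysteresis control, keeps the inductor current inside a band around the sinusoidal reference, so the switching frequency varies along the line cycle. A minimal sketch of this switching law under an assumed band width (a simplified illustration, not the paper's SIMULINK model):

```python
# Hysteresis current control: close the switch when the inductor current
# falls below (i_ref - delta); open it when it rises above (i_ref + delta).
# The current therefore tracks the sinusoidal reference within the band,
# at a switching frequency that varies over the line cycle.
def hysteresis_switch(i_L, i_ref, switch_on, delta=0.2):
    if i_L <= i_ref - delta:
        return True          # hit the lower band edge: turn on
    if i_L >= i_ref + delta:
        return False         # hit the upper band edge: turn off
    return switch_on         # inside the band: keep the previous state
```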

Conclusion
In this paper, both PFC techniques have been presented and the operating principle has been discussed in detail. It is shown that a high power factor can be obtained. Compared with traditional PFC, the proposed PFC has the following advantages: 1) lower device ratings, which reduces cost, EMI and switching losses; 2) no additional inductor is required, as the line impedance is enough in most cases; and 3) the proposed double hysteresis control reduces the switching frequency significantly, which leads to higher efficiency.

REFERENCES:
[1] O. Garcia, J. A. Cobos, R. Prieto, P. Alou, and J. Uceda, "Power factor correction: A survey," in Proc. IEEE Annu. Power Electronics Specialists Conf. (PESC'01), 2001, pp. 8-13.
[2] J. Itoh and K. Fujita, "Novel unity power factor circuits using zero-vector control for single phase input system," in Proc. IEEE Applied Power Electronics Conf. (APEC'99), 1999, pp. 1039-1045.
[3] F. Z. Peng, "Application issues and characteristics of active power filters," IEEE Ind. Applicat. Mag., vol. 4, pp. 21-30, Sep./Oct. 1998.
[4] C. Qiao and K. M. Smedley, "A topology survey of single-stage power factor corrector with a boost type input-current-shaper," in Proc. IEEE Applied Power Electronics Conf. (APEC'00), 2000, pp. 460-467.
[5] F. Z. Peng, H. Akagi, and A. Nabae, "A new approach to harmonic compensation in power systems - A combined system of shunt passive and series active filters," IEEE Trans. Ind. Applicat., vol. 26, no. 6, pp. 983-990, Nov./Dec. 1990.
[6] O. Garcia, J. A. Cobos, R. Prieto, P. Alou, and J. Uceda, "Single Phase Power Factor Correction: A Survey," IEEE Trans. Power Electron., vol. 18, no. 3, pp. 749-755, May 2003.
[7] Z. Yang and P. C. Sen, "Recent Developments in High Power Factor Switch Mode Converters," in Proc. IEEE Can. Conf. Elect. Comput. Eng., 1998, pp. 477-488.
[8] Haipeng Ren and Tamotsu Ninomiya, "The Overall Dynamics of Power-Factor-Correction Boost Converter," IEEE, 2005.
[9] Huai Wei and Issa Batarseh, "Comparison of Basic Converter Topologies for Power Factor Correction," IEEE, 1998.
[10] Zhen Z. Ye, Milan M. Jovanovic, and Brian T. Irving, "Digital Implementation of A Unity-Power-Factor Constant Frequency DCM Boost Converter," IEEE, 2005.
[11] A. Karaarslan and I. Iskender, "The Analysis of Ac-Dc Boost PFC Converter Based On Peak and Hysteresis Current Control Techniques," International Journal on Technical and Physical Problems of Engineering, June 2011.
[12] Wei-Hsin Liao, Shun-Chung Wang, and Yi-Hua Liu, "Generalized Simulation Model for a Switched-Mode Power Supply Design Course Using Matlab/Simulink," IEEE Transactions on Education, vol. 55, no. 1, February 2012.
[13] J. Lazar and S. Cuk, "Open Loop Control of a Unity Power Factor, Discontinuous Conduction Mode Boost Rectifier," in Proc. IEEE INTELEC, 1995, pp. 671-677.
[14] K. Taniguchi and Y. Nakaya, "Analysis and Improvement of Input Current Waveforms for Discontinuous-Mode Boost Converter with Unity Power Factor," in Proc. IEEE Power Convers. Conf., 1997, pp. 399-404.
[15] Kai Yao, Xinbo Ruan, Xiaojing Mao, and Zhihong Ye, "Variable-Duty-Cycle Control to Achieve High Input Power Factor for DCM Boost PFC Converter," IEEE Transactions on Industrial Electronics, vol. 58, no. 5, May 2011.
[16] G. J. Sussman and R. A. Stallman, "Heuristic techniques in computer aided circuit analysis," IEEE Trans. on Circuits and Systems, vol. 22, pp. 857-865, 1975.
[17] S. Rahman and F. C. Lee, "Nonlinear program based optimization of boost and buck-boost converter designs," PESC '81; IEEE Power Elec. Spec. Conf., pp. 180-191, 1981.
[18] S. Balachandran and F. C. Lee, "Algorithms for power converter design optimization," IEEE Trans. on Aerospace and Electronics Systems, vol. AES-17, no. 3, pp. 422-432, 1981.
[19] C. J. Wu, F. C. Lee, S. Balachandran, and H. L. Goin, "Design optimization for a half-bridge dc-dc converter," IEEE Trans. on Aerospace and Electronic Systems, vol. AES-18, no. 4, pp. 497-508, 1982.
[20] R. B. Ridley and F. C. Lee, "Practical nonlinear design optimization tool for power converter components," PESC '87; IEEE Power Elec. Spec. Conf., pp. 314-323, 1987.
[21] C. Zhou, R. B. Ridley, and F. C. Lee, "Design and analysis of a hysteretic boost power factor correction circuit," PESC '90; IEEE Power Elec. Spec. Conf., pp. 800-807, 1990.
[22] R. B. Ridley, C. Zhou, and F. C. Lee, "Application of nonlinear design optimization for power converter components," IEEE Trans. on Power Elec., vol. 5, no. 1, pp. 29-39, 1990.


Biological Effects of Yellow Laser-Induced Cell Survival: Structural DNA Damage Comparison under Ultraviolet Radiation Photocoagulation
AL-Timimi Zahra
College of Science for Women, Department of Laser Physics, Babylon University, Iraq
Email: zahraja2007@yahoo.com
Telephone: +964030241549

Abstract: In this work, we study the survival of lymphocyte cells irradiated with a yellow laser (578 nm) and the protection it offers against deoxyribonucleic acid (DNA) damage when the cells are subsequently irradiated with ultraviolet radiation (UVB, 320 nm) to induce DNA damage. A total of 200 blood samples were collected from healthy donors; donor blood volume varied from 5 ml to 7 ml, collected in heparin tubes. The samples were used to test the effect of radiation on cell viability using trypan blue exclusion. The experiments were carried out at 1, 24, 48 and 72 hours after irradiation to follow the repair process. DNA gel electrophoresis was performed on the samples to examine the effect of radiation by separating DNA molecules of varying sizes. The results show a decrease in the separation or breakage of DNA in the gel electrophoresis experiments, as the smear length is reduced significantly for UVB; the cell viability tests showed that the yellow laser can increase the survival of cells irradiated before UVB exposure, demonstrating 91%, 87%, 80% and 71% amelioration. The improvement of lymphocyte survival by the yellow laser can be attributed to the induction of endogenous radiation protection, possibly enzymes elicited by laser irradiation, which may reduce free radicals either by scavenging or by improved cell repair. We conclude that yellow laser irradiation can shield cells from radiation damage.
Keywords: Ultraviolet Radiation, Deoxyribonucleic Acid, Cell Survival, Viability, Yellow Laser.
INTRODUCTION
The term LASER stands for Light Amplification by Stimulated Emission of Radiation. A laser is a device that creates and amplifies an electromagnetic wave of a particular frequency through the process of stimulated emission [1, 2]. Lasers work by means of an optical resonator [3].
In a basic laser, a chamber known as an optical cavity is designed to internally reflect infrared, visible or ultraviolet waves so that they reinforce one another [4]. The cavity can contain gases, liquids or solids. The output of a laser is monochromatic and has high frequency stability [5]. A laser may have a high energy density, in which case it can be used as a focused laser in surgery, or it may be used for Low Level Laser Therapy (LLLT) with a power density of 1-5 mW/cm² [6].
LLLT is very safe, and its benefit extends to all organs and tissues of the body by promoting good cell function [7]; examples include the treatment of both acute and chronic pain [8] and the reduction of tissue fluid flow, swelling and heat [9]. It also speeds up bone repair through the stimulation of fibroblast and osteoblastic proliferation and increased blood circulation [10].
The laser can affect the cell because the light reacts with the cell: visible light is absorbed in the mitochondria while infrared is absorbed at the plasma membrane. This leads to changes in membrane permeability in a mammalian cell and to increased adenosine triphosphate (ATP) levels [11].
Ultraviolet (UV) light is a type of electromagnetic radiation with wavelengths shorter than visible light and longer than X-rays; the ultraviolet spectrum ranges from 100 to 400 nm with energies of 3-124 eV, and is divided into UVA, UVB and UVC. The absorption of UV by DNA can cause cancerous changes and kill cells by damaging their DNA [12, 13]. This happens because

chemical reactions, prone to deteriorate the genetic code, is triggered by the surplus energy deposited by the UV absorption[14]. The
excitation later on transfers to different components of the double helix, losing energy at every step[15]. It will manufacture many
varieties of the DNA damage single strand, double strand break and thymine dimer[16]. The defective piece of DNA will be repaired
by completely different mechanisms like excision repair and photoreactivation.[16] Up to now, it absolutely was not renowned
however. This excess energy was distributed among the bases of the DNA. It absolutely was solely postulated that every base absorbs
a photon separately.
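The photon energies involved follow from E = hc/λ, with hc ≈ 1240 eV·nm; a quick check for the two wavelengths used in this paper:

```python
# Photon energy E = h*c / lambda, with h*c ~ 1239.84 eV*nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

print(round(photon_energy_ev(320.0), 2))  # UVB used here: ~3.87 eV
print(round(photon_energy_ev(578.0), 2))  # yellow laser: ~2.14 eV, visible
```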
MATERIAL AND METHOD:
This study was undertaken during the period from January 2013 to January 2014 and was conducted in the pathology laboratory of Al-Hilla General Hospital. A study of the yellow laser's effect on cell damage and repair was performed on 200 blood samples obtained from healthy adult donors who came to the hospital for a medical checkup. The amount of blood drawn varied from 5 ml to 7 ml, collected in heparin tubes to prevent blood clotting, and the samples were checked for diseases such as HIV, malaria and viral hepatitis. Two types of irradiation (yellow laser and UVB) were applied to the cells.
Laser irradiation was done using 578 nm yellow light with an output power of 10 mW, operating in continuous wave with a spot diameter of 0.6 mm; the cells were irradiated for 10 min at a distance of 2 cm. The cells and the medium were rotated every minute to ensure that all cells received the same amount of laser irradiation.
For UVB irradiation, after laser irradiation was completed the cells were incubated for one hour prior to UVB exposure. The UVB (320 nm) source was operated in continuous wave with a power of 25 W and positioned at a distance of 10 cm from the cell suspension, with mixing of the cells once every 2 minutes to ensure homogeneous irradiation; the suspension was irradiated for 10 min.
In all experiments, the cells were kept on ice; this was done to retard cellular repair processes and thereby conserve DNA damage induced during irradiation.
Gel electrophoresis was performed on lymphocyte DNA after UVB irradiation; in these experiments we assessed DNA fragmentation after UVB exposure. Further experiments assessed DNA fragmentation in cells first irradiated with the laser, incubated for 1 h, and then exposed to UVB light.
DNA was extracted from 6×10⁴ cells and then tested for purity by spectrophotometry. For the cell culture procedure, before each experiment the cell concentration of lymphocytes was counted by microscopic examination using a Neubauer cell counting chamber. After irradiation, the cells were grown in suspension in 5 ml RPMI 1640 (Roswell Park Memorial Institute) medium with 10% FBS (Fetal Bovine Serum) and 0.4 mg/ml of PHA (Phytohemagglutinin) at 37 °C for 3 days without changing the medium. Differences were considered significant at the 0.05 level. ANOVA was also undertaken to test the changes between the mean values of total cell number, living cells, dead cells, and percentage survival.
RESULTS:
Gel electrophoresis indicates the difference in band length. Samples that were irradiated with UVB for 10 min gave a shorter smear. If the cells were irradiated for 10 min with the 578 nm yellow laser beam and incubated for 1 hour before UVB irradiation, the short smear became approximately a band similar to the control and the long smear became short.
In other experiments, the cells were irradiated for 10 min with UVB radiation prior to laser exposure and tested for DNA fragmentation; such experiments did not demonstrate a significant effect on DNA fragmentation. These results are shown in Figure 1.
Results of the survival test revealed that exposure to the laser alone had only a small effect on cell survival compared with the control, as seen in the trypan blue test. Likewise, when laser irradiation was administered after UVB irradiation, only a small change in cell survival was observed compared with cells exposed to UVB only (Table 1).

A significant improvement was observed when the effect of radiation during and after cell division was examined at 24 h, 48 h and 72 h, when cells were tested for viability.
For the viability test using trypan blue, the cells were centrifuged to remove the medium, re-suspended in PBS (Phosphate Buffered Saline, pH 7.4) and checked under a microscope with the phase contrast option; dead cells were discriminated visually from live cells by their darker appearance.
STATISTICAL ANALYSIS:
Results are expressed as mean ± SD. The change as a percentage of the mean was calculated and used for comparison between the results; the results were further analyzed for significance.
An unpaired Student's t-test was performed for comparison; improvement was observed when laser irradiation was administered one hour prior to UVB (91%, 87%, 80% and 71% for incubation periods of 1, 24, 48 and 72 hours, respectively).
In another experiment, cells irradiated for 10 min with UVB radiation prior to laser exposure were tested for cell viability. Such experiments did not show a significant effect on cell survival, giving an insignificant change between cells irradiated with UVB only and those irradiated with UVB + laser.
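As an illustration, the kind of unpaired t-test comparison described above can be reproduced as follows (the per-sample values are hypothetical, since the paper reports only means ± SD):

```python
from scipy import stats

# Hypothetical survival percentages for two groups (illustrative only).
uvb_only    = [40, 42, 38, 41, 39]   # % survival, UVB alone
laser_first = [70, 68, 72, 71, 69]   # % survival, laser 1 h before UVB

t_stat, p_value = stats.ttest_ind(uvb_only, laser_first)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```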
DISCUSSION:
The yellow laser provided protection against UVB light irradiation [13], in terms of both cell survival and DNA fragmentation [17, 18].
These results were observed in the cell survival of cells exposed to UVB, tested by trypan blue at the four post-irradiation periods (1, 24, 48 and 72 h), which resulted in 40%, 48%, 42% and 46% survival. Survival increased to 70%, 66%, 69% and 73% when cells were exposed to the yellow laser and left for 1 h of incubation at 37 °C prior to UVB.
To investigate and confirm these results we performed gel electrophoresis. These experiments showed an increased smear length in cells exposed to UVB (3 short smears and 11 long smears for 10 min of UVB exposure), while those exposed to the laser and given 1 h of incubation prior to UV exposure showed shorter smears, almost a band in length for 10 min of exposure, compared with the control.
These results indicate that the low-power yellow laser can improve cell survival in cells damaged by UV radiation. The mechanism of the yellow-laser-induced protection appears similar to the adaptive response; this follows because yellow laser irradiation has been reported to lead to the generation of singlet oxygen, and because the yellow laser has been observed to increase the activity of antioxidant enzymes.

CONCLUSION:
UVB has a strongly damaging effect on DNA, and this damage is dose dependent. These characteristics derive from the ability of the UVB photon's energy to transform chemical bonds in molecules, even without having sufficient energy to ionize them.
The yellow laser can improve cell survival significantly when given about 1 h prior to UVB irradiation. The improvement in cell survival may be attributed to the induction of endogenous radio-protectors, or the laser may provide some other form of protection.



REFERENCES:

1. Huang, Y.-Y., et al., Biphasic dose response in low level light therapy. Dose-Response, 2009. 7(4): p. 358-383.
2. Bruch, R., Low Level Laser Therapy (LLLT), in Nevada Health Forum. 2003.
3. Paschotta, R., Encyclopedia of laser physics and technology. Vol. 1. 2008: Wiley-VCH, Berlin.
4. Silfvast, W.T., Laser fundamentals. 2004: Cambridge University Press.
5. Khanin, I.A.k.I. and Y.I. Khanin, Fundamentals of laser dynamics. 2006: Cambridge Int Science Publishing.
6. Schindl, A., et al., Low-intensity laser therapy: a review. Journal of Investigative Medicine: the official publication of the American Federation for Clinical Research, 2000. 48(5): p. 312-326.
7. Smith, C.F., Method of performing laser therapy. 1995, Google Patents.
8. Tsivunchyk, O.S., Influence of low intensity laser radiation on different biological systems. 2004, Philipps-Universität Marburg.
9. Torres, C.S., et al., Does the use of laser photobiomodulation, bone morphogenetic proteins, and guided bone regeneration improve the outcome of autologous bone grafts? An in vivo study in a rodent model. Photomedicine and Laser Surgery, 2008. 26(4): p. 371-377.
10. Katona, E.V.A., et al., Low power red laser irradiation effects, as seen in metabolically intact and impaired human blood cells. Rom. J. Biophys, 2003. 13(1-4): p. 1-16.
11. Niemz, M.H., Laser-tissue interactions: fundamentals and applications. 2007: Springer.
12. Harm, W., Biological effects of ultraviolet radiation. 1980.
13. Sinha, R.P. and D.-P. Häder, UV-induced DNA damage and repair: a review. Photochemical & Photobiological Sciences, 2002. 1(4): p. 225-236.
14. Imlay, J.A. and S. Linn, DNA damage and oxygen radical toxicity. Science, 1988. 240(4857): p. 1302-1309.
15. Kulms, D. and T. Schwarz, Molecular mechanisms of UV-induced apoptosis. Photodermatology, Photoimmunology & Photomedicine, 2000. 16(5): p. 195-201.
16. Bohr, V.A. and G.L. Dianov, Oxidative DNA damage processing in nuclear and mitochondrial DNA. Biochimie, 1999. 81(1-2): p. 155-160.
17. Blodi, C.F., et al., Direct and feeder vessel photocoagulation of retinal angiomas with dye yellow laser. Ophthalmology, 1990. 97(6): p. 791-795.
18. Lee, H.I., et al., Clinicopathologic efficacy of copper bromide plus/yellow laser (578 nm with 511 nm) for treatment of melasma in Asian patients. Dermatologic Surgery, 36(6): p. 885-893.


Table 1: Trypan blue used to determine the number of viable cells after exposure to yellow laser, UVB, laser before UVB, and UVB before laser

Type of Radiation     Cell Survival     Cell Survival      Cell Survival      Cell Survival
                      after 1 h ± SD    after 24 h ± SD    after 48 h ± SD    after 72 h ± SD
Control               92% ± 1.9         86% ± 0.6          83% ± 0.6          80% ± 0.5
Yellow Laser          91% ± 0.6         87% ± 0.3          80% ± 0.9          71% ± 1.2
UVB                   40% ± 1.5         48% ± 3.1          42% ± 0.8          46% ± 1.4
Yellow Laser + UVB    70% ± 2.7         66% ± 5.0          69% ± 0.4          73% ± 1.7
UVB + Yellow Laser    59% ± 2.4         46% ± 2.5          48% ± 0.8          50% ± 1.6



Figure 1: Classification of manifestation of DNA in gel electrophoresis according to size.


Wireless Automation System Based on Accessible Display Design

Y Chaitanya¹, K. Jhansi Rani²

¹Research Scholar (M.Tech), Department of ECE, JNTU, Kakinada
²Asst. Professor, Department of ECE, JNTU, Kakinada

E-mail: chaitanyayeluri@hotmail.com

Abstract: With recent advancements in the electronics market, the development trend is leaning towards deploying intelligent, convenient and remote systems. The aim is to monitor and control devices from a central point. In this paper we present an economically feasible yet flexible and secure Accessible-Interface-based remote monitoring and control system applicable at different levels, including houses, offices, hospitals, multiplexes, etc. The system is designed to be economically feasible with a variety of devices to be controlled. This paper illustrates the design of an accessible interface for monitoring and controlling devices at remote places. The central monitoring and control system discussed in this paper controls the nodes at remote zones, and the system provides the flexibility to add nodes at a later time after setup. The communication mode used in this system is wireless and follows the Zigbee protocol.

Index Terms: Remote monitoring, user interface, remote zones, Zigbee.

INTRODUCTION
With rapid advancements in computer technology and the emergence of high performance embedded processors, the electronics market is undergoing a revolution. Embedded systems are now becoming a part of people's lives, from smart phones that help them stay connected to the digital world to embedded web servers that are capable of interconnecting digital devices. At the same time, the rapid development of internet technology has made internet-based remote monitoring increasingly common. With the growing need for automation of appliances and for maintaining a network that can monitor and control these appliances, it is a major challenge to develop a cost-effective and reliable system.
The system discussed in this paper provides a solution for embedded system access through an Accessible Interface, with which we can remotely access, monitor and maintain remote appliances conveniently. The solution is based on embedded technology. The system provides the user with an Accessible Interface through which the devices can be centrally monitored and controlled [1]. This intelligent system may be a luxury item for many people, but it is also necessary to invest effort in designing an accessible interface that provides flexibility of usage for users with disabilities.
The communication module is the key element of automation systems. Wireless communication is mostly preferred for the design of remote systems. There are various types of wireless communication media, of which Zigbee is the most prominent for remote control systems [2], [3]. Zigbee has become one of the promising technologies in home networks and is attracting major attention in the electronics market with its specification suite for networking, security and application software layers, using low-power, low-data-rate communication technology based on the IEEE 802.15.4 standard for personal area networks. This system uses Zigbee as the wireless communication module to transmit data to remote locations through the Accessible Interface, and it uses the S3C2440A (ARM920T) as its core, on which the Accessible Interface is implemented.


SYSTEM DESCRIPTION
Accessible User Interface
An accessible interface is a user interface that provides ease of access to persons with disabilities. The aim of an accessible interface is not the creation of exclusive spaces for people with disabilities, which could be a form of discrimination, but rather the development of systems that can be used by everyone. It is also possible to notice that works on user interfaces for automation systems are very specific, each addressing a particular type of impairment. The assistive housing project [6] focuses on elderly comfort, allowing home automation through a television set and remote control interface. Another work proposes a touch-screen-based interface for people with limitations in the upper and lower limbs [7]. Other solutions are based on image processing, such as a gesture-based control system [8] that controls home appliances through hand gesture recognition. The user interface developed in this system is designed using the Qt Creator Integrated Development Environment (IDE) for the Qtopia cross-development platform, deployed on an ARM platform (S3C2440A). The interface provides touch screen control through which remote devices can be accessed. The choice of a touch screen interface is based on two factors: the widespread use of touch-based mobiles and PDAs, and the consideration that most disabled persons have locomotion difficulty. The design adopted in this system is inspired by the quadrant approach proposed in [1]. The GUI layout offers simplified label buttons to identify the appliance to monitor and control. The status of the appliance can be known with the help of a labelled disable/enable button: if the device is in the on state the label is disabled, and vice versa. This offers flexibility to the user, with monitoring and control done at the same instant, as sketched below.
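A minimal sketch of that button behaviour (a hypothetical Python rendering of the logic only; the paper's interface itself is built with Qt/C++):

```python
# Hypothetical sketch of the enable/disable button convention described
# above: the button label's enabled state mirrors the appliance's state.
class ApplianceButton:
    def __init__(self, name):
        self.name = name
        self.device_on = False                # appliance starts switched off

    def pressed(self):
        self.device_on = not self.device_on   # one tap toggles the device
        label = "disabled" if self.device_on else "enabled"
        print(f"{self.name}: device {'ON' if self.device_on else 'OFF'}, label {label}")

fan = ApplianceButton("Zone 1 fan")
fan.pressed()   # Zone 1 fan: device ON, label disabled
fan.pressed()   # Zone 1 fan: device OFF, label enabled
```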
System Architecture
Here we propose the setup of a Central Monitoring and Control System that is standalone, cost-effective and reliable. The whole system setup is divided into three parts as follows:
- Remote zones
- Central Monitoring & Control System
- External peripherals connected to the Central Monitoring & Control System
The remote nodes are connected to the sensors and the devices to be controlled. The Central Monitoring System is the heart of the system: it feeds the input received from sensors to the Accessible Interface and sends commands to the remote zones to operate the peripherals, as shown in Fig. 1(a). An 8051 microcontroller is placed in each zone and acts as Remote Terminal Equipment (RTE); the external peripherals and sensors at the remote zone are connected to it. The Central Monitoring and Control System communicates through the Zigbee interface to control the peripherals at the remote location. If something undesirable happens at any zone, the Central Monitoring System alerts the administrator by means of a Short Message Service (SMS) message through the GSM module connected to it. A USB camera continuously monitors the remote zones and sends the frames to the central system for live monitoring.
The remote nodes have three types of executable elements, namely:
- Sensors
- Webcam
- Electrical equipment
Fig. 1(b) shows the elements at the remote zone. The sensors feed their input to the central monitoring and control system, webcams send the live feed as frames for monitoring, and the electrical equipment consists of the executable elements controlled from the central zone.



Fig. 1(a). System architecture of the central monitoring and control system

Fig. 1(b). Structure of a remote node
SYSTEM DESIGN
Hardware Design
The general hardware structure of the central monitoring and control system is based on an ARM processor (S3C2440A), as shown in Fig. 2. The choice of an ARM9 processor is due to its low cost and low power consumption. The board operates at a frequency of 400 MHz with 64 MB SDRAM and 64 MB NAND flash. The board operates from a 5 V external supply, with working voltages of 3.3 V, 1.8 V and 1.25 V generated on board. The S3C2440A CPU supports two boot modes: booting from NAND flash and booting from NOR flash. A UVC-compatible webcam is used for video surveillance. The Ethernet interface uses a Davicom DM9000 chip. The three serial ports are led out on CON1 (UART0), CON2 (UART1) and CON3 (UART2); UART0 is also connected to an RS232 level converter. There are four ADC channels connected to the GPIO header (CON4), and the board also has a four-wire resistive touch interface. Xbee 2.4 GHz RF modules operating under the Zigbee protocol are used as the communication medium. A SIM900 GSM module is used for message alerts, and an 8051 microcontroller is used as the Remote Terminal Equipment (RTE).


Fig. 2. Hardware structure of the system

Software Design
The software development process includes the establishment of the cross compiler, creation of the root file system, transplantation of the boot loader, porting of the embedded Linux kernel, and design of the Accessible Interface using the Qt Creator IDE. ARM Linux GCC is used as the cross compiler. The host system uses Ubuntu 12.04 for development and the target system uses embedded Linux as the operating system. Linux is used as it is an open-source kernel and can easily be adapted to the requirements of embedded systems. The software framework of the system is shown in Fig. 3.

Fig. 3. Software framework of the system

SYSTEM REALIZATION
Accessible Interface based Control System
The Accessible Interface allows the administrator to control devices at the remote zones. The Accessible Interface design is achieved using Qt Creator for the Qtopia cross-development platform. Qtopia is a graphical environment for Linux on mobile phones, handheld PCs, and multimedia and embedded devices. Qt Creator uses the C++ compiler from the GNU Compiler Collection on Linux, and the device drivers are written in C++.
The central monitoring and control system has three UART interfaces. UART0 uses RS232 for communication with the host computer. UART1 uses TTL to connect with the Xbee 2.4 GHz RF module; the Xbee transceivers use the Zigbee protocol to communicate with the remote microcontrollers. UART2 uses TTL to connect with the SIM900 GSM/GPRS module. A MAX232 IC is used as the level converter from TTL to RS232, as shown in Fig. 6. The central monitoring and control system has a touch LCD interface, as shown in Fig. 5; the device drivers are written for the LCD interface on the Qtopia platform. The user interface allows the administrator to select a remote zone and access its devices with ease.
At the remote zone, an 8051 microcontroller is used to control peripherals and to feed the input from the sensors to the central monitoring and control system. Depending on the sensor feedback obtained, a warning or alert message is sent to the administrator using the SIM900 module.
A remote zone can be monitored with the help of a USB hub that connects the various USB cameras placed at each remote zone. The zones can be monitored by connecting to the USB hub. V4L (Video4Linux) drivers installed in the central monitoring and control system allow us to capture frames from the USB cameras and display them on the LCD interface, as sketched below.
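A minimal sketch of that capture path (illustrative only: OpenCV's V4L2 backend stands in for the system's own driver code, and the device index and frame size are assumptions):

```python
import cv2

# Grab one frame from a USB camera through the V4L2 backend and report
# its size; display code would hand this frame to the LCD widget.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame = cap.read()
if ok:
    print("captured frame:", frame.shape)   # e.g. (240, 320, 3)
cap.release()
```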

Zigbee Addressing
The Xbee 2.4 GHz RF modules used in this system operate within the ISM 2.4 GHz frequency band and are pin-for-pin compatible with each other. They have an indoor range of about 30 m. The Xbee module can be viewed as a modem, as it mainly uses a UART to communicate with the main board. Xbee modules communicate through AT command mode. Each module has a unique device ID through which a single communication channel is established within the ISM 2.4 GHz frequency band. The baud rate is set to 9600.
A Zigbee module acts as the network coordinator connecting the remote nodes to the central system, but before
communication can begin the Zigbee network must be initialized. The initialization procedure between the Zigbee sensor
nodes and the Zigbee coordinator is shown in Fig. 4.
Short Message Service
The SIM900 GSM/GPRS module is used to send alerts to the administrator. The SIM900 operates in the 900 MHz band,
and its baud rate is set to 115200. The module has an AT command interface that provides GSM calls, short
messages and GPRS network service, and a SIM card slot into which any GSM SIM card can be inserted. GSM technology
would also allow the user to control appliances through GSM commands [3]; this has not been implemented in
this system but could be added in future automation systems.
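A hedged sketch of how an alert might be issued through that AT interface is shown below, using the standard text-mode commands AT+CMGF and AT+CMGS; the phone number is a placeholder and response handling is reduced to fixed delays.

    // Sketch of a text-mode SMS alert via the SIM900's AT interface.
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    void send_sms_alert(int fd, const char *number, const char *text) {
        const char *mode = "AT+CMGF=1\r";           // select text mode
        write(fd, mode, std::strlen(mode));
        sleep(1);

        char cmd[64];
        std::snprintf(cmd, sizeof(cmd), "AT+CMGS=\"%s\"\r", number);
        write(fd, cmd, std::strlen(cmd));
        sleep(1);                                    // wait for the "> " prompt

        write(fd, text, std::strlen(text));
        char ctrl_z = 0x1A;                          // Ctrl-Z terminates the message
        write(fd, &ctrl_z, 1);
    }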




Fig. 4. Zigbee information processing flow chart

INTERFACE DESIGN DIAGRAMS





Fig. 5. Accessible touch interface on Qtopia platform

Fig. 6. External peripheral interface

CONCLUSION
The system described in this paper is economically feasible and flexible. The accessible interface is
simple and easy to use and requires no prior knowledge to operate. It addresses the primary objective of
accessible interfaces and also serves commercial applications. With slight modifications it can easily be
applied in fields such as home automation, industrial control and intelligent appliances, so it has a wide
range of application prospects and great promotional value. The functionality of the system could be further extended

with embedded web server technology, which would make the system accessible over the Internet through a
web browser.

REFERENCES:
[1] L.C.P. Costa, I.K. Ficheman, A.G.D. Correa, R.D. Lopes, M.K. Zuffo, "Accessible display design to control home area networks," IEEE Transactions on Consumer Electronics, vol. 59, no. 2, May 2013.
[2] Peng Jia, Wang Meiling, "Wireless remote monitoring and control system based on Zigbee and web," 25th Chinese Control and Decision Conference (CCDC).
[3] Carelin Felix, I. Jacob Raglend, "Home automation using GSM," Proceedings of the 2011 International Conference on Signal Processing, Communication, Computing and Networking Technologies (ICSCCN 2011).
[4] P. Soumya Sunny, M. Roopa, "Data acquisition and control system using embedded web server," International Journal of Engineering Trends and Technology, vol. 3, issue 3, 2012.
[5] U. Sarojini Devi, R. Malikarjun, "ARM based wireless sensor networks for temperature measurement," International Journal of Mathematics and Computer Research, vol. 1, issue 2, February 2013.
[6] M. Ghorbel, F. Arab, M. Monhtari, "Assistive housing: case study in a residence for elderly people," IEEE Second International Conference on Pervasive Computing Technologies for Healthcare, pp. 140-143, Jan.-Feb. 2008.
[7] M. Valles, F. Manso, M.T. Arredondo, F. Del Pozo, "Multimodal environmental control system for elderly and disabled people," 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 516-517, Oct.-Nov. 1996.
[8] J. Do, H. Jang, S.H. Jung, J. Jung, Z. Bien, "Soft remote control system in the intelligent sweet home," IEEE International Conference on Intelligent Robots and Systems, pp. 3984-3989, Aug. 2005.

A Hybrid Dynamic Clustering Based Approach for Malicious Nodes Detection
in AODV Based MANETs
Alka Sachdeva 1, Ameeta 2, Babita 3
1 Department of ECE, PCET, Lalru Mandi, PTU Jalandhar, Punjab
2 Asst. Professor, Department of ECE, PCET, Lalru Mandi, PTU Jalandhar, Punjab
3 Asst. Professor, Computer Science and Engineering Department, PCET, Lalru Mandi, PTU Jalandhar, Punjab
E-mail- alkasachdevapcet@gmail.com

Abstract:- A mobile ad hoc network (MANET) is a continuously self-configuring, infrastructure-less network
of mobile devices connected without wires. MANETs are extensively used these days for communication, and
variants such as the Zone Routing Protocol under varying transmission range and mobility are used across
communication networks with different standards and computing techniques. The size of MANETs is increasing
day by day and their expansion is inevitable given the high penetration and popularity of mobile applications,
but at the same time they are prone to many attacks and network failures due to technical vulnerabilities of the
network. This paper discusses the detection and isolation of malicious nodes from the main network under DOS
attack using a Watchdog approach, since a mechanism is needed to overcome such scenarios. Simulation results
show improvements in packet loss ratio, throughput, packet delivery ratio and other parameters once malicious
nodes are detected, ensuring proper and smooth functioning of MANETs.
I. INTRODUCTION
1.1 Mobile ad-hoc Networks:- An ad hoc network is the cooperative engagement of a collection of mobile
nodes without the required intervention of any centralized access point or existing infrastructure. Ad hoc
networking has commercial uses; however, its main applications lie in military, tactical and other security-
sensitive operations, where secure routing is an important issue. Most of the protocols proposed for secure
routing are either proactive or reactive. In MANETs, mobility is the major issue, and routing in a mobile
ad hoc network faces several problems such as asymmetric links, routing overhead, dynamic topology and
interference.
2. SECURITY GOALS:- Mobile ad-hoc networks (MANETs) are prone to a number of security threats.
There are five major security goals that need to be addressed in order to maintain a reliable and secure ad-hoc
network environment; the mechanisms used to detect, prevent and respond to security attacks address mainly
the following:

(i) Confidentiality: Protection of any information from being exposed to unintended entities. In ad hoc
networks this is more difficult to achieve because intermediate nodes receive the packets for other recipients,
so they can easily eavesdrop on the information being routed.
(ii) Availability: Services should be available whenever required. There should be an assurance of survivability
despite a Denial of Service (DOS) attack. At the physical and media access control layers an attacker can use
jamming techniques to interfere with communication on the physical channel; at the network layer the attacker
can disrupt the routing protocol; at higher layers the attacker could bring down high-level services.
(iii) Authentication: Assurance that an entity of concern, or the origin of a communication, is what it claims to
be. Without authentication an attacker could impersonate a node, gaining unauthorized access to resources
and sensitive information and interfering with the operation of other nodes.
(iv) Integrity: The message being transmitted is never altered.
(v) Non-repudiation: Ensures that sending and receiving parties can never deny ever sending or receiving the
message.
2.1 DENIAL OF SERVICE ATTACK:- In a denial of service (DoS) attack, the attacker injects a large amount
of junk packets into the network. These packets consume a significant portion of network resources and
introduce wireless channel contention and network contention in the MANET. The routing table overflow
attack and the sleep deprivation attack are two other types of DoS attack: in the routing table overflow attack
an attacker attempts to create routes to nonexistent nodes, while the sleep deprivation attack aims to drain the
battery of a victim node. For example, consider Fig. 3. Assume that a shortest path exists from S to X, that C
and X cannot hear each other, that nodes B and C cannot hear each other, and that M is a malicious node
attempting a denial of service attack. Suppose S wishes to communicate with X and has an unexpired route to
X in its route cache. S transmits a data packet toward X with the source route S --> A --> B --> M --> C --> D
--> X contained in the packet's header.
2.2 ROUTING ATTACKS ON AODV PROTOCOL
We can classify routing attacks on AODV into four classes:
1) Route Disruption: A malicious node either destroys an existing route or prevents a new route from being
established.
2) Route Invasion: A malicious node adds itself into a route between the source and destination nodes.
3) Node Isolation: A given node is prevented from communicating with any other node. It differs from route
disruption in that route disruption targets a route between two given nodes, while node isolation targets all
possible routes to or from a given node.

4) Resource Consumption: The communication bandwidth in the network or the storage space at individual
nodes is consumed.
B. Typical Attacks
In the following, we give a short description of some typical routing attacks on AODV.
1) Neighbour Attack: When an intermediate node receives a RREQ/RREP packet, it adds its own ID in the
packet before forwarding it to the next node. A malicious node simply forwards the packet without adding its
ID. This causes two nodes that are not within communication range of each other to believe that they are
neighbours, resulting in a disrupted route. Both the Neighbour and Black hole attacks prevent data from being
delivered to the destination node, but in the Neighbour attack the malicious node does not catch and capture
the data packets from the source node.
2) Black hole Attack: In the first variant of this attack, a malicious node waits for its neighbours to initiate a
route discovery process. Once the malicious node receives a broadcast RREQ packet, it immediately sends a
false RREP packet with a greater sequence number, so the source node assumes that the malicious node has a
fresh route towards the destination and ignores RREP packets received from other nodes. The malicious node
thus draws all routes towards itself and does not forward any packet. In the second variant, once a malicious
node receives a broadcast RREQ packet, it intentionally increases the broadcast ID and source sequence
number and rebroadcasts the modified packet with a spoofed source IP address.
3) Rushing Attack: Each intermediate node typically forwards only one RREQ packet per route discovery. A
malicious node exploits this property by forwarding RREQ packets as quickly as possible; as a result, the
source node will not be able to discover any valid route that does not include the malicious node. On-demand
routing protocols (e.g., AODV) introduce a delay between receiving a RREQ packet and forwarding it, in
order to avoid collisions of RREQ packets, so a malicious node that ignores this delay will generally be
preferred over similarly situated benign nodes.
4) RREQ Flooding Attack: A malicious node sends a huge number of RREQ packets in an attempt to consume
the network resources. The source IP address is forged to that of a randomly selected node and the broadcast
ID is intentionally increased.
3. PROPOSED METHOD AND OBJECTIVE:- Our proposed method is primarily based on detecting DOS
attacks and isolating the malicious nodes from the network, so that the remaining genuine nodes can operate
undisturbed. The proposed mechanism uses the AODV protocol for routing. We have designed a trust-based
packet forwarding scheme that detects and isolates malicious nodes using routing-layer information.

The scheme uses trust values to govern packet forwarding by maintaining a trust counter for each node. A
node is punished or rewarded by decreasing or increasing its trust counter, and if the counter falls below a
trust threshold the corresponding intermediate node is marked as malicious. The scheme requires no
centralized infrastructure: each intermediate node marks the packets it forwards by adding its hash value and
then forwards them towards the destination. The destination node verifies the hash value and updates the trust
counter, incrementing it if the hash is verified and decrementing it otherwise. This presents a solution to node
selfishness without requiring any pre-deployed infrastructure and is independent of the underlying routing
protocol.
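A minimal sketch of this bookkeeping, with assumed reward, penalty and threshold values (the paper does not give concrete numbers) and with hash verification abstracted away, might look as follows:

    // Illustrative trust-counter table kept at the destination node.
    #include <unordered_map>

    class TrustTable {
        std::unordered_map<int, double> trust_;    // node id -> trust counter
        static constexpr double kThreshold = 0.5;  // assumed trust threshold
    public:
        // Called after checking the hash mark a forwarder added to a packet.
        void update(int node, bool hash_ok) {
            double &t = trust_.try_emplace(node, 1.0).first->second;
            t += hash_ok ? 0.1 : -0.2;             // reward or punish the node
        }
        bool is_malicious(int node) const {
            auto it = trust_.find(node);
            return it != trust_.end() && it->second < kThreshold;
        }
    };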
4. PERFORMANCE EVALUATION:-

Quality of Service based performance metrics are designed for the detection of malicious nodes in the
simulation environment. These parameters are as follows:
4.1 Throughput
Throughput, or network throughput, is the average rate of successful message delivery over a communication
channel. The data may be delivered over a physical or logical link, or pass through a certain network node.
Throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per second or
data packets per time slot; it is essentially synonymous with digital bandwidth consumption.

4.2 PDR

It is the ratio of the number of packets actually delivered to the destination (without duplication) to the number
of packets expected to be received. This number reflects the effectiveness of a protocol in delivering data to
the intended receiver within the network.
PDR = total number of packets received / total number of packets sent

4.3 ENERGY CONSUMPTION
Energy consumption is the total energy consumed in the network, measured in watt-hours (Wh).
Figure 1 reflects how the number of message packets is affected when an attack is introduced: the graph shows
how many (control message) packets were lost under a given number of attacks.
4.4 NUMBER OF COLLISIONS
In a network, a collision occurs when two or more nodes want to transmit data at the same time. When a
packet collision occurs, the packet is either discarded or returned to its originating station and retransmitted
after a time-based backoff to avoid further collisions. Collisions can result in loss of packet integrity and can
impede network performance; this metric counts such collisions in the network.

Figure 7 shows that there are fewer checksum errors before the attack and an increase in checksum errors
after it.
4.5 PLR
Packet loss ratio = number of lost packets / (number of lost packets + number of packets received successfully)
The packet loss ratio helps determine whether a slowness issue stems from the connection to the nodes or from
a different problem. Poor communication links can have a number of causes, so the packet loss ratio formula is
one part of the detection process.
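As a worked example of these metrics, the sketch below computes PDR (Sec. 4.2), PLR (Sec. 4.5) and throughput from trace counts; the packet counts, packet size and simulation time used here are placeholders, not results from the paper.

    // Illustrative QoS metric computation from raw trace counts.
    #include <cstdio>

    int main() {
        double sent = 1000, received = 940, lost = sent - received;
        double sim_time_s = 200;                   // assumed simulation window
        double bits_received = received * 512 * 8; // assumed 512-byte packets

        double pdr        = received / sent;
        double plr        = lost / (lost + received);
        double throughput = bits_received / sim_time_s;   // bits per second

        std::printf("PDR=%.2f%% PLR=%.2f%% throughput=%.0f bps\n",
                    pdr * 100, plr * 100, throughput);
        return 0;
    }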



Simulation results in figure 3 make clear that under attack there is a reduction in system throughput with
respect to message arrival time.
4.6 Routing overhead (RO)
Routing overhead is the ratio of routing-related transmissions [Route REQuest (RREQ), Route REPly (RREP),
Route ERRor (RERR), ACK, S-ACK, and MRA] to data transmissions. During the simulation, the source node
broadcasts RREQ messages to all the neighbours within its communication range.

Figure 6 shows that when the network is running smoothly, without any attack, packets are sent and received
normally, giving a packet delivery ratio above 90% (visible in the IDS session); when an attack occurs there
is a sudden dip, with throughput and PDR about 10% lower than normal.
5. CONCLUSION AND FUTURE SCOPE

Our simulations have examined the problem of DOS attacks in MANETs using the proposed approach for
security analysis. The results confirm that DOS attacks can be detected easily and efficiently using the
AODV-based reputation protocol. As future work, a fuzzy logic system could be designed for multi-node
optimization and enhanced reliability and accuracy, and this research could be developed into a mathematical
model for the detection of many types of attacks.


Impact of Network Density on ZRP and Malicious Nodes Detection under
Varying Transmission Range and Mobility of Nodes in MANETs
Richa Arora 1, Dr. Swati Sharma 2
1 Department of ECE, Universal Group of Colleges, Lalru Mandi, PTU Jalandhar, Punjab
2 Asst. Professor, Department of ECE, Universal Group of Colleges, Lalru Mandi, PTU Jalandhar, Punjab
E-mail- richaarora068@gmail.com

Abstract: A mobile ad hoc network (MANET) is a continuously self-configuring, infrastructure-less network of mobile devices
connected without wires. MANETs are extensively used these days for communication, and variants of the Zone Routing Protocol
under varying transmission range and mobility are used across communication networks with different standards and computing
techniques. The size of MANETs is increasing day by day and their expansion is inevitable given the high penetration and
popularity of mobile applications, but at the same time they are prone to many attacks and network failures due to technical
vulnerabilities of the network. This paper discusses the impact of network density on ZRP and the detection of malicious nodes
under varying transmission range and node mobility in MANETs, for which a suitable mechanism is needed. Simulation results
show improvements in packet loss ratio, throughput, packet delivery ratio and other parameters when malicious nodes are detected
in the zone routing protocol under varying transmission range and mobility, ensuring proper and smooth functioning of MANETs.
Keywords: MANET, DOS, Mobile Ad-hoc Network, AODV Protocol, PDR, PLR, RO, Throughput
I. INTRODUCTION
1.1 Mobile ad-hoc Networks:- An ad hoc network is the cooperative engagement of a collection of mobile nodes without the required
intervention of any centralized access point or existing infrastructure. Ad hoc networking has commercial uses; however, its main
applications lie in military, tactical and other security-sensitive operations, where secure routing is an important issue.
Most of the protocols proposed for secure routing are either proactive or reactive. In MANETs, mobility is the major issue, and there
are several problems in routing with a mobile ad hoc network such as asymmetric links, routing overhead, dynamic topology and interference.
1.2 Zone Routing Protocol:- ZRP is an example of a hybrid reactive/proactive routing protocol based on a parameter called the routing
zone. ZRP is proposed to reduce the control overhead of proactive routing protocols and to decrease the latency caused by route
discovery in reactive routing protocols. In ZRP, a node proactively maintains routes to destinations within a local neighbourhood,
which is referred to as its routing zone.
2. SECURITY GOALS:- Mobile ad-hoc networks (MANETs) are prone to a number of security threats.
There are five major security goals that need to be addressed in order to maintain a reliable and secure ad-hoc network environment;
the mechanisms used to detect, prevent and respond to security attacks address mainly the following:
(i) Confidentiality: Protection of any information from being exposed to unintended entities. In ad hoc networks this is more difficult
to achieve because intermediate nodes receive the packets for other recipients, so they can easily eavesdrop on the information being
routed.

(ii) Availability: Services should be available whenever required. There should be an assurance of survivability despite a Denial of
Service (DOS) attack. At the physical and media access control layers an attacker can use jamming techniques to interfere with
communication on the physical channel; at the network layer the attacker can disrupt the routing protocol; at higher layers the attacker
could bring down high-level services.
(iii) Authentication: Assurance that an entity of concern, or the origin of a communication, is what it claims to be. Without
authentication an attacker could impersonate a node, gaining unauthorized access to resources and sensitive information and interfering
with the operation of other nodes.
(iv) Integrity: The message being transmitted is never altered.
(v) Non-repudiation: Ensures that sending and receiving parties can never deny ever sending or receiving the message.
3. PROPOSED METHOD AND OBJECTIVE
Our proposed method is primarily based on detecting and isolating malicious nodes from the zone in a network, so that the
remaining genuine nodes can operate undisturbed. Our ZRP-based approach integrates two main features: varying the transmission
range and mobility of nodes, and detecting malicious nodes in MANETs by utilizing different kinds of node centrality, even in
highly mobile and disconnection-prone scenarios.
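The paper does not specify which centrality measures are used; as one minimal, assumed example, degree centrality can be computed from each node's neighbour table, and nodes whose forwarding behaviour deviates sharply from peers of similar centrality can then be flagged for closer inspection.

    // Sketch: degree centrality from an adjacency list (data layout assumed).
    #include <cstddef>
    #include <vector>

    std::vector<int> degree_centrality(const std::vector<std::vector<int>> &adj) {
        std::vector<int> degree(adj.size());
        for (std::size_t v = 0; v < adj.size(); ++v)
            degree[v] = static_cast<int>(adj[v].size());  // neighbours in range
        return degree;
    }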
4. QoS BASED PERFORMANCE METRICS:-
Quality of Service based performance metrics are designed for the detection of malicious nodes in the simulation environment. These
parameters are as follows:
4.1 Throughput
4.2 PDR
PDR = total number of packets received / total number of packets sent
4.3 ENERGY CONSUMPTION
4.4 NUMBER OF COLLISIONS
In a network, a collision occurs when two or more nodes want to transmit data at the same time. When a packet collision occurs,
the packet is either discarded or returned to its originating station and retransmitted after a time-based backoff to avoid collisions.
Collisions can result in loss of packet integrity and can impede network performance; this metric counts such collisions in the network.
4.5 PLR
Packet loss ratio = number of lost packets / (number of lost packets + number of packets received successfully)
4.6 Node placement strategy:- Node placement is random over 100 nodes.
4.7 Data rate:- The data rate is set to 2 Mbps.
4.8 Routing layer protocol:- The routing layer protocol is ZRP.
5. RESULTS AND DISCUSSION
Table 1 lists the simulation parameters for the network density study with varying mobility rate and transmission range, together
with the calculated throughput and packet delivery ratio, which show improvement. PDR is calculated after bifurcating the genuine
nodes in the network to ensure the quality of service.

As the network runs smoothly, packets are delivered at close to the maximum value, between 90-100%, but with the introduction of
an attack the delivery ratio drops to about 10%. Similar results are obtained for packets lost, packets received, throughput, etc.
When the network is running smoothly, without any attack, packets are sent and received normally, giving a packet delivery ratio
above 90% (visible in the IDS session); when an attack occurs there is a sudden dip, with throughput and PDR about 10% lower
than normal.
Figure 2. Mobility Rate vs PDR (x-axis: mobility rate; y-axis: PDR)

Figure 2 shows that as the zone radius increases, PDR decreases.

Parameter               Value
Routing Load            10.89
Average Delay           0.168
Actual Start Time       87.16 sec
Supposed Time           0.0 sec
Simulation Time         200 sec
Throughput              715.5
Packet Send             30446
Packet Received         29952
Packet Delivery Ratio   98.33%

Table 1: Simulation parameters and results




Figure 3 likewise reflects how the number of message packets is affected when an attack is introduced: the graph shows how
many (control message) packets were lost before and after the attack.
Figure 4. Mobility Rate vs End-to-End Average Delay (x-axis: mobility rate; y-axis: average delay)
Figure 4 shows the impact of mobility rate on average delay: after the attack there is an increase in average delay.


Figure 5 shows the throughput, average delay and packet delivery ratio under the zone routing protocol.








Figure 6. Mobility Rate vs Routing Overhead
6. CONCLUSION AND FUTURE SCOPE
Simulation results have shown that mobility and transmission range do have an impact on the zone routing protocol: as the zone
size increases, the delay keeps reducing. We have considered the problem of DOS attacks in MANETs and proposed our simulated
approach for security in MANETs. Our results confirm that DOS attacks can be detected more easily and efficiently than with the
AODV-based reputation protocol. As future scope of this research work, a mathematical model can be developed for the detection
of many types of attacks.


Low Power Design of Johnson Counter Using DDFF Featuring Efficient
Embedded Logic and Clock Gating
Chandra shekhar kotikalapudi, Smt. P. Pushpalatha,
Dept. of Electronics and communications engineering
University College of engineering, JNTUK, Kakinada, India
E-Mail-Sekhar2790@gmail.com


Abstract: In this paper, we propose a power-efficient design of a 4-bit up-down Johnson counter using a Dual Dynamic Node
Pulsed Flip-flop (DDFF) featuring an efficient embedded logic module (DDFF-ELM), with clock gating incorporated in order
to reduce the power dissipation further. The proposed design employs a DDFF which reduces power consumption mainly by
eliminating the large capacitance present at the precharge node of several existing flip-flop designs, driving the output
pull-up and pull-down transistors separately through a split dynamic node structure. This reduces power by up to 40% compared to
conventional flip-flop architectures. The embedded logic module is an efficient way to incorporate complex logic
functions into the flip-flop, and clock gating reduces the power consumed by unnecessary clock activity by up to 50%.
Key words: Johnson counter, DDFF, DDFF-ELM, clock gating, XCFF, CDMFF, low power.
I. INTRODUCTION
Low-power, high-speed and area-efficient circuit design is the major concern in present-day VLSI design. Designing a low-
power circuit involves a proper architecture of the sequential and combinational circuits used in the design, using
minimum CMOS logic gates and eliminating redundant operations through efficient techniques. Extensive work has been
carried out over the past few decades to improve flip-flop architectures and reduce their power consumption. With steady
growth in clock frequency and chip capacity, the power dissipation of CMOS designs has been increasing tremendously,
creating the need for new techniques to reduce power dissipation in VLSI design.
High speed can be achieved in synchronous systems by using advanced pipelining techniques. Flip-flops are used as the
basic storage elements in all kinds of digital circuits. Flip-flop design styles can be classified as static and dynamic.
Static flip-flops are those which preserve their stored values even if the clock is stopped.
Many static flip-flop topologies have been proposed in the past; the classic transmission-gate latch based master-slave
flip-flop (TGMS) and the PowerPC 603 master-slave latch are examples of the static design style. Dynamic flip-flops can
generally achieve higher speed and lower power consumption, and can be either purely dynamic or pseudo-dynamic
structures. The hybrid latch flip-flop (HLFF) and the semi-dynamic flip-flop (SDFF) are examples of pseudo-dynamic
structures, while the conditional data mapping flip-flop (CDMFF) and the cross charge control flip-flop (XCFF) are purely
dynamic. To eliminate the drawbacks of the existing flip-flop architectures, a dual dynamic node hybrid flip-flop
(DDFF) has been developed, and to reduce the pipeline overhead an embedded logic module (DDFF-ELM) has been
developed; together these eliminate the drawbacks present in dynamic flip-flop architectures.
In this paper, we propose a power-efficient design of a 4-bit up-down Johnson counter using the DDFF-ELM and clock
gating. The design using the DDFF-ELM reduces power consumption by up to 27% compared to conventional flip-flop
architectures, and the counter designed with clock gating achieves a maximum power reduction of up to 40%.
In section II, a brief introduction to conventional flip-flop architectures, their disadvantages and the challenges in
achieving high performance is given. Section III provides the details of the 4-bit synchronous up-down Johnson counter.
Section IV provides the details of the proposed 4-bit Johnson counter using clock

gating. Section V provides the simulation results of the proposed design, and finally section VI concludes
with the improvements of the proposed design.



II. FLIP-FLOP ARCHITECTURES
Several flip-flop architectures have been designed over the past decade to achieve higher speeds and lower power consumption.
All of these architectures can be grouped under static and dynamic design styles. Static flip-flops are those which preserve their
stored values even if the clock is stopped, and many static flip-flop topologies have been proposed in the past. The classic
transmission-gate latch based master-slave flip-flop (TGMS) and the PowerPC 603 master-slave latch are examples of the static
design style. TGMS is realized by using two transmission-gate based latches operating on complementary clocks, while the
PowerPC 603 master-slave flip-flop combines the TGMS structure with a C2MOS-style latch. The main features of the static
designs are that they dissipate low power and have a low clock-to-output (clock-Q) delay, but because of their large positive
setup time static designs have a large data-to-output delay. Static designs are more suitable whenever speed is not a concern.
Modern high-performance flip-flops fall under the dynamic design style. Dynamic designs may be purely dynamic or
pseudo-dynamic; pseudo-dynamic styles are also referred to as semi-dynamic or hybrid structures, as they consist of a static
output and a dynamic front end. The flip-flops under this category are the hybrid latch flip-flop (HLFF) and the semi-dynamic
flip-flop (SDFF). HLFF is basically a level-sensitive latch clocked with an internally generated sharp pulse, derived from the
positive edge of the clock and a delayed version of the clock. The two main building blocks of SDFF are a pulse generator and
a level-sensitive latch; the latch is clocked with an internally generated sharp pulse of short duration so that it behaves like
a flip-flop. Both HLFF and SDFF exploit clock overlap to perform the latching operation. HLFF is not the fastest, but it
has low power consumption; it is slower than SDFF because it has a longer stack of nMOS transistors at the
output node. SDFF is the fastest classic hybrid structure, but because of its large clock load and large precharge capacitance it is not
efficient as far as power consumption is concerned.
In hybrid designs, redundant data transitions and large precharge capacitance are the main sources of power
dissipation, and different architectures have been designed to eliminate these two problems. The conditional data mapping
flip-flop (CDMFF) is the most efficient architecture for reducing redundant operations, while the cross charge control flip-flop
(XCFF) is considered the best architecture for eliminating the large precharge capacitance. In CDMFF, unwanted transitions are
eliminated by using an output feedback structure to conditionally feed data to the flip-flop, so that power dissipation is reduced
whenever a redundant event is predicted; however, this flip-flop is bulky because of the additional transistors in the conditional
circuitry and consumes more power at higher data activities. In XCFF, the large precharge capacitance at the output node is
eliminated by driving the output pull-up and pull-down transistors separately, and total power consumption is reduced
considerably since only one of the two dynamic nodes switches during a clock cycle. The main drawbacks of this design are its
conditional shutoff and charge sharing.
The dual dynamic node hybrid flip-flop (DDFF) eliminates the large precharge capacitance present at the output node of
several conventional designs by following a split dynamic node structure to drive the output pull-up and pull-down
transistors separately. Figure 1 shows the architecture of DDFF.

Fig 1: DDFF


Fig 2: DDFF-ELM (MULTIPLEXER)
In the DDFF architecture, node X1 is pseudo-dynamic and X2 is dynamic. Instead of the conditional shutoff mechanism present in
XCFF, an unconditional one is present in DDFF, whose operation depends on whether the clock is high or low. The performance
improvements show that the DDFF design is well suited for modern high-performance designs where minimum delay and low power
dissipation are required.
Figure 2 shows the architecture of DDFF-ELM. Complex logic functions can be efficiently embedded into the architecture of
DDFF. The main advantages of DDFF-ELM over other flip-flops with embedded logic are lower power consumption and the
ability to embed complex logic functions.
III. 4-BIT UP-DOWN JOHNSON COUNTER
A counter is a device that stores the number of times a particular event or process has occurred, often in relation to a clock signal,
and counters are used in almost all digital circuits for counting operations. The Johnson counter, also called a twisted ring counter,
is a modified form of the ring counter in which the output of the last stage is complemented and connected to the input of the
first stage. Figure 3 shows the architecture of the 4-bit synchronous up-down Johnson counter.

Fig 3: 4-bit up-down Johnson counter

The 4-bit up-down Johnson counter is built from the dual dynamic node pulsed hybrid flip-flop. By embedding a multiplexer into
the flip-flop architecture as shown in figure 2, the counter can operate in either up-counting or down-counting mode: for an up
counter the last stage output is complemented and connected to the input of the first stage, while for a down counter the first stage
output is complemented and connected to the last stage input, as shown in figure 3. The counting sequence repeats every eight
clock pulses. The counting sequences of the Johnson up and down counters are shown in tables 1 and 2, reproduced by the
behavioural sketch after table 1.

clock  Q0 Q1 Q2 Q3
0      0  0  0  0
1      0  0  0  1
2      0  0  1  1
3      0  1  1  1
4      1  1  1  1
5      1  1  1  0
6      1  1  0  0
7      1  0  0  0
8      0  0  0  0

Table 1: Johnson up counter
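The following behavioural C++ sketch (ours, not the paper's circuit) reproduces the two tabulated sequences; the shift recurrences are chosen to match Tables 1 and 2 exactly, since the stage labelling in the prose can be read either way.

    // Behavioural model of the 4-bit up/down Johnson counter; the DDFF-ELM
    // circuit realises the same recurrence with an embedded multiplexer.
    #include <array>
    #include <cstdio>

    int main() {
        std::array<int, 4> q{0, 0, 0, 0};   // Q0..Q3, all cleared at reset
        bool up = true;                     // mode select fed to the embedded MUX
        for (int clk = 0; clk <= 8; ++clk) {
            std::printf("%d  %d %d %d %d\n", clk, q[0], q[1], q[2], q[3]);
            if (up)
                q = {q[1], q[2], q[3], 1 - q[0]};  // up: ~Q0 shifted into Q3
            else
                q = {1 - q[3], q[0], q[1], q[2]};  // down: ~Q3 shifted into Q0
        }
        return 0;
    }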
IV. JOHNSON COUNTER USING CLOCK GATING
The clock gating technique reduces dynamic power dissipation by stopping the clock signal to segments of the circuit that
are inactive at a given instant. Clock transitions contribute the major part of the power consumption in a digital circuit, so
eliminating the unwanted distribution of the clock signal to inactive segments reduces power dissipation in digital circuits.
In this design, XOR and NAND gates are used to gate the clock to the Johnson counter.

Fig 4: Clock gating to Johnson counter

Each of the clock gating blocks in figure 4 consists of a combination of an XOR and a NAND gate; the clock gating module is
shown in figure 5, and a behavioural sketch of it follows below.

Fig 5: Clock gating module for generation of clock for Q0
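The exact wiring of Fig. 5 is not given beyond the XOR/NAND pairing, so the following is a speculative behavioural model: the XOR detects whether a stage's next input differs from its present output, and the NAND suppresses the clock edge when it does not, so idle stages see no clock activity.

    // Speculative model of one XOR/NAND clock-gating cell (our interpretation).
    inline bool gated_clock(bool clk, bool q, bool d) {
        bool enable = q ^ d;        // XOR: will this stage change state?
        return !(clk && enable);    // NAND of clock and enable (active low)
    }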

clock  Q0 Q1 Q2 Q3
0      0  0  0  0
1      1  0  0  0
2      1  1  0  0
3      1  1  1  0
4      1  1  1  1
5      0  1  1  1
6      0  0  1  1
7      0  0  0  1
8      0  0  0  0

Table 2: Johnson down counter

V. SIMULATION RESULTS
The existing and the proposed designs are implemented in Tanner EDA tools in 250 nm technology. Schematics are designed in
S-Edit. Figure 1 shows the schematic design of the DDFF and figure 6 shows the timing diagram of the DDFF, simulated using
T-Spice with the resulting waveforms obtained in W-Edit.

Fig 6: Timing diagram of DDFF

The simulated waveform of the Johnson counter is shown in figure 7.

Fig 7: Timing diagram of Johnson counter

The layout designs of the DDFF, DDFF-ELM (MULTIPLEXER) and the Johnson counter are shown in figures 8, 9 and 10
respectively. The layouts are designed and the areas of the designs are calculated using L-Edit.

Fig 8: Layout design of the DDFF
[Waveform plots for Figs. 6-7: voltage (V) versus time (ns), 0-80 ns, for the traces v(clk), v(Q0), v(Q1), v(Q2) and v(Q3).]


Fig 9: Layout design of the DDFF-ELM (MUX)

Fig 10: Layout design of the Johnson counter
Simulation results for the various dynamic flip-flops are shown in table 3.

Flip-flop   Total layout area (µm²)   Total power (µW)   Minimum D-Q (ns)   PDP (fJ)
CDMFF       540.67                    62.34              1.37               85.40
XCFF        470.43                    68.53              1.31               89.77
DDFF        430.01                    59.62              1.34               79.89
DDFF-ELM    528.74                    85.90              1.29               110.81

Table 3: Performance comparison of various dynamic flip-flops (250 nm technology)
Area and power consumption of the Johnson counter with and without clock gating are compared in table 4.

Johnson counter        Power consumption (µW)   Total layout area (µm²)
Without clock gating   344.24                   2820.32
With clock gating      160.56                   2932.62

Table 4: Johnson counter with and without clock gating

The power consumption of the 4-bit synchronous up-down Johnson counter is 344.24 µW without clock gating and
160.56 µW with clock gating, so clock gating reduces the power dissipation in the Johnson counter by up to 50%.
VI. CONCLUSION
In this paper, a dual dynamic node pulsed hybrid flip-flop (DDFF), an embedded logic module, a 4-bit synchronous up-down
Johnson counter and a clock-gated Johnson counter are presented. Simulation results show improvements in power and speed. The
DDFF improves on the cross charge control flip-flop (XCFF) by eliminating redundant power dissipation, and complex logic
functions can be incorporated efficiently into the flip-flop. The 4-bit up-down Johnson counter with clock gating shows an
improvement in power consumption of up to 50% compared to the counter without clock gating, although with some increase in
area. The presented architecture is therefore well suited to modern high-performance designs where power consumption is a
major concern.


Real-Time Finger-Vein Recognition System
S. Nivas 1, P. Prakash 2
1 Head of Department, Maharaja Co-Educational Arts and Science College, Erode, India
2 Research Scholar, Maharaja Co-Educational Arts and Science College, Erode, India
E-mail- prkshmca@gmail.com
ABSTRACT
As the need for personal authentication increases, many people are turning to biometric authentication as an alternative to
traditional security devices. Concurrently, users and vendors of biometric authentication systems are searching for methods to
establish system performance. However, most existing biometric systems have high complexity in time, space or both. In this paper,
we propose a novel finger-vein recognition algorithm for authentication consisting of two phases: an enrolment phase and a
verification phase. Both phases start with finger-vein image pre-processing, which includes detection of the region of interest
(ROI), image segmentation, alignment, and enhancement. In the enrolment phase, the finger-vein template database is built after
pre-processing and feature extraction. In the verification phase, the input finger-vein image is matched with the corresponding
template after its features are extracted. The feature selection process is based on the SURF algorithm.
Keywords
Finger-vein recognition; enrolment; verification; image pre-processing; feature extraction; template matching; SURF
INTRODUCTION:
What is a Biometric?
Biometrics (or biometric authentication) refers to the identification of humans by their characteristics or traits.
Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals
in groups that are under surveillance.
Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric
identifiers are often categorized as physiological versus behavioral characteristics. Physiological characteristics are
related to the shape of the body; examples include, but are not limited to, fingerprint, face recognition, DNA, palm
print, hand geometry, iris recognition, retina and odour/scent. Behavioral characteristics are related to the pattern of
behavior of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the
term behaviometrics to describe this latter class of biometrics.
More traditional means of access control include token-based identification systems, such as a driver's license or
passport, and knowledge-based identification systems, such as a password or personal identification number. Since

biometric identifiers are unique to individuals, they are more reliable in verifying identity than token- and knowledge-based
methods; however, the collection of biometric identifiers raises privacy concerns about the ultimate use of this information.
Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection
of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven
such factors to be used when assessing the suitability of any trait for use in biometric authentication. Universality means that every
person using a system should possess the trait. Uniqueness means the trait should be sufficiently different for individuals in the
relevant population such that they can be distinguished from one another. Permanence relates to the manner in which a trait varies
over time; more specifically, a trait with 'good' permanence will be reasonably invariant over time with respect to the specific
matching algorithm. Measurability (collectability) relates to the ease of acquisition or measurement of the trait; in addition, acquired
data should be in a form that permits subsequent processing and extraction of the relevant feature sets. Performance relates to the
accuracy, speed, and robustness of the technology used (see the performance section for more details). Acceptability relates to how
well individuals in the relevant population accept the technology, such that they are willing to have their biometric trait captured and
assessed. Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute.
No single biometric will meet all the requirements of every possible application.

The basic block diagram of a biometric system is described below; such a system can operate in the following two modes [3].
In verification mode the system performs a one-to-one comparison of a captured biometric with a specific template stored in a
biometric database in order to verify that the individual is the person they claim to be. Three steps are involved in person
verification. In the first step, reference models for all the users are generated and stored in the model database. In the second step,
some samples are matched with the reference models to generate the genuine and impostor scores and calculate the threshold.
The third step is the testing step. This process may use a smart card, username or ID number (e.g. PIN) to indicate which
template should be used for comparison. 'Positive recognition' is a common use of verification mode, "where the aim is to prevent
multiple people from using the same identity".
In Identification mode the system performs a one-to-many comparison against a biometric database in attempt to establish the
identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to
a template in the database falls within a previously set threshold. Identification mode can be used either for 'positive recognition' (so
that the user does not have to provide any information about the template to be used) or for 'negative recognition' of the person "where
the system establishes whether the person is who she (implicitly or explicitly) denies to be". The latter function can only be achieved
through biometrics since other methods of personal recognition such as passwords, PINs or keys are ineffective.
The first time an individual uses a biometric system is called enrollment. During the enrollment, biometric information from
an individual is captured and stored. In subsequent uses, biometric information is detected and compared with the information stored
at the time of enrollment. Note that it is crucial that storage and retrieval of such systems themselves be secure if the biometric system
is to be robust. The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data.
Most of the time it is an image acquisition system, but it can change according to the characteristics desired. The second block
performs all the necessary pre-processing: it has to remove artifacts from the sensor, to enhance the input (e.g. removing background
noise), to use some kind of normalization, etc. In the third block the necessary features are extracted. This step is important, as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create
a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement
that are not used in the comparison algorithm are discarded in the template to reduce the filesize and to protect the identity of the
enrollee.
If enrollment is being performed, the template is simply stored somewhere (on a card or within a database or both). If a
matching phase is being performed, the obtained template is passed to a matcher that compares it with other existing templates,
estimating the distance between them using any algorithm (e.g. Hamming distance). The matching program will analyze the template
with the input. This will then be output for any specified use or purpose (e.g. entrance to a restricted area). Selection of a biometric for any practical application depends upon the characteristic measurements and user requirements. Performance, acceptability, circumvention, robustness, population coverage, size, and identity-theft deterrence should all be considered when selecting a particular biometric. Selection based on user requirements considers sensor availability, device availability, computational time and reliability, cost, sensor area, and power consumption.
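As an illustration of the matcher stage described above, the following is a minimal sketch, not the paper's implementation, of one-to-one verification of binary templates using the Hamming distance mentioned earlier; the 256-bit template length and the 0.25 acceptance threshold are illustrative assumptions.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of differing bits between two equal-length binary templates."""
    return np.count_nonzero(a != b) / a.size

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.25) -> bool:
    """One-to-one verification: accept when the distance falls below the threshold."""
    return hamming_distance(probe, enrolled) < threshold

# Two 256-bit templates standing in for real extracted feature vectors
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 256)
probe = enrolled.copy()
probe[:10] ^= 1                      # flip a few bits to simulate sensor noise
print(verify(probe, enrolled))       # True: distance 10/256 is below 0.25
```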
3.1 SYSTEM IMPLEMENTATION:
3.1.1 Finger Vein Recognition:
Finger vein recognition is a method of biometric authentication that uses pattern-recognition techniques based on
images of human finger vein patterns beneath the skin's surface. Finger vein recognition is one of many forms of biometrics used to
identify individuals and verify their identity.
To obtain the pattern for the database record, an individual inserts a finger into an attester terminal containing a near-infrared
LED (light- emitting diode) light and a monochrome CCD (charge-coupled device) camera. The hemoglobin in the blood absorbs
near-infrared LED light, which makes the vein system appear as a dark pattern of lines. The camera records the image and the raw
data is digitized, certified and sent to a database of registered images. For authentication purposes, the finger is scanned as before and
the data is sent to the database of registered images for comparison. The authentication process takes less than two seconds.
Blood vessel patterns are unique to each individual, as are other biometric data such as fingerprints or the patterns of the iris.
Unlike some biometric systems, blood vessel patterns are almost impossible to counterfeit because they are located beneath the skin's
surface. Biometric systems based on fingerprints can be fooled with a dummy finger fitted with a copied fingerprint; voice and facial
characteristic-based systems can be fooled by recordings and high-resolution images. The finger vein ID system is much harder to fool
because it can only authenticate the finger of a living person.

3.2 FEATURES USED:
3.2.1 SURF Features:
SURF (Speeded Up Robust Features) is a robust local feature detector, first presented by Herbert Bay et al. in 2006, that can
be used in computer vision tasks like object recognition or 3D reconstruction. It is partly inspired by the SIFT descriptor. The standard
version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations
than SIFT. SURF is based on sums of 2D Haar wavelet responses and makes efficient use of integral images.
It uses an integer approximation to the determinant-of-Hessian blob detector, which can be computed extremely quickly with
an integral image (3 integer operations). For features, it uses the sum of the Haar wavelet response around the point of interest. Again,
these can be computed with the aid of the integral image.
An application of the algorithm is patented in the US. The task of finding correspondences between two images of the same
scene or object is part of many computer vision applications. Camera calibration, 3D reconstruction, image registration, and object
recognition are just a few. The search for discrete image correspondences, the goal of this work, can be divided into three main
steps. First, interest points are selected at distinctive locations in the image, such as corners, blobs, and T-junctions. The most
valuable property of an interest point detector is its repeatability, i.e. whether it reliably finds the same interest points under different
viewing conditions. Next, the neighbourhood of every interest point is represented by a feature vector. This descriptor has to be
distinctive and, at the same time, robust to noise, detection errors, and geometric and photometric deformations. Finally, the descriptor
vectors are matched between different images. The matching is often based on a distance between the vectors, e.g. the Mahalanobis or
Euclidean distance. The dimension of the descriptor has a direct impact on the time this takes, and a lower number of dimensions is
therefore desirable.
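For readers who want to experiment with this pipeline, the sketch below uses OpenCV's contrib build (the xfeatures2d module; as noted above, SURF is patented and is disabled in some distributions) with hypothetical image file names and Lowe's ratio test. It is an independent illustration, not the authors' code.

```python
import cv2

# Hypothetical grayscale finger-vein images
img1 = cv2.imread("vein_enrolled.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("vein_probe.png", cv2.IMREAD_GRAYSCALE)

# Requires opencv-contrib-python built with the nonfree modules enabled
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Descriptor matching by Euclidean distance, filtered with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches")
```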
A wide variety of detectors and descriptors have already been proposed in the literature. Also, detailed comparisons and
evaluations on benchmarking datasets have been performed. While constructing our fast detector and descriptor, we built on the
insights gained from this previous work in order to get a feel for the aspects contributing to performance. In our experiments
on benchmark image sets as well as on a real object recognition application, the resulting detector and descriptor are not only faster,
but also more distinctive and equally repeatable.
When working with local features, a first issue that needs to be settled is the required level of invariance. Clearly, this
depends on the expected geometric and photometric deformations, which in turn are determined by the possible changes in viewing
conditions. Here, we focus on scale and image rotation invariant detectors and descriptors. These seem to offer a good compromise
between feature complexity and robustness to commonly occurring deformations. Skew, anisotropic scaling, and perspective effects
are assumed to be second-order effects that are covered to some degree by the overall robustness of the descriptor.
As also claimed by Lowe, the additional complexity of full affine-invariant features often has a negative impact on their
robustness and does not pay off, unless really large viewpoint changes are to be expected. In some cases, even rotation invariance can
be left out, resulting in a scale-invariant only version of our descriptor, which we refer to as upright SURF (U-SURF). Indeed, in
quite a few applications, like mobile robot navigation or visual tourist guiding, the camera often only rotates about the vertical axis.
The benefit of avoiding the overkill of rotation invariance in such cases is not only increased speed, but also increased discriminative
power. Concerning the photometric deformations, we assume a simple linear model with a scale factor and offset. Notice that our
detector and descriptor don't use colour.
3.2.2 Lacunarity:
Lacunarity, from the Latin lacuna meaning "gap" or "lake", is a specialized term in geometry referring to a measure of how
patterns, especially fractals, fill space, where patterns having more or larger gaps generally have higher lacunarity. Beyond being an
intuitive measure of gappiness, lacunarity can quantify additional features of patterns such as "rotational invariance" and more
generally, heterogeneity. This is illustrated in Figure 1 showing three fractal patterns. When rotated 90°, the first two fairly homogeneous patterns do not appear to change, but the third, more heterogeneous figure does change and has correspondingly higher lacunarity.
3.2.3 Measuring Lacunarity:
In many patterns or data sets, lacunarity is not readily perceivable or quantifiable, so computer-aided
methods have been developed to calculate it. As a measurable quantity, lacunarity is often denoted in scientific literature
by the Greek letters Λ or λ, but it is important to note that there is no single standard, and several different methods exist to assess and interpret lacunarity.
3.2.4 Box Counting Lacunarity
One well-known method of determining lacunarity for patterns extracted from digital images uses box counting, the same essential algorithm typically used for some types of fractal analysis [1][4]. Similar to looking at a slide through a microscope with changing levels of magnification, box counting algorithms look at a digital image from many levels of resolution to examine how certain features change with the size of the element used to inspect the image. Basically, the arrangement of pixels is measured using traditionally square (i.e., box-shaped) elements from an arbitrary set of sizes, conventionally denoted ε. For each ε, the box is placed successively over the entire image, and each time it is laid down, the number of pixels that fall within the box is recorded. In standard box counting, the box for each ε is placed as though it were part of a grid overlaid on the image so that the box does not overlap itself, but in sliding box algorithms the box is slid over the image so that it overlaps itself, and the "Sliding Box Lacunarity" or SLac is calculated [3][6]. Figure 2 illustrates both types of box counting.
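A minimal sketch of the gliding (sliding) box computation in Python follows, using the common definition Λ(r) = var(mass)/mean(mass)² + 1 over all box positions; it illustrates the idea rather than any one published implementation.

```python
import numpy as np

def sliding_box_lacunarity(image: np.ndarray, box: int) -> float:
    """Gliding-box lacunarity of a binary image for one box size:
    mass = pixel count per box; lacunarity = var(mass)/mean(mass)**2 + 1."""
    h, w = image.shape
    masses = np.array([
        image[y:y + box, x:x + box].sum()
        for y in range(h - box + 1)
        for x in range(w - box + 1)
    ], dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0

# A sparse (gappier) pattern scores higher than a dense, homogeneous one
rng = np.random.default_rng(1)
sparse = (rng.random((64, 64)) < 0.05).astype(int)
dense = (rng.random((64, 64)) < 0.60).astype(int)
print(sliding_box_lacunarity(sparse, 8), sliding_box_lacunarity(dense, 8))
```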

3.2.5 Relationship to the Fractal Dimension
Lacunarity analyses using the types of values discussed above have shown that data sets extracted from
dense fractals, from patterns that change little when rotated, or from patterns that are homogeneous, have low lacunarity,
but as these features increase, so generally does lacunarity. In some instances, it has been
demonstrated that fractal dimensions and values of lacunarity were correlated,[1] but more recent research has shown
that this relationship does not hold for all types of patterns and measures of lacunarity. Indeed, as Mandelbrot originally
proposed, lacunarity has been shown to be useful in discerning amongst patterns (e.g., fractals, textures, etc.) that share
or have similar fractal dimensions in a variety of scientific fields including neuroscience.

Fig 3.1 SURF feature points detected on the finger vein pattern

Fig 3.2 Matched feature points of the SURF algorithm

3.3 HARRIS CORNER
Corners are image locations that have large intensity changes in more than one direction: shifting a window in any direction should give a large change in intensity. The detector is used to find feature points, also called keypoints, and to match feature points across different images. The nearest neighbour is defined as the keypoint with the minimum Euclidean distance for the invariant descriptor.
Our affine invariant interest point detector is an affine-adapted version of the Harris detector. The affine adaptation is based on the second moment matrix and local extrema over scale of normalized derivatives. Locations of interest points are detected by the affine-adapted Harris detector. For initialization, approximate localizations and scales of interest points are extracted by the multi-scale Harris detector. For each point we apply an iterative procedure which modifies position as well as scale and shape of the point neighbourhood. This allows the procedure to converge toward a stable point that is invariant to affine transformations. This detector is the main contribution of the paper. Furthermore, we have developed a repeatability criterion which takes into account the point position as well as the shape of the neighbourhood. A quantitative comparison with existing detectors shows a significant improvement of our method in the presence of large affine transformations. Results for wide baseline matching and recognition based on our affine invariant points are excellent in the presence of significant changes in viewing angle and scale and clearly demonstrate their invariance.
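As a hands-on counterpart to the basic (non-affine-adapted) detector, the sketch below applies OpenCV's cornerHarris to a hypothetical image; the blockSize, ksize, k, and response-threshold values are conventional choices, not parameters from this paper.

```python
import cv2
import numpy as np

img = cv2.imread("vein_probe.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Harris response: large in windows whose intensity changes in every direction
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) keypoints
print(f"{len(corners)} corner candidates")
```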
3.3.1 Filtration Process
The presence of the fragments in an image is determined by the combined use of a similarity measure and a detection threshold. Using a sliding window over the image, we measure the presence of the fragment in the window with normalized cross-correlation, a common method used in computer vision to measure visual similarity, and compare the score to a threshold.
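This filtration step can be sketched with OpenCV's matchTemplate, which implements normalized cross-correlation over a sliding window; the file names and the 0.8 detection threshold are assumptions for illustration.

```python
import cv2

image = cv2.imread("vein_probe.png", cv2.IMREAD_GRAYSCALE)       # hypothetical
fragment = cv2.imread("vein_fragment.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation score at every window position
scores = cv2.matchTemplate(image, fragment, cv2.TM_CCOEFF_NORMED)
_, best, _, loc = cv2.minMaxLoc(scores)
print(f"best score {best:.2f} at {loc}; detected: {best > 0.8}")
```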
3.4 MSER (Maximally Stable Extremal Regions) Detection:
For region detection, the invariance transformations that should be considered are illumination changes, translation, rotation, scale, and the full affine transform (i.e. a region should correspond to the same pre-image for different viewpoints; viewpoint changes can be locally approximated by an affine transform if locally planar objects and an orthographic camera are assumed, that is, perspective effects are ignored).
3.4.1 Analyzing Minimal Regions:
Here we detect anchor points (e.g. using the Harris detector for corners). Anchor points detected at multiple scales are local extrema of intensity. We then explore the image around rays from each anchor point, going along every ray starting from this point until an extremum of a function f is reached.
MSER is a method for blob detection in images. The MSER algorithm extracts from an image a number of co-variant regions,
called MSERs: an MSER is a stable connected component of some gray-level sets of the image.
MSER is based on the idea of taking regions which stay nearly the same through a wide range of thresholds. All the pixels below a given threshold are white and all those above or equal are black. If we are shown a sequence of thresholded images It with frame t corresponding to threshold t, we would see first a black image; then white spots corresponding to local intensity minima will appear and grow larger. These white spots will eventually merge, until the whole image is white. The set of all connected components in the sequence is the set of all extremal regions. Optionally, elliptical frames are attached to the MSERs by fitting ellipses to the regions, and those region descriptors are kept as features. The procedure is as follows: sweep an intensity threshold from black to white, performing a simple luminance thresholding of the image; extract the connected components (extremal regions); find the threshold at which an extremal region is maximally stable, i.e. at a local minimum of the relative growth of its area. Due to the discrete nature of the image, the region below may be coincident with the region above, in which case the region is still deemed maximal. Optionally, approximate each region with an ellipse, and keep those region descriptors as features.
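A compact illustration of MSER extraction with OpenCV follows; the file name and the delta value are assumptions, and the ellipse fitting mirrors the optional last step described above.

```python
import cv2

img = cv2.imread("vein_probe.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

mser = cv2.MSER_create()
mser.setDelta(5)                 # stability margin over the threshold sweep
regions, bboxes = mser.detectRegions(img)   # each region: array of pixel coords
print(f"{len(regions)} maximally stable extremal regions")

# Optional: approximate each region with an ellipse and keep it as a descriptor
ellipses = [cv2.fitEllipse(r) for r in regions if len(r) >= 5]
```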
4. ALGORITHM:
4.2.1 Feature extraction:
The method of feature extraction is described in this section. F(x, y) is the intensity of the pixel (x, y), Rf is the set of pixels within the finger's outline, and Tr is the locus space. Suppose the pixel at the lower left of the image to be (0,0), the positive direction of the x-axis to be rightward in the image, the positive direction of the y-axis to be upward within the image, and Tr(x, y) to be initialized to 0.
Step 1: Determination of the start point for line tracking and the moving-direction attribute
Step 2: Detection of the direction of the dark line and movement of the tracking point
Step 3: Updating the number of times points in the locus space have been tracked.
Step 4: Repeated execution of step 1 to step 3 (N times).
Step 5: Acquisition of the finger-vein pattern from the locus space.

The details of each step are described below.
Step 1: Determination of the start point for line tracking and the moving-direction attribute.
The start point for line tracking is (xs, ys), a pair of uniform random numbers selected from Rf. That is, the initial value of the current tracking point (xc, yc) is (xs, ys). After that, the moving-direction attributes Dlr and Dud are determined; they are the parameters that prevent the tracking point from following a path with excessive curvature. Dlr and Dud are independently determined as follows: Dlr is set to rightward (1, 0) or leftward (−1, 0), and Dud to upward (0, 1) or downward (0, −1), each choice made with equal probability using Rnd(2), where Rnd(n) is a uniform random number between 0 and n.
Step 2-1: Initialization of the locus-position table Tc. The positions that the tracking point moves to are stored in the locus-position table, Tc. The table is initialized in this step.
Step 2-2: Determination of the set of pixels Nc to which the current tracking point can move.
A pixel to which the current tracking point (xc, yc) moves must be within the finger region, must not have been a previous (xc, yc) within the current round of tracking, and must be one of the neighboring pixels of (xc, yc). Therefore, Nc is the set of pixels of Nr(xc, yc) that lie within Rf and are not already registered in Tc,

where Nr(xc, yc) is the set of neighboring pixels of (xc, yc), selected as follows:

where N8(x, y) is the set of eight neighboring pixels of a pixel (x, y) and N3(D)(x, y) is the set of three neighboring pixels of (xc, yc) whose direction is determined by the moving-direction attribute D (defined as (Dx, Dy)). N3(D)(x, y) can be described as follows:

Parameters plr and pud in Eq. 4 are the probabilities of selecting the three neighboring pixels in the horizontal or vertical direction, respectively, as Nr(xc, yc). The veins in a finger tend to run in the direction of the finger's length. Therefore, if we increase the
probability that N3(Dlr)(xc, yc) is selected as Nr(xc, yc), we obtain a faithful representation of the pattern of finger veins. In preliminary experiments, excellent results were produced when plr = 50 and pud = 25.
Step 2-3: Detection of the dark-line direction near the current tracking point.
To determine the pixel to which the current tracking point (xc, yc) should move, the following equation, referred to as the line-evaluation function Vl, is calculated. This reflects the depth of the valleys in the cross-sectional profiles around the current tracking point:

where W is the width of the profiles, r is the distance between (xc, yc) and the cross section, and θi is the angle between the line segments (xc, yc)-(xc + 1, yc) and (xc, yc)-(xi, yi).
In this paper, in consideration of the thickness of the veins that are visible in the captured images, these parameters are set at W = 11 and r = 1.
Step 2-4: Registration of the locus in the locus-position table Tc and moving of the tracking point.
The current tracking point (xc, yc) is added to the locus-position table Tc. After that, if Vl is positive, (xc, yc) is then updated to the (xi, yi) where Vl is maximum.
Step 2-5: Repeated execution of steps 2-2 to 2-4.
If Vl is positive, go to step 2-2; if Vl is negative or zero, leave step 2 and go to step 3, since (xc, yc) is not on the dark line.
Step 3: Updating the number of times points in the locus space have been tracked. The values of the elements of the locus space Tr(x, y) are incremented for all (x, y) in Tc.
Step 4: Repeated execution of steps 1 to 3 (N times). Steps 1 to 3 are thus executed N times. If the number of repetitions N is too small, insufficient feature extraction is performed. If, on the other hand, N is too big, computational costs are needlessly increased. Through an experiment, we determined that N = 3000 is the lower limit for sufficient feature extraction.
Step 5: Acquisition of the pattern of veins from the locus space. The total number of times the pixel (x, y) has been the current tracking point in the repetitive line-tracking operation is stored in the locus space Tr(x, y). Therefore, the finger-vein pattern is obtained as chains of high values of Tr(x, y).
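To make the overall structure concrete, the following is a heavily simplified sketch of repeated line tracking: it replaces the line-evaluation function Vl and the moving-direction attributes with a greedy darkest-neighbour rule and a fixed step cap, so it illustrates only the locus-space accumulation idea, not the full method above.

```python
import numpy as np

def repeated_line_tracking(img, finger_mask, n_iterations=3000, max_steps=200, seed=0):
    """Simplified sketch: random walkers start inside the finger region Rf and
    greedily follow the darkest unvisited neighbour; the locus space Tr counts
    how often each pixel was the current tracking point."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    Tr = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(finger_mask)
    for _ in range(n_iterations):
        k = rng.integers(len(xs))
        x, y = int(xs[k]), int(ys[k])          # random start point (xs, ys)
        visited = set()
        for _ in range(max_steps):
            visited.add((x, y))
            cands = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx or dy) and 0 <= x + dx < w and 0 <= y + dy < h
                     and finger_mask[y + dy, x + dx]
                     and (x + dx, y + dy) not in visited]
            if not cands:
                break
            # proxy for the line-evaluation step: move to the darkest neighbour
            x, y = min(cands, key=lambda p: img[p[1], p[0]])
        for vx, vy in visited:                 # step 3: update the locus space
            Tr[vy, vx] += 1
    return Tr   # the vein pattern appears as chains of high values of Tr
```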






5. FLOW CHART
[Flow chart. Enrolment: get the input image -> extract the ROI -> enhance the cropped region -> extract features -> store in database. Testing: get the input image -> extract the ROI -> enhance the cropped region -> match with database -> result.]

6. SCREEN SHOT:
6.1 Enrolment Phase:

Fig (1) Enrolment phase

6.2 Testing Phase:


Fig (2.1) Authorized vein, (2.2) Unauthorized person
7. CONCLUSION
The present study proposed an end-to-end finger-vein recognition system based on the blanket dimension. The proposed system includes a device for capturing finger-vein images, a method for ROI segmentation, and a novel recognition method combining blanket dimension features, lacunarity features, and SURF features. An approach to correct the non-uniform brightness and to improve the contrast is proposed. During recognition, the corresponding features are matched using the nearest-neighbour-ratio method, and the SURF matching scores are fused using a weighted sum rule to obtain a fused matching score. It is observed that the system performs with a CRR of at least 98.62%.
8. FUTURE WORK:
The proposed work can be enhanced by predicting the feature points with more accuracy, which can be implemented using the SIFT algorithm in the future, and by classifying each stage using neural networks, which would raise the system's rate of correctly predicting outcomes.
REFERENCES:

[1] A. K. Jain, S. Pankanti, S. Prabhakar, H. Lin, and A. Ross, "Biometrics: a grand challenge," Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 2, pp. 935-942, 2004.
[2] P. Corcoran and A. Cucos, "Techniques for securing multimedia content in consumer electronic appliances using biometric signatures," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 545-551, May 2005.
[3] Y. Kim, J. Yoo, and K. Choi, "A motion and similarity-based fake detection method for biometric face recognition systems," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 756-762, May 2011.
[4] D. Wang, J. Li, and G. Memik, "User identification based on finger-vein patterns for consumer electronics devices," IEEE Transactions on Consumer Electronics, vol. 56, no. 2, pp. 799-804, 2010.
[5] H. Lee, S. Lee, T. Kim, and Hyokyung Bahn, "Secure user identification for consumer electronics devices," IEEE Transactions on Consumer Electronics, vol. 54, no. 4, pp. 1798-1802, Nov. 2008.
[6] D. Mulyono and S. J. Horng, "A study of finger vein biometric for personal identification," Proceedings of the International Symposium on Biometrics and Security Technologies, pp. 134-141, 2008.
[7] Anil K. Jain, Patrick Flynn, and Arun A. Ross, Handbook of Biometrics. Springer-Verlag, USA, 2007.













A Novel Biometric System Based on Lips
K. Decika¹, M. Mohanraj²
¹Assistant Professor, Maharaja Co-Educational Arts and Science College, Erode, India
²Research Scholar, Maharaja Co-Educational Arts and Science College, Erode, India
E-mail: mohanraj.manoleela@gmail.com

Abstract:
In this paper, a novel biometric system based on lips for identity recognition is investigated. In fact, identity recognition solely by the lips is a challenging issue. In the first stage of the proposed system, a fast box filtering is proposed to generate a noise-free source with high processing efficiency. Afterward, five various mouth corners are detected through the proposed system, which is also able to resist shadow, beard, and rotation problems. For the feature extraction, two geometric ratios and ten parabolic-related parameters are adopted for further recognition through the support vector machine.
Index Terms: Lip tracking, localized colour active contour, localized energy, deformable model.
1. INTRODUCTION
Detecting the lip contour with high accuracy is an important requirement for a lip identification system, and it has been widely discussed in former works. One of the research directions considers the colour information. Hosseini and Ghofrani's work and the method presented in [2] converted an RGB colour image into two CIE colour spaces; two of the resulting components were combined to generate a new image that emphasizes the lip features, and the results were analyzed through the 2-D fast wavelet transform. Finally, morphology was employed to smooth and binarize the image and then remove the noise to obtain the lip region. The images were converted into Caetano and Barone's chromatic colour space; afterward, the mean and the threshold were computed from the red channel of each pixel, and these were employed to separate the lip and non-lip regions. A new colour mapping method, which integrated colour and intensity information, was developed for lip contour extraction, and Otsu's thresholding was adopted to extract the binary result.
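As a small illustration of the thresholding and morphology steps mentioned above, the sketch below applies Otsu's method followed by a morphological opening in OpenCV; the file name and the kernel size are assumptions.

```python
import cv2

roi = cv2.imread("lip_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical ROI

# Otsu's method picks the binarization threshold from the histogram
_, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening smooths the mask and removes small noise blobs
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
lip_mask = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```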





Mouth Corner Detection:

2. Pre-processing:
In this paper, the face region is directly detected from an image by a powerful method, namely the Viola and Jones face detection algorithm; an example is shown in the figure with the red box. To further refine the possible region of the lips, a subregion is roughly extracted by the following estimation:

where (x0, y0) and (x1, y1) denote the origin and the top-right position of the face's bounding box, and positions (x2, y2) and (x3, y3) denote the origin and the top-right position of the estimated lip region.
For easing the influence of camera noise and various lighting changes, while simultaneously achieving lower computational complexity, the proposed fast box filtering (FBF) and the well-known histogram stretching method are used to obtain a contrast-enhanced and smoothed result. The first step of the proposed FBF method is to obtain an integral image, as derived by

$$I_{x,y} = \sum_{x' \le x,\; y' \le y} g_{x',y'}$$

where $I_{x,y}$ and $g_{x,y}$ denote the integral value and the original gray-scale value, respectively. Each obtained $I_{x,y}$ is the summation of the whole set of gray-scale values within its bottom-left region. Afterward, the response of the box filtering (BF) can be obtained by the following calculation:

$$s_{x,y} = \frac{1}{n^2}\left( I_{x+\lfloor n/2 \rfloor,\, y+\lfloor n/2 \rfloor} - I_{x-\lceil n/2 \rceil,\, y+\lfloor n/2 \rfloor} - I_{x+\lfloor n/2 \rfloor,\, y-\lceil n/2 \rceil} + I_{x-\lceil n/2 \rceil,\, y-\lceil n/2 \rceil} \right)$$

where $s_{x,y}$ denotes the smoothed result, the odd parameter $n$ denotes the size of the employed filter, and $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ denote the round-down and round-up operators, respectively.
The rough lip region (bounded by the obtained (x2, y2) and (x3, y3) positions) is processed with the proposed FBF and the histogram stretching method. The corresponding region shown in the figure is adopted as an example to exhibit the result, and the enhanced, smoothed image with n = 3 is shown in the figure. This smoothed image is used widely throughout the rest of this paper.
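The constant-cost-per-pixel behaviour of box filtering via an integral image can be sketched as follows; this is a generic implementation of the standard technique, not the authors' FBF code, and edge replication at the border is one possible padding choice.

```python
import numpy as np

def box_filter(gray: np.ndarray, n: int = 3) -> np.ndarray:
    """Box (mean) filter via an integral image: every n-by-n window mean costs
    four integral-image samples, independent of the filter size n."""
    pad = n // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image with a leading zero row/column so each window sum is a
    # four-corner difference: S = ii[y2,x2] - ii[y1,x2] - ii[y2,x1] + ii[y1,x1]
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    return (ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n]) / (n * n)

smoothed = box_filter(np.arange(25.0).reshape(5, 5), n=3)
```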




3. Understanding Support Vector Machines
- Separable Data
- Non-separable Data
- Nonlinear Transformation with Kernels
Separable Data
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

595 www.ijergs.org

You can use a support vector machine (SVM) when your data has exactly two classes. An SVM classifies data by finding
the best hyperplane that separates all data points of one class from those of the other class. The best hyperplane for an
SVM means the one with the largest margin between the two classes. Margin means the maximal width of the slab
parallel to the hyperplane that has no interior data points.
The support vectors are the data points that are closest to the separating hyperplane; these points are on the boundary of
the slab. The following figure illustrates these definitions, with + indicating data points of type +1, and − indicating data points of type −1.

Fig (1)
Mathematical Formulation: Primal. This discussion follows Hastie, Tibshirani, and Friedman [12] and Christianini and Shawe-Taylor.
The data for training is a set of points (vectors) xi along with their categories yi. For some dimension d, the xi ∈ Rd, and the yi = ±1. The equation of a hyperplane is
<w,x> + b = 0,
where w ∈ Rd, <w,x> is the inner (dot) product of w and x, and b is real.
The following problem defines the best separating hyperplane. Find w and b that minimize ||w|| such that for all data points (xi, yi),
yi(<w,xi> + b) ≥ 1.
The support vectors are the xi on the boundary, those for which yi(<w,xi> + b) = 1.
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

596 www.ijergs.org

For mathematical convenience, the problem is usually given as the equivalent problem of minimizing <w,w>/2. This is a quadratic programming problem. The optimal solution w, b enables classification of a vector z as follows:
class(z) = sign(<w,z> + b).
Mathematical Formulation: Dual. It is computationally simpler to solve the dual quadratic programming problem. To obtain the dual, take positive Lagrange multipliers αi multiplied by each constraint, and subtract from the objective function:
LP = <w,w>/2 − Σi αi (yi(<w,xi> + b) − 1),
where you look for a stationary point of LP over w and b. Setting the gradient of LP to 0, you get
w = Σi αi yi xi,  Σi αi yi = 0.
Substituting into LP, you get the dual LD:
LD = Σi αi − (1/2) Σi Σj αi αj yi yj <xi,xj>,
which you maximize over αi ≥ 0. In general, many αi are 0 at the maximum. The nonzero αi in the solution to the dual problem define the hyperplane, giving w as the sum of αi yi xi. The data points xi corresponding to nonzero αi are the support vectors.
The derivative of LD with respect to a nonzero αi is 0 at an optimum. This gives
yi(<w,xi> + b) − 1 = 0.
In particular, this gives the value of b at the solution, by taking any i with nonzero αi.
The dual is a standard quadratic programming problem. For example, the Optimization Toolbox quadprog solver solves
this type of problem.
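The discussion above follows MATLAB's svmtrain; an equivalent linear-SVM experiment can be sketched with scikit-learn (an assumption of this example, not a tool used in the paper), where support_vectors_ exposes the xi with nonzero αi.

```python
import numpy as np
from sklearn.svm import SVC

# Two separable point clouds standing in for lip-feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.support_vectors_)        # the xi with nonzero Lagrange multipliers
print(clf.predict([[1.5, 2.0]]))   # class(z) = sign(<w,z> + b)
```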
Nonseparable Data
Your data might not allow for a separating hyperplane. In that case, SVM can use a soft margin, meaning a hyperplane
that separates many, but not all data points.
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

597 www.ijergs.org

There are two standard formulations of soft margins. Both involve adding slack variables si and a penalty parameter C.
- The L1-norm problem is:
minimize over w, b, s:  <w,w>/2 + C Σi si
such that
yi(<w,xi> + b) ≥ 1 − si,  si ≥ 0.
The L1-norm refers to using si as slack variables instead of their squares. The SMO svmtrain method minimizes the L1-norm problem.
- The L2-norm problem is:
minimize over w, b, s:  <w,w>/2 + C Σi si²
subject to the same constraints. The QP svmtrain method minimizes the L2-norm problem.
In these formulations, you can see that increasing C places more weight on the slack variables si, meaning the optimization attempts to make a stricter separation between classes. Equivalently, reducing C towards 0 makes misclassification less important.
Mathematical Formulation: Dual. For easier calculations, consider the L1 dual problem to this soft-margin formulation. Using Lagrange multipliers μi, the function to minimize for the L1-norm problem is:
LP = <w,w>/2 + C Σi si − Σi αi (yi(<w,xi> + b) − (1 − si)) − Σi μi si,
where you look for a stationary point of LP over w, b, and positive si. Setting the gradient of LP to 0, you get
w = Σi αi yi xi,  Σi αi yi = 0,  αi = C − μi,  with αi, μi, si ≥ 0.
These equations lead directly to the dual formulation:
maximize over α:  Σi αi − (1/2) Σi Σj αi αj yi yj <xi,xj>
subject to the constraints
Σi yi αi = 0,  0 ≤ αi ≤ C.
The final set of inequalities, 0 ≤ αi ≤ C, shows why C is sometimes called a box constraint: C keeps the allowable values of the Lagrange multipliers αi in a "box", a bounded region.
The gradient equation for b gives the solution b in terms of the set of nonzero αi, which correspond to the support vectors.
You can write and solve the dual of the L2-norm problem in an analogous manner. For details, see Christianini and Shawe-Taylor [7], Chapter 6.
svmtrain Implementation. Both dual soft-margin problems are quadratic programming problems.
Internally, svmtrain has several different algorithms for solving the problems. The default Sequential Minimal Optimization (SMO) algorithm minimizes the one-norm problem. SMO is a relatively fast algorithm. If you have an Optimization Toolbox license, you can choose to use quadprog as the algorithm. quadprog minimizes the L2-norm problem; it uses a good deal of memory, but solves quadratic programs to a high degree of precision (see Bottou and Lin [2]). For details, see the svmtrain function reference page.
Nonlinear Transformation with Kernels
Some binary classification problems do not have a simple hyperplane as a useful separating criterion. For those problems,
there is a variant of the mathematical approach that retains nearly all the simplicity of an SVM separating hyperplane.
This approach uses these results from the theory of reproducing kernels:
- There is a class of functions K(x,y) with the following property. There is a linear space S and a
function φ mapping x to S such that
K(x,y) = <φ(x),φ(y)>.
The dot product takes place in the space S.
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

599 www.ijergs.org

- This class of functions includes:
o Polynomials: For some positive integer d,
K(x,y) = (1 + <x,y>)^d.
o Radial basis function: For some positive number σ,
K(x,y) = exp(−<(x−y),(x−y)>/(2σ²)).
o Multilayer perceptron (neural network): For a positive number p1 and a negative number p2,
K(x,y) = tanh(p1<x,y> + p2).
The mathematical approach using kernels relies on the computational method of hyperplanes. All the calculations for
hyperplane classification use nothing more than dot products. Therefore, nonlinear kernels can use identical calculations
and solution algorithms, and obtain classifiers that are nonlinear. The resulting classifiers are hypersurfaces in some space S, but the space S does not have to be identified or examined.
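A short sketch of the kernel trick in practice, again with scikit-learn rather than svmtrain: make_moons produces data that no single hyperplane separates in the input space, yet an RBF-kernel SVM classifies it while computing nothing beyond inner products K(x, y).

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Nonlinearly separable data: two interleaving half-circles
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

# The RBF kernel implicitly maps points into a space S where a separating
# hyperplane exists; S itself is never identified or examined.
clf = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)
print(clf.score(X, y))   # training accuracy close to 1.0
```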











Fig. 1. Tracking result using the proposed method.

Table 1. Computing time of the proposed algorithm.

Algorithm step             Extracting    Tracking
Iterations [average]       25            5
Computing time [average]   0.695 s       0.103 s

Tracking results are shown in Fig. 1 and Table 1.

It is found that the proposed algorithm has achieved a promising tracking result, which is robust against the appearance of teeth and tongue. Furthermore, the utilization of a 16-point deformable model to describe a lip shape is physically meaningful. In addition, the computing time for tracking one lip frame is less than that of the extracting process. In particular, when there exists a long lip sequence, it is effective to utilize the previous lip contour as an important parameter to track the current one. It is expected that such an operation can reduce a large amount of computing time.

5. CONCLUSION
In this paper, we have proposed a robust lip tracking algorithm using localized colour active contours and deformable
models. This approach is adaptive to the lip movements, and also robust against the appearance of teeth and tongue.
Hence, it provides a promising way for lip tracking.




REFERENCES:

[1] M.U. Ramos Sanchez, J. Matas, and J. Kittler, "Statistical chromaticity-based lip tracking with B-splines," in Proceedings of ICASSP 1997, vol. 4, pp. 2973-2976, 1997.
[2] A.W.C. Liew, S.H. Leung, and W.H. Lau, "Lip contour extraction from color images using a deformable model," Pattern Recognition, vol. 35, no. 12, pp. 2949-2962, 2002.
[3] G.I. Chiou and J.N. Hwang, "Lipreading from color video," IEEE Transactions on Image Processing, vol. 6, no. 8, pp. 1192-1195, 1997.
[4] I. Matthews, T.F. Cootes, J.A. Bangham, S. Cox, and R. Harvey, "Extraction of visual features for lipreading," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 198-213, 2002.
[5] N. Eveno, A. Caplier, and P.Y. Coulon, "Accurate and quasi-automatic lip tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 5, pp. 706-715, 2004.
[6] S.L. Wang, W.H. Lau, and S.H. Leung, "Automatic lip contour extraction from color images," Pattern Recognition, vol. 37, no. 12, pp. 2375-2387, 2004.
[7] S. Lankton and A. Tannenbaum, "Localizing region-based active contours," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2029-2039, 2008.
[8] M. Li and Y.M. Cheung, "Automatic lip localization under face illumination with shadow consideration," Signal Processing, vol. 89, no. 12, pp. 2425-2434, 2009.



Comparison between Two Compensation Current Control Methods
of Shunt Active Power Filter
Nenceey Jain, Amit Gupta
#1 PG Scholar, M.Tech. in Power System, GGCT, Jabalpur, 8109681397, nenceeyjec@gmail.com
Abstract: This paper presents the analysis and simulation, using Matlab/Simulink, of a Shunt Active Power Filter (SAPF) for reducing the harmonic currents generated by nonlinear loads, and compares two compensation current control methods. Due to the increasing use of power-electronics equipment alongside linear loads, the growth of harmonic disturbance in the AC mains currents has become a major concern because of its adverse effects on all equipment, mostly capacitors, transformers, and motors, causing additional losses, overloading, malfunctioning, overheating, and interference. The shunt active power filter compensates the harmonics of non-linear loads by injecting an equal but opposite harmonic compensating current, which restores a pure sinusoidal wave.
Keywords: Power System, Harmonic Distortion, Shunt Active Power Filter, Non-Linear Loads, Total Harmonic Distortion.
INTRODUCTION
Power-electronic control devices, due to their inherent nonlinearity, draw harmonic and reactive power from the supply mains. The wide use of power-electronic equipment alongside linear loads causes increasing harmonic distortion in the AC mains currents. Harmonic components are a serious and harmful problem in an electric power system. In three-phase systems, they can also cause unbalance and excessive neutral currents.
The injected harmonics, reactive power burden, unbalance, and excessive neutral currents cause low system efficiency and poor power factor, and also cause transients. These transients in turn affect the voltage at distribution levels. Excessive reactive power of loads increases the required generating capacity of generating stations and also increases the transmission losses in lines. Hence, supplying reactive power at the load end becomes essential. Most non-linear loads are based on solid-state converters, such as UPS and SMPS units. These non-linear loads draw current that is not sinusoidal and thus create voltage drops in distribution conductors.
The main adverse effects of harmonic current and voltage on power system equipment are overheating, overloading, perturbation of sensitive control and electronic equipment, capacitor failure, communication interference, process problems, motor vibration, resonance problems, and low power factor. As a result, effective harmonic suppression has become very important for both utilities and users. Active power filtering constitutes one of the most effective proposed solutions: an active power filter (APF) can solve the problems of harmonics and reactive power at the same time. The quality of electric power is deteriorating mainly due to current and voltage harmonics, negative-sequence components, voltage sag, voltage swell, etc.
In reference [15], the authors compared two current control methods for the shunt active power filter under unbalanced and non-sinusoidal conditions; their results showed the d-q method to be the best one, usable under any voltage condition.
Many theories have been developed for instantaneous current-harmonic detection in active power filters, such as the FFT (fast Fourier transform) technique, neural networks, the instantaneous p-q theory (instantaneous reactive power theory), the synchronous d-q reference frame theory, suitable analog or digital electronic filters separating successive harmonic components, and PLLs with fuzzy logic controllers. This paper deals with the modeling and simulation of a shunt active filter with the hysteresis current control method for harmonic compensation and power filtering; it then studies the compensation principle used for current-harmonic compensation, where the hysteresis control method provides a quick and easy response in the system. A comparative study of the two current control methods is carried out.



SHUNT ACTIVE POWER FILTER
Figure 1 shows the basic configuration of a shunt active filter for harmonic current compensation of a specific load. The shunt active filter injects into the line a harmonic current equal in magnitude and opposite in phase to the harmonic current produced by the load.

Fig. 1 Principle of Shunt Active Filter
INSTANTANEOUS REACTIVE POWER THEORY
Akagi et al. [1, 2] have proposed "The Generalized Theory of the Instantaneous Reactive Power in Three-Phase Circuits", also known as the instantaneous reactive power theory or p-q theory. In this theory, the instantaneous three-phase currents and voltages are transformed from a-b-c coordinates into α-β coordinates by the Clarke transformation, as shown in equations (1) and (2) respectively:

$$\begin{bmatrix} v_\alpha \\ v_\beta \end{bmatrix} = \sqrt{\tfrac{2}{3}} \begin{bmatrix} 1 & -1/2 & -1/2 \\ 0 & \sqrt{3}/2 & -\sqrt{3}/2 \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} \quad (1)$$

$$\begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} = \sqrt{\tfrac{2}{3}} \begin{bmatrix} 1 & -1/2 & -1/2 \\ 0 & \sqrt{3}/2 & -\sqrt{3}/2 \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} \quad (2)$$

The instantaneous real power is defined as in equation (3):

$$p = v_\alpha i_\alpha + v_\beta i_\beta \quad (3)$$

From the above equations, the instantaneous powers can be rewritten as shown in equation (4):

$$\begin{bmatrix} p \\ q \end{bmatrix} = \begin{bmatrix} v_\alpha & v_\beta \\ -v_\beta & v_\alpha \end{bmatrix} \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} \quad (4)$$

The instantaneous reactive power is set into opposite vectors in order to cancel the reactive component in the line current. The above equations yield eq. (5) for the compensating currents in the α-β frame:

$$\begin{bmatrix} i_{c\alpha}^{*} \\ i_{c\beta}^{*} \end{bmatrix} = \frac{1}{v_\alpha^2 + v_\beta^2} \begin{bmatrix} v_\alpha & -v_\beta \\ v_\beta & v_\alpha \end{bmatrix} \begin{bmatrix} p_c \\ q_c \end{bmatrix} \quad (5)$$

where $p_c$ and $q_c$ denote the real and reactive powers to be compensated. The compensating current of each phase can then be derived by using the inverse Clarke transformation, as shown in equation (6).
$$\begin{bmatrix} i_{ca}^{*} \\ i_{cb}^{*} \\ i_{cc}^{*} \end{bmatrix} = \sqrt{\tfrac{2}{3}} \begin{bmatrix} 1 & 0 \\ -1/2 & \sqrt{3}/2 \\ -1/2 & -\sqrt{3}/2 \end{bmatrix} \begin{bmatrix} i_{c\alpha}^{*} \\ i_{c\beta}^{*} \end{bmatrix} \quad (6)$$

Figure 2: Principle of instantaneous active and reactive power theory.
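A per-sample numerical sketch of the p-q computation above follows; it cancels the entire instantaneous reactive power q and, for brevity, omits the filter that separates the oscillating part of p, so it is a simplified illustration rather than the complete SAPF controller.

```python
import numpy as np

# Power-invariant Clarke matrix, as in equations (1) and (2)
C = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                   [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def pq_reference_currents(v_abc, i_abc):
    """Compensating currents (one sampling instant) that cancel q entirely."""
    v = C @ v_abc                                  # [v_alpha, v_beta]
    i = C @ i_abc
    q = v[0] * i[1] - v[1] * i[0]                  # instantaneous reactive power
    # Equation (5) with p_c = 0 and q_c = q
    ic = np.array([[v[0], -v[1]],
                   [v[1],  v[0]]]) @ np.array([0.0, q]) / (v @ v)
    return np.linalg.pinv(C) @ ic                  # inverse Clarke, equation (6)

v = np.array([1.0, -0.5, -0.5])    # balanced voltages at one instant
i = np.array([0.8, -0.1, -0.7])    # distorted load currents at the same instant
print(pq_reference_currents(v, i))
```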
SYNCHRONOUS REFERENCE D-Q THEORY
In this method, the Park transform is used to transform the load currents from the three-phase reference frame abc into the synchronous d-q reference frame in order to separate the harmonic contents from the fundamentals:

$$\begin{bmatrix} i_d \\ i_q \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix}$$

where θ is the angular position of the synchronous reference, a linear function of the fundamental frequency. The harmonic reference currents can be extracted from the load currents using a simple LPF. The currents in the synchronous reference frame can be decomposed into a dc term and an alternating term:

$$i_d = \bar{i}_d + \tilde{i}_d, \qquad i_q = \bar{i}_q + \tilde{i}_q$$

Only the alternating terms, which are related to the harmonic contents, appear at the output of the extraction system. The APF reference currents will then be:

$$i_d^{*} = \tilde{i}_d, \qquad i_q^{*} = \tilde{i}_q$$
In order to find the APF currents in the three-phase system, the inverse Park transform (followed by the inverse Clarke transformation) can be used:

$$\begin{bmatrix} i_\alpha^{*} \\ i_\beta^{*} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} i_d^{*} \\ i_q^{*} \end{bmatrix}$$

Fig 3 Principle of d-q method
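The d-q extraction chain can be sketched as follows, assuming SciPy is available for the low-pass filter; the sampling rate, cut-off frequency, and filter order are illustrative choices, and the PLL that supplies θ per sample is outside the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dq_harmonic_reference(i_alpha, i_beta, theta, fs=10_000.0, fc=25.0):
    """Park-rotate the currents, low-pass to isolate the dc (fundamental) term,
    keep the alternating remainder, and rotate back as the APF reference."""
    i_d = np.cos(theta) * i_alpha + np.sin(theta) * i_beta
    i_q = -np.sin(theta) * i_alpha + np.cos(theta) * i_beta
    b, a = butter(2, fc / (fs / 2))        # LPF extracts the dc components
    i_d_h = i_d - filtfilt(b, a, i_d)      # alternating (harmonic) parts
    i_q_h = i_q - filtfilt(b, a, i_q)
    ic_alpha = np.cos(theta) * i_d_h - np.sin(theta) * i_q_h   # inverse Park
    ic_beta = np.sin(theta) * i_d_h + np.cos(theta) * i_q_h
    return ic_alpha, ic_beta
```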
HARMONIC CURRENT CONTROL METHOD
The principle of hysteresis band current control can be seen in fig. 4. The difference between the reference value and the actual value is fed to a comparator with a tolerance band. The controller generates the sinusoidal reference current of the desired magnitude and frequency, which is compared with the actual line current. Hence, the actual current is forced to track the reference current within the hysteresis band, as shown in fig. 4 and fig. 5.

Fig.4 Basic principle of Hysteresis Current Control

Fig.5 Hysteresis Band
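The comparator logic of the hysteresis band controller reduces to a few lines; the sketch below is a two-level version evaluated once per sampling instant, with an illustrative band of 0.5 A.

```python
def hysteresis_switch(i_ref: float, i_actual: float, state: int, band: float = 0.5) -> int:
    """Two-level hysteresis comparator for one inverter leg."""
    error = i_ref - i_actual
    if error > band:       # actual current below the lower band: switch high
        return 1
    if error < -band:      # actual current above the upper band: switch low
        return 0
    return state           # inside the band: keep the previous switch state

print(hysteresis_switch(i_ref=5.0, i_actual=4.2, state=0))   # -> 1
```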
SIMULINK MODEL OF THE APF
The overall system model, containing the power source, the shunt active power filter, and the nonlinear loads, is shown in fig. 6. The main components of the system are described below:
- The power source, which was designed as a three-phase 11 kV/50 Hz voltage source connected in a Y configuration with neutral, and a three-phase RL branch.
- The single-phase nonlinear loads, consisting of a single-phase uncontrolled diode rectifier supplying a series RL load for phase A, a single-phase uncontrolled diode rectifier supplying a parallel RC load for phase B, and a single-phase uncontrolled diode rectifier supplying a series RL load for phase C.
- The PWM IGBT voltage source inverter, which contains a three-leg voltage source inverter with neutral-clamped DC capacitors and the control scheme, as shown in fig. 6.

Fig. 6 System model with filter

(a)


(b)
Fig.7 Model of APF (a) p-q method (b) d-q method

International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

609 www.ijergs.org

SIMULATION RESULT
The complete model of the active power filter is presented in fig. 6, and the results were obtained using the MATLAB/Simulink SimPowerSystems toolbox for a three-phase neutral-clamped APF compensating the harmonics and reactive power produced by nonlinear loads.
Fig. 8 shows the current waveforms for the system with and without the SAPF. Fig. 9 shows the simulation results obtained in the harmonic distortion analysis of the load current, for each phase with nonlinear load. Without the APF, the total harmonic distortion (THD) is 20.49%.
Fig. 10 shows the simulation result for the source current obtained with the APF using the p-q method to compensate the harmonics created by the nonlinear load. The THD of the source current is now 0.53% of the fundamental value, thus meeting the limit of the harmonic standard IEEE Std. 519-1992.
Fig. 11 shows the simulation result for the source current obtained with the APF using the d-q method to compensate the harmonics created by the nonlinear load. The THD of the source current is now 0.08% of the fundamental value, thus meeting the limit of the harmonic standard IEEE Std. 519-1992.
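THD figures like those above can be reproduced from a sampled waveform with a simple FFT-based estimate; the sketch below is a generic computation (not the SimPowerSystems measurement block) and assumes the window spans an integer number of fundamental cycles.

```python
import numpy as np

def thd_percent(signal, fs, f1=50.0, n_harmonics=19):
    """THD: RMS of harmonics 2..n relative to the fundamental, in percent."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mag = lambda f: spectrum[np.argmin(np.abs(freqs - f))]   # nearest FFT bin
    harm = np.sqrt(sum(mag(k * f1) ** 2 for k in range(2, n_harmonics + 1)))
    return 100.0 * harm / mag(f1)

# Fundamental plus 5th and 7th harmonics: expected THD about 18.8%
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)   # 10 full cycles of 50 Hz
i = np.sin(2*np.pi*50*t) + 0.17*np.sin(2*np.pi*250*t) + 0.08*np.sin(2*np.pi*350*t)
print(f"THD = {thd_percent(i, fs):.1f}%")
```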

(a)

(b)

(c)
Fig 8: Current Waveform of system (a) without SAPF, (b) with SAPF using p-q method, (c) with SAPF using d-q method


Fig. 9 Load Current (System without APF)

Fig.10 Source Current (System with APF using p-q method)

Fig.11 Source Current (System with APF using d-q method)
COMPARATIVE ANALYSIS
The comparison between the system without SAPF and with SAPF using the different current control methods is shown in tables 1 and 2. Table 1 shows the percentages of the individual harmonic distortions present in the system, and table 2 shows the total harmonic distortion present in the system before and after using the filter with the different control methods. From tables 1 and 2, the system with SAPF using the d-q method gives better results than the p-q method.
Table 1: Harmonic Improvement (in % of fundamental frequency component)

Harmonic order    Without SAPF    With SAPF (p-q method)    With SAPF (d-q method)
3rd               0.4%            0.22%                     0.03%
5th               16.94%          0.12%                     0.03%
7th               7.76%           0.09%                     0.02%
9th               0.03%           0.07%                     0.01%
11th              6.61%           0.06%                     0.01%
13th              3.74%           0.05%                     0.01%
15th              0.03%           0.04%                     0.01%
17th              3.18%           0.04%                     0.01%
19th              2.06%           0.03%                     0.01%

Table 2: Total Harmonic Distortion of System (in % of fundamental frequency component)

System                       Without SAPF    With SAPF (p-q method)    With SAPF (d-q method)
%THD (of fundamental)        20.49%          0.53%                     0.08%

The graph shown in figure 12 summarizes the performance of the distribution system without and with the shunt active power filter using the different control strategies.

Figure 12: Comparative Graph between System without and with SAPF
The graph shown in figure 13 summarizes the performance of the system with the shunt active power filter using the p-q and d-q methods. The results presented confirm the superior performance of the d-q method.

Figure 13: Comparative Graph between p-q and d-q Method
CONCLUSION
Simulation results using MATLAB/Simulink show that the shunt active filter gives effective compensation of harmonics and reactive power. The total harmonic distortion of the system with the APF using the p-q method is reduced to 0.53%, and the total harmonic distortion with the APF using the d-q method is reduced to 0.08%, both well below the 5% harmonic limit imposed by the IEEE-519 standard. As per these results, the d-q method gives better results than the p-q method.

REFERENCES:
[1] H. Akagi, "New trends in active filters for power conditioning," IEEE Transactions on Industry Applications, vol. 32, no. 6, pp. 1312-1322, Dec. 1996.
[2] H. Akagi, "Modern active filters and traditional passive filters," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 54, no. 3, 2006.
[3] Ali Ajami and Seyed Hossein Hosseini, "Implementation of a novel control strategy for shunt active filter," ECTI Transactions on Electrical Eng., Electronics, and Communications, vol. 4, no. 1, 2006.
[4] Hirofumi Akagi, "Active filters for power conditioning," in Timothy L. Skvarenina (ed.), The Power Electronics Handbook: Industrial Electronics Series, CRC Press, USA, chap. 17, pp. 30-63, 2002.
[5] F. Z. Peng, H. Akagi, and A. Nabae, "A novel harmonics power filter," IEEE Power Electronics Specialists Conference, April 11-14, PESC '88 Record, pp. 1151-1159, 1988.
[6] W. M. Grady, M. J. Samotyj, and A. H. Noyola, "Survey of active power line conditioning methodologies," IEEE Transactions on Power Delivery, vol. 5, no. 3, pp. 1536-1542, 1990.
[7] C. Sankaran, Power Quality, CRC Press, 2002.
[8] H. Akagi, E. Hirokazu Watanabe, and M. Aredes, Instantaneous Power Theory and Applications to Power Conditioning, IEEE Press, 2007.
[9] S. Buso, L. Malesani, and P. Mattavelli, "Comparison of current control techniques for active filter applications," IEEE Trans. Ind. Electron., vol. 45, no. 5, pp. 722-729, Oct. 1998.
[10] Karuppanan P, Kamala Kanta Mahapatra, Jeyaraman K, and Jeraldine Viji, "Fryze power theory with adaptive HCC based active power line conditioners," ICPS, Dec. 22-24, 2011.
[11] Bhakti I. Chaughule, Amit L. Nehete, and Rupali Shinde, "Reduction in harmonic distortion of the system using active power filter in MATLAB/SIMULINK," IJCER, vol. 3, issue 6, June 2013.
[12] George Adam, Alina G. Stan (Baciu), and Gheorghe Livint, "A MATLAB-SIMULINK approach to shunt active power filters," Technical University of Iasi, 700050 Iasi, Romania.
[13] Amit Gupta, Shivani Tiwari, and Palash Selot, "Reduction of harmonics by using active harmonic filter," International Journal of Engineering Research & Technology (IJERT), vol. 1, issue 9, November 2012, ISSN 2278-0181.
[14] Patel Pratikkumar T., P. L. Kamani, and A. L. Vaghamshi, "Simulation of shunt active filter for compensation of harmonics and reactive power," International Journal of Darshan Institute on Engineering Research & Emerging Technologies, vol. 2, no. 2, 2013.
[15] M. Suresh, S. S. Patnaik, Y. Suresh, and A. K. Panda, "Comparison of two compensation control strategies for shunt active power filter in three-phase four-wire system," Innovative Smart Grid Technology, IEEE PES, Jan. 17-19, 2011, pp. 1-6.
[16] Mohd Izhar Bin A Bakar, "Active power filter with automatic control circuit for neutral current harmonic minimization technique," Ph.D. Thesis, Universiti Sains Malaysia, 2007.













Low Voltage, Low Power Rail to Rail Operational Transconductance Amplifier with Enhanced Gain and Slew Rate
Sakshi Dhuware¹, Mohammed Arif²
¹M.Tech. 4th sem., GGITS Jabalpur, dhuware.sakshi@gmail.com
²Professor, GGITS Jabalpur, arif@ggits.org


Abstract: This paper deals with well-defined design criteria for rail-to-rail operational transconductance amplifiers. The system supply voltage is 1.6 V and the power consumption is up to 15.03 µW. The proposed amplifier was implemented in a 45 nm CMOS technology. Simulation results show that the proposed OTA achieves a high DC gain of 76.6 dB and a slew rate of 200 V/µs, with 87.67 dB PSRR and 82.66 dB CMRR.
Keywords: OTA, amplifier, transconductance, PSRR, CMRR, low voltage, RtR.

INTRODUCTION
Due to the high demand for smaller area (size) and longer battery life in portable applications across all market segments, including consumer electronics, medical, computers, and telecommunications, low-voltage and low-power silicon chip design has been growing rapidly. To reduce the current and power consumption of the system, the supply voltage is being scaled downward. The objective of this work is to implement the design of a low-power, low-voltage op-amp for telecommunications and biomedical applications [1].
In the design of most closed-loop systems, the OTA is the most challenging unit from a design perspective. It has to achieve high DC gain and low thermal and flicker noise, as well as the high bandwidth required for systems with high-frequency clocks, especially in switched-capacitor applications. Additionally, the power consumption of the OTA is one of the critical issues for applications with a low power consumption target. Slew rate and input common-mode range are other important aspects of the OTA [2]. Telescopic and folded-cascode structures are two common structures for single-stage op-amps.
The two main drawbacks of the former are a low input common-mode range and a large voltage headroom at the output, while the main drawbacks of the folded-cascode structure are higher power consumption and lower UGBW. In this work, to benefit from the high input common-mode range of the folded-cascode while also achieving higher DC gain and UGBW, the total transconductance of the amplifier is increased by adding extra signal paths from input to output [3].
Other techniques for increasing DC gain of the op-amp such as using positive feedback or gain boosting are based on
increasing output resistance of the op-amp and so only DC gain of the op-amp increases with these techniques and UGBW remains
constant [4]-[5]. The OTA is an amplifier without buffer at output stage drives only load .Which is called as VCCS because its
differential input voltage produces a current at output stage.
OTA is the backbone of analog circuits. OTA faces many difficulties with low voltage design providing high gain and low
power consumption [6]. To improve the gain, of cascoded transistors is not easy for low voltage and low power design due to its
output swing restriction. The current equation of OTA is shown in below which signifies that the transconductance of design is highly
depends on the biasing current [7]
Io = Gm {V (+) V (-)}
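As a quick numeric illustration of this bias dependence (all device values below are assumptions for illustration, not figures extracted from the design), a minimal Python sketch of the square-law transconductance and the resulting OTA output current:

```python
import math

# Square-law estimate of a MOS transconductance and the OTA output
# current Io = Gm * (V+ - V-).  mu*Cox, W/L and Id are illustrative
# assumptions, not values from the paper.
def gm_square_law(mu_cox, w_over_l, i_d):
    """gm = sqrt(2 * mu*Cox * (W/L) * Id) for a MOSFET in saturation."""
    return math.sqrt(2.0 * mu_cox * w_over_l * i_d)

gm = gm_square_law(mu_cox=200e-6, w_over_l=10.0, i_d=4.5e-6)  # A/V
v_diff = 1e-3                                                 # 1 mV input
io = gm * v_diff
print(f"gm = {gm*1e6:.1f} uA/V, Io = {io*1e9:.1f} nA")
```

Doubling the bias current raises gm only by a factor of sqrt(2), which is why the bias point sets the achievable transconductance.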

The proposed amplifier gives better performance while consuming a fraction of the power at a lower supply voltage.
The design procedure is based on the following main parameters: noise, phase margin, gain, load capacitance, slew rate (SR), input common-mode range, common-mode rejection ratio (CMRR) and power supply rejection ratio (PSRR), together with low power consumption.

PROPOSED METHODOLOGY
A differential amplifier is used as the input stage of an operational amplifier. The problem is that it behaves as a differential amplifier only over a limited range of common-mode input. Therefore, to make the operational amplifier versatile, its input stage should work over a rail-to-rail (RtR) common-mode input range. The most common method to achieve this range is to use a complementary differential amplifier at the input stage, where N1, N2 and P1, P2 constitute the n-type and p-type differential input pairs, respectively.

The NMOS differential pair is shown in Fig. 1, in which the input pair N1 and N2 is able to reach the positive supply rail. The range extends from the positive supply down to (Vgs,n + VDsat,b) above the negative supply; this minimum voltage is needed to keep the NMOS differential pair and the tail current source in saturation. The role of the tail current source is to suppress the effect of input common-mode level variations on the operation of N1 and N2 and on the output level. A similar analysis can be carried out for the PMOS differential pair.
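A small worked example of this common-mode limit, with assumed voltages for a 1.6 V supply (the numbers below are illustrative, not extracted device values):

```python
# Lower edge of the NMOS pair's common-mode range: the pair works from
# (Vss + Vgs_n + Vdsat_tail) up to the positive rail.
VSS, VDD   = 0.0, 1.6   # rails (V)
VGS_N      = 0.45       # input-device gate-source voltage (V), assumed
VDSAT_TAIL = 0.10       # tail current source saturation voltage (V), assumed

vcm_min_n = VSS + VGS_N + VDSAT_TAIL      # below this the N pair drops out
print(f"NMOS pair CM range: {vcm_min_n:.2f} V to {VDD:.2f} V")
# The complementary PMOS pair covers VSS up to (VDD - Vgs_p - Vdsat_tail),
# so the two pairs together span rail to rail.
```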
The proposed circuit is shown in Fig. 2. An RtR input means that the input signal can lie anywhere between the supply voltages with all transistors in the saturation region.

To obtain an RtR common-mode input range, two complementary differential pairs are required to form the input stage. The N-channel input pair, N1 and N2, is able to reach the positive supply rail, while the P-channel input pair, P1 and P2, is able to reach the negative supply rail.

Fig. 1 NMOS differential pair

The constant-gm control circuit is realized through transistors N3-N6 and P3-P6. This circuit maintains a constant tail current when either of the two differential pairs turns off. Vbn_tail and Vbp_tail are the control voltages of the N3 and P3 MOSFETs.
gm,np = gm,n + gm,p

gm,n = √(2 µn Cox (W/L) ID)

where gm,n and gm,p are the transconductances of the NMOS and PMOS pairs, respectively.

To describe the operation of the constant-gm control circuit, suppose first that the PMOS and NMOS differential pairs are both in operation and that transistors P3 and N3, acting as tail current sources, provide the same current to the PMOS and NMOS differential pairs, respectively.

The constant-gm circuits (P4-P6) and (N4-N6) are used to control the transconductance. By adjusting the width-to-length ratios of the input differential pairs, the tail current can be kept constant and stable, and the input differential pairs remain biased in the saturation region under all conditions [18].
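One classical square-law constant-gm scheme boosts the surviving pair's tail current by a factor of four when the other pair cuts off; the sketch below illustrates that idea with assumed device values (it is a generic illustration of the principle, not the exact control circuit of Fig. 2):

```python
import math

# Square-law constant-gm illustration: in the middle of the CM range both
# pairs run with tail current I; near a rail the control loop steers 4*I
# into the surviving pair, keeping gm_n + gm_p unchanged.
BETA = 2e-3      # mu*Cox*(W/L) of one input device (A/V^2), assumed
I    = 4.5e-6    # per-pair tail current in the mid range (A), assumed

def pair_gm(i_tail):
    # each device of the pair carries i_tail/2: gm = sqrt(2*BETA*i_tail/2)
    return math.sqrt(BETA * i_tail)

both_on = pair_gm(I) + pair_gm(I)   # mid common-mode range
only_n  = pair_gm(4 * I)            # input near the positive rail
print(f"gm (both pairs on)     = {both_on*1e6:.1f} uA/V")
print(f"gm (one pair, 4x tail) = {only_n*1e6:.1f} uA/V")  # same value
```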



Fig. 2 Proposed rail to rail OTA

SIMULATION AND RESULT
The proposed rail to rail operational transconductance amplifier operates from a 1.6 V power supply and was implemented in a standard 45 nm CMOS process with 9.04 µA of current consumption. The proposed amplifier gives better performance while consuming only a fraction of the power (15 µW) at a lower supply voltage.
The gain of the proposed RtR OTA is 76.6 dB, the phase margin is 38.03 deg, the slew rate is 200 V/µs, the CMRR is 82.66 dB and the PSRR is 87.67 dB. The simulation results are summarized in Table I below.
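As a hedged consistency check, assuming the common figure-of-merit definition FOM = GBW × C_L / I_supply (the paper does not quote the load capacitance, so it is backed out here):

```python
# Back out the implied load capacitance from the reported FOM, GBW and
# supply current, assuming FOM = GBW * C_L / I in MHz*pF/mA.
GBW = 331.2      # MHz (reported)
I   = 9.04e-3    # mA  (reported 9.04 uA current consumption)
FOM = 3676.59    # MHz*pF/mA (reported)

c_load = FOM * I / GBW
print(f"implied load capacitance ~ {c_load:.3f} pF")  # ~0.1 pF
```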





Fig.3. Gain (dB) and Phase Margin (deg) Versus Frequency (Hz)

Table I
Simulation Result Summary

Parameter                     Result
Technology                    45 nm
Slew rate                     200 V/µs
CMRR                          82.66 dB
Gain                          76.6 dB
Phase margin                  38.03 deg
Supply voltage                1.6 V
Figure of merit (FOM)         3676.59 MHz·pF/mA
Maximum output current        4.1 µA
PSRR                          87.67 dB
Power consumption             15.04 µW
GBW                           331.2 MHz


CONCLUSION
Operational amplifier input stages that utilize a single differential pair have a common-mode input range that extends to only one rail, which limits the application of such operational amplifiers. An RtR common-mode input range is a desirable characteristic for the input stage and makes the op-amp more versatile. This characteristic can be achieved using a compound structure called the complementary differential pair (both NMOS and PMOS differential pairs).
The proposed RtR OTA does not require extra circuitry, which reduces design complexity, area and power consumption. It has been demonstrated that the proposed circuit can boost the gain, phase margin, slew rate, CMRR and PSRR using a 1.6 V supply voltage.

REFERENCES:
[1] Suparshya Babu Sukhavasi, Susrutha Babu Sukhavasi, Dr. Habibulla Khan, S. R. Sastry Kalavakolanu, Vijaya Bhaskar Madivada and Lakshmi Narayana Thalluri, "Design of Low Power Operational Amplifier by Compensating the Input Stage", International Journal of Engineering Research and Applications (IJERA), vol. 2, pp. 1283-1287, Mar.-Apr. 2012.
[2] B. Razavi, Design of Analog CMOS Integrated Circuits, McGraw-Hill, 2001.
[3] F. Roewer and U. Kleine, "A Novel Class of Complementary Folded-Cascode Opamps for Low Voltage", IEEE Journal of Solid-State Circuits, vol. 37, no. 8, August 2002.
[4] C. A. Laber and P. R. Gray, "A positive nonlinearity transconductance amplifier with applications to high-Q CMOS switched-capacitor filters", IEEE Journal of Solid-State Circuits, vol. 23, no. 6, pp. 1370-1378, 1988.
[5] J. Lloyd and Hae-Seung Lee, "A CMOS op amp with fully differential gain enhancement", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 41, no. 3, March 1994.
[6] H. Soni and N. Dhavse, "Design of Operational Transconductance Amplifier using 0.35 µm technology", International Journal of Wisdom Based Computing, vol. 1, pp. 28-31, 2011.
[7] B. Razavi, Design of Analog CMOS Integrated Circuits, McGraw-Hill, 2000.
[8] Sudhir M. Mallya and Joseph H. Nevin, "Design Procedures for a Fully Differential Folded Cascode CMOS Operational Amplifier", IEEE Journal of Solid-State Circuits, vol. 24, no. 6, pp. 1737-1740, December 1989.
[9] Katsufumi Nakamura and L. Richard Carley, "A current-based positive-feedback technique for efficient cascode bootstrapping", Symposium on VLSI Circuits Digest of Technical Papers, pp. 107-108, May 1991.
[10] K. Nakamura and L. R. Carley, "An enhanced fully differential folded cascode op-amp", IEEE Journal of Solid-State Circuits, vol. 27, no. 4, pp. 563-568, April 1992.
[11] Rida S. Assaad and Jose Silva-Martinez, "The recycling folded cascode: a general enhancement of the folded cascode amplifier", IEEE Journal of Solid-State Circuits, vol. 44, no. 9, pp. 2535-2542, September 2009.
[12] Y. L. Li, K. F. Han, X. Tan, N. Yan and H. Min, "Transconductance enhancement method for operational transconductance amplifiers", IET Electronics Letters, vol. 46, no. 9, pp. 1321-1322, September 2010.
[13] Abhay Pratap Singh, Sunil Kumar Pandey and Manish Kumar, "Operational Transconductance Amplifier for Low Frequency Application", International Journal of Computer Technology & Applications, vol. 3, no. 3, May-June 2012.
[14] W. Sansen, Analog Design Essentials, Springer, Dordrecht, The Netherlands, 2006.
[15] Katsufumi Nakamura and L. Richard Carley, "An Enhanced Fully Differential Folded-Cascode Op Amp", IEEE Journal of Solid-State Circuits, vol. 27, no. 4, 1992.
[16] Rida S. Assaad and Jose Silva-Martinez, "Enhancing general performance of folded cascode amplifier by recycling current", Electronics Letters, vol. 43, no. 23, November 2007.
[17] Xiao Zhao, Huajun Fang and Jun Xu, "A Low Power Constant-gm Rail-to-Rail Operational Transconductance Amplifier by Recycling Current", IEEE International Conference on Electron Devices and Solid-State Circuits (EDSSC), November 2011.
[18] Sakshi Dhuware and Mohammed Arif, "Enhanced Gain Constant Gm Low Power Rail to Rail Operational Transconductance Amplifier for Wideband Application", International Journal of Science and Research (IJSR), vol. 3, no. 9, pp. 1257-1260, September 2014.

dB-Linear Variable Gain Amplifier with Addition of Diode Connected Load

Pashupati Nath Jha (1), Vinod Kapse (2)

(1) M.Tech 4th sem., GGITS Jabalpur, parasjha2013@gmail.com
(2) Professor, GGITS Jabalpur, vinodkapse@ggits.org

Abstract: In this paper, a CMOS linear-in-dB variable gain amplifier (VGA) is presented. Based on a diode-connected load technique that improves the gain range, a 108.77 dB (-68.67 dB to 40.10 dB) continuous gain range is achieved with a single-stage structure. Simulation results show that the VGA core consumes 0.6 µW from a 1 V supply with 0.04 dB gain error.
Keywords: CMOS, variable gain amplifier (VGA), automatic gain control (AGC), dB-linear, diode connected load, gain range, detector, gain error.

INTRODUCTION
Variable gain amplifiers (VGAs) are indispensable building blocks in modern wireless communication systems such as Bluetooth, WLANs and UWB. The main function of a VGA is to provide a fixed output power from widely varying input signal levels, thereby increasing the dynamic range of the entire system.

A dB-linear gain characteristic is required for the VGA to maintain a uniform loop transient response and settling time in an automatic gain control (AGC) loop [1] and to prevent a resolution problem of the control voltage over a wide variable gain range. For most VGA applications, the dB-linear characteristic should be accurate across a large signal range with a small gain error [2], [3].

Although many techniques have been employed to generate the exponential function, these techniques require complex circuitry with extra chip area [4]-[7]. One of the critical issues in dB-linear VGA design is building a dB-linear gain characteristic. With a bipolar junction transistor (BJT), a dB-linear VGA can be easily designed using its exponential characteristic [8]-[10]. With MOS devices, however, it is difficult to obtain a dB-linear function from the inherent square-law and linear characteristics. Although a dB-linear VGA using a MOS device in the subthreshold region has been reported [11], it is limited to certain applications owing to its large noise contribution. A VGA with a current squaring technique introduces a single-stage CMOS VGA with continuous exponential tuning characteristics and proposes a new structure to extend the decibel-linear gain range [12].

A gain error compensation technique has been used to provide accurate exponential approximations over the small and large gain ranges of a dB-linear VGA [13]. In this paper, a dB-linear VGA with the addition of a diode-connected load is presented, which offers improved performance compared with the VGA using the gain error compensation technique. The proposed method reduces power consumption without the use of any additional circuits, resulting in a robust VGA. The approximation function of the VGA is very accurate across a wide dB-linear range owing to the proposed diode-connected load technique. The dB-linear gain is linearly controlled by the gate bias of the control loop circuits, which is very simple.
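The gain-error metric itself can be illustrated with a short sketch: fit the gain-in-dB versus control voltage to a straight line and report the worst-case deviation. The control law below is a generic stand-in with an assumed 0.04 dB ripple, not the measured curve:

```python
import numpy as np

# dB-linearity check: deviation of gain(Vc) in dB from its best-fit line.
vc = np.linspace(0.2, 0.8, 61)                           # control sweep (V)
gain_db = -68.67 + (40.10 + 68.67) * (vc - 0.2) / 0.6    # ideal dB-linear law
gain_db += 0.04 * np.sin(8 * np.pi * vc)                 # assumed curvature

slope, offset = np.polyfit(vc, gain_db, 1)
error_db = gain_db - (slope * vc + offset)
print(f"slope = {slope:.1f} dB/V, gain error = +/-{np.max(np.abs(error_db)):.3f} dB")
```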
Variable gain amplifiers are used in automatic gain control (AGC) amplifiers as part of a feedback loop, shown in Fig. 1, where the amplitude of the output signal is kept constant for all input signal levels.


Fig.1. Diagram of AGC
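A minimal numerical sketch of the AGC loop of Fig. 1, with an assumed dB-linear control law and loop gain (both are illustrative assumptions, not parameters of the proposed circuit):

```python
# AGC loop: a detector measures the VGA output amplitude, the error against
# a setpoint is integrated, and the integral drives the VGA control voltage.
def vga_gain(vc_volts):
    gain_db = -68.67 + 181.3 * vc_volts          # assumed dB-linear law
    return 10 ** (gain_db / 20.0)

target = 1.0                                     # desired output amplitude
vc, k_int = 0.35, 0.01                           # control voltage, loop gain
for amp_in in (0.02, 0.40):                      # input level steps by 26 dB
    for _ in range(500):                         # let the loop settle
        amp_out = vga_gain(vc) * amp_in
        vc += k_int * (target - amp_out)         # integrate the error
    print(f"in={amp_in:.2f} V -> vc={vc:.3f} V, out={amp_out:.3f} V")
```

For both input levels the loop converges to the control voltage that restores the output amplitude to the setpoint, which is the behavior a dB-linear law keeps uniform across the gain range.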

PROPOSED METHODOLOGY
In this paper, a dB-linear VGA with the addition of a diode-connected load is presented. A VGA provides a means of amplifying such signals with less distortion or saturation, and can be used as the controlled element of an automatic gain control (AGC) circuit in a receiver, or as the controlling amplifier in the time-gain-control circuit of an ultrasound system. The load of the differential pair need not be implemented with a linear resistor, so it is desirable to replace the resistor with a MOS device.

In this circuit, M3 and M4 are always in the saturation region because the drain and gate are at the same potential: a MOS transistor, a three-terminal device, can be used as a resistor (a two-terminal device) by shorting the gate to its own drain. A resistor takes more area and is noisier, so the resistor is replaced by a MOS device. The basic differential amplifier with a diode-connected load is shown in Fig. 2. The voltage gain Av is given by
Av = -gmN (gmP^-1 || rON || rOP) ≈ -(gmN / gmP)

where the subscripts N and P denote NMOS and PMOS, respectively.
Fig.2. Basic differential amplifier with diode connected load


Av ≈ -√[µn (W/L)N / (µp (W/L)P)]
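A one-line numeric check of this ratio-defined gain, with an assumed mobility ratio and sizings (illustrative values, not the paper's device dimensions):

```python
import math

# Gain of the diode-connected-load pair: Av ~ -sqrt(mu_n (W/L)_N / (mu_p (W/L)_P)).
MU_N_OVER_MU_P = 3.0        # typical electron/hole mobility ratio, assumed
WL_N, WL_P = 12.0, 4.0      # input and load aspect ratios, assumed

av = -math.sqrt(MU_N_OVER_MU_P * WL_N / WL_P)
print(f"Av ~ {av:.2f} V/V")   # gain set purely by ratios, hence PVT-insensitive
```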


A variable gain amplifier is a special kind of amplifier whose gain can be dynamically controlled in real time by an externally applied control voltage. In its simplest form, it can be visualized as an amplifier with an electronic gain control. The proposed VGA circuit is shown in Fig. 3, where N1 and N2 form the n-type differential input pair. The control voltage is applied at the gate of MOS N3, which controls the gain of the device.
A three-terminal MOS in the saturation region acts as an amplifier and in the linear region acts as a resistor; when diode-connected it becomes a two-terminal device that behaves as a resistive load. P1 and P4 are such two-terminal devices connected on the load side. The proposed circuit operates from a very low power supply and provides a large dB-linear range with low power consumption [14].


Fig. 3. Proposed diagram of VGA

SIMULATION AND RESULT
The dB-linear VGA was implemented in a 45 nm CMOS process. The power consumption of this VGA is 0.6 µW from a 1 V power supply, it requires little area, and it achieves a 96 dB dB-linear range. The proposed VGA can extend the dB-linear range with a smaller gain error than the gain error compensation technique. The simulation results are summarized in Table I.


Table I
Simulation result summary

Design                                  [13]            This work
Gain range (dB)                         -13 to 63       -68.67 to 40.10
dB-linear range (dB)                    50              96
Gain error (dB)                         0.5             0.04
Supply (V)                              1.2             1
No. of stages                           3               1
Technology                              65 nm           45 nm
Current consumption (without buffer)    1.8 mA          0.6 µA
No. of MOS (with buffer)                26              8
Gain bandwidth (MHz)                    14.8            58.53
Power consumption                       3.84 mW         0.6 µW

Fig. 4. Gain (dB) versus frequency (Hz)


Fig. 5. Measured gain of the dB-linear VGA from 67.78 kHz to 58.53 MHz

CONCLUSION
In this paper, we introduced a diode-connected load method for a dB-linear VGA. The proposed approximation does not require an extra circuit for generating the exponential function, and it drastically reduces design complexity and chip area. Moreover, the dB-linear gain can be controlled easily using the gate bias. Because of the simple control method, this VGA is robust to process variation. A VGA based on the proposed method can be built from any VGA that has a linear gain characteristic. Using the diode-connected load, we achieved a 96 dB dB-linear gain range within 0.04 dB gain error.

REFERENCES:
[1] Y. S. Youn and J. H. Choi, "A CMOS IF transceiver with 90 dB linear control VGA for IMT-2000 application", in IEEE Symp. VLSI Circuits Dig. Tech. Papers, 2003, pp. 131-134.
[2] P. Antoine et al., "A direct-conversion receiver for DVB-H", IEEE J. Solid-State Circuits, vol. 40, no. 12, pp. 2536-2546, Dec. 2005.
[3] B. E. Kim et al., "A CMOS single-chip direct conversion satellite receiver for digital broadcasting system", in IEEE Symp. VLSI Circuits Dig. Tech. Papers, 2002, pp. 131-134.
[4] C. C. Cheng and S. I. Liu, "Pseudo-exponential function for MOSFETs in saturation", IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 46, no. 6, pp. 789-801, Jun. 1999.
[5] H. Elwan, A. Tekin and K. Pedrotti, "A differential-ramp based 65 dB-linear VGA technique in 65 nm CMOS", IEEE J. Solid-State Circuits, vol. 44, no. 9, pp. 2503-2514, Sep. 2009.
[6] Y. Zheng et al., "A CMOS VGA with DC-offset cancellation for direct conversion receivers", IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 1, pp. 103-113, Jan. 2009.
[7] S. Vlassis, "CMOS current-mode pseudo-exponential function circuit", Electron. Lett., vol. 37, pp. 998-1000, 2001.
[8] T.-W. Pan and A. A. Abidi, "A 50 dB variable gain amplifier using parasitic bipolar transistors in CMOS", IEEE J. Solid-State Circuits, vol. 24, no. 4, pp. 951-961, Aug. 1989.
[9] S. Otaka, G. Takemura and H. Tanimoto, "A low-power low-noise accurate linear-in-dB variable-gain amplifier with 500-MHz bandwidth", IEEE J. Solid-State Circuits, vol. 35, no. 12, pp. 1942-1948, Dec. 2000.
[10] H. D. Lee, K. A. Lee and S. Hong, "A wideband CMOS variable gain amplifier with an exponential gain control", IEEE Trans. Microw. Theory Tech., vol. 55, no. 6, pp. 1363-1373, Jun. 2007.
[11] T. Yamaji et al., "A temperature stable CMOS variable-gain amplifier with 80-dB linearly controlled gain range", IEEE J. Solid-State Circuits, vol. 37, no. 5, pp. 553-558, May 2002.
[12] Xin Cheng, Haigang Yang and Fei Liu, "A 47-dB Linear CMOS Variable Gain Amplifier Using Current Squaring Technique", IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), pp. 76-79, Dec. 2010.
[13] Inyoung Choi, Heesong Seo and Bumman Kim, "Accurate dB-Linear Variable Gain Amplifier with Gain Error Compensation", IEEE Journal of Solid-State Circuits, vol. 48, no. 2, February 2013.
[14] Pashupati Nath Jha and Vinod Kapse, "Low Power, Low Voltage 95.1-dB Linear Variable Gain Amplifier with Diode Connected Load", International Journal of Science and Research (IJSR), vol. 3, no. 9, pp. 1272-1276, September 2014.

Field Data Based Mathematical Modeling (FDMM): State of the Art and Literature Review

Ashish D. Vikhar (1), Dr. D. S. Deshmukh (2), Dr. J. P. Modak (3)

(1) Ph.D. Research Scholar and Lecturer in Mechanical Engineering (Asst. Professor), Government Polytechnic, Jalgaon (MS), India, ashishvikhar09@gmail.com
(2) Professor & Head, Department of Mechanical Engineering, SSBT's College of Engineering and Technology, Bambhori, Jalgaon, deshmukh.dheeraj@gmail.com
(3) Emeritus Professor & Dean (R&D), Priyadarshani College of Engineering, Nagpur (MS), India, jayant_modak@yahoo.co.in

Abstract: The term "model" refers to the ensemble of equations which describe and interrelate the variables and parameters of a physical system or process. The term "field data based mathematical modeling" in turn refers to the derivation of appropriate equations that are solved for a set or system of process variables and parameters. These solutions are often referred to as simulations, i.e., they simulate or reproduce the behavior of physical systems and processes. This paper summarizes a number of field data based mathematical modeling case studies and reviews the state of the art and the related literature.
Key words: field data based mathematical modeling (FDMM), optimization, validation.
1. Introduction
A mathematical model is an algebraic relationship between a response variable and independent variables; the response variable is also called the dependent variable. Any phenomenon can be represented mathematically once the physics involved in it is known. Mathematical models are of three types: (1) logic based, (2) field data based and (3) experimental data based. Some phenomena can be modeled by applying the basic balances of mechanics. In certain situations, however, it is not possible to formulate a mathematical model for a complex phenomenon on the basis of the basic balances; it then becomes inevitable to collect experimental data for the process and to use the generated data to formulate a generalized algebraic relationship among the various physical quantities involved, which may be called experimental data based modeling. Field data based modeling is applicable to any type of man-machine system [1]. A field data based model forms the relationship between input and output variables; this type of modeling is used to improve the performance of a system by suggesting or modifying the inputs to improve the output [2]. Data sets contain information on the behavior of the process variables, often much more than can be learned from just looking at plots of the observed data. Mathematical models based on observed input and output data from real-life situations help us gain new information and understanding from these data.

When one is studying a completely physical phenomenon that is so complex that it is not possible to formulate a logic based model correlating its causes and effects, and it is also not possible to plan the activity on the lines of design of experimentation [11], one is required to go in for a field data based model [3], [4], [5]. Hence the approach of formulating a field data based model is used to analyze such a process. The steps involved in formulating a model for such a complex phenomenon are as follows [4]:

- Identify the causes and effects by performing a qualitative analysis of the physics of the phenomenon.
- Establish a dimensional equation for the phenomenon. Once a dimensional equation is formed, it is a confirmation that all the involved physical quantities have been considered.
- Perform test planning, which involves deciding the test envelope, test points and test sequence. Test envelope: decide the range of variation of each individual independent term. Test points: decide and specify the values of the independent terms at which the experimental setup is set during experimentation. Test sequence: decide the sequence in which the test points are set during experimentation.
- Decide the plan of experimentation: whether to adopt a classical plan or a factorial plan.
- Work out the physical design of the experimental setup, including deciding specifications and procuring instrumentation, and subsequently fabricate the setup.
- Execute the experimentation as per the test planning and gather data regarding the causes (inputs) and effects (responses).
- Purify the gathered data using statistical methods and finally establish the relationship between the outputs (effects) and the inputs (causes) using various graph papers; this is the field data based mathematical model.
2. Formulation of the FDBM Model
2.1 Study of the present method
Study the present method or process and enumerate the problems with the existing system. Collect data on system specifications, input variables and the performance of the existing system. Identify the sources of randomness in the system, i.e., the stochastic input variables. Select an appropriate input probability distribution for each stochastic input variable and estimate the corresponding parameters.
2.2 Identification of independent and dependent variables
This step involves identification of the causes and effects by performing a qualitative analysis of the physics of the phenomenon. The causes are the independent variables. Other parameters which cannot be identified as inputs are considered extraneous variables, such as loss of human energy by other means or the effect of enthusiasm and motivation of the workers performing the activity.
2.3 Reduction of independent variables by adopting dimensional analysis
This step involves establishing a dimensional equation for the phenomenon. Once a dimensional equation is formed, it is a confirmation that all the involved physical quantities have been considered; the physical quantities are combined into dimensionless input quantities, which are the causes (inputs). According to Theories of Engineering Experimentation by H. Schenck Jr., Chapter 4 [10], on the choice of primary dimensions: most systems require at least three primaries, but the analyst is free to choose any reasonable set he wishes, the only requirement being that the variables must be expressible in his system. In this research all the variables are expressed in mass (M), length (L) and time (T); hence M, L and T are chosen for the dimensional analysis, and the process variables, their symbols and dimensions are listed in M, L, T form.
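As a toy illustration of this reduction (using a textbook pipe-flow variable set, not the study's own variables), a dimensionless pi group can be computed as a nullspace vector of the M-L-T exponent matrix:

```python
import numpy as np

# Each column holds one variable's exponents in (M, L, T); any nullspace
# vector of this matrix gives exponents that make the product of the
# variables dimensionless.  Variables D, V, rho, mu are a textbook stand-in.
#               D   V  rho  mu
dim = np.array([[0,  0,  1,  1],     # M
                [1,  1, -3, -1],     # L
                [0, -1,  0, -1]],    # T
               dtype=float)

_, _, vt = np.linalg.svd(dim)
exps = vt[-1] / vt[-1][0]            # nullspace vector, scaled so D's exp = 1
print(np.round(exps, 3))             # [1, 1, 1, -1] -> pi = rho*V*D/mu
```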
2.4 Selection of the mathematical approach and development of the model
The mathematical relation between inputs and outputs could take any form: polynomial, exponential or log-linear. The Buckingham pi theorem [8] was found suitable for developing the model. It states that if the inputs and outputs are represented as dimensionless pi terms by dimensional analysis, then they can be related as in eqn. (1):
Y = K · A^a · B^b · C^c · D^d · E^e    (1)
Moreover, the controls over the variables are not affected.
2.5 Combining the variables into pi terms
The obtained independent variables can be utilized only after converting them into standard dimensionless form. The various parameters recorded were converted into the desired form, i.e., dimensionless pi terms. A, B, C, D and E are the final independent pi terms, representing the workers' data, environmental data, tools data, workstation data and materials data, and Y is the response variable. The dimensionless statement (1) is transformed into a linear relationship using the log operation. The log-linear relationship so obtained is easy to understand and does not damage any facet of the original relationship. For determining the indices of the relation between the output and the inputs, multiple regression and MATLAB software are used; thus the model for Y is obtained.
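A minimal sketch of this log-linear fitting step, on synthetic stand-in data rather than actual field data (the exponents 0.7, -0.3, 1.2 and constant 1.8 are invented for the demonstration):

```python
import numpy as np

# Fit the power-law model Y = K * A^a * B^b * C^c: taking logs turns it
# into linear regression for log K and the exponents.
rng = np.random.default_rng(0)
n = 200
A, B, C = rng.uniform(0.5, 2.0, (3, n))                 # dimensionless pi terms
Y = 1.8 * A**0.7 * B**-0.3 * C**1.2 * rng.lognormal(0, 0.02, n)

X = np.column_stack([np.ones(n), np.log(A), np.log(B), np.log(C)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
K, a, b, c = np.exp(coef[0]), *coef[1:]
print(f"K={K:.2f}, a={a:.2f}, b={b:.2f}, c={c:.2f}")    # ~1.8, 0.7, -0.3, 1.2
```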
3. Literature review
The work done in the area of field data based mathematical modeling is summarized below.
3.1 Field data based model of the Scheffler reflector
Rupesh Patil, Gajanan K. Awari and Mahendra P. Singh [7] discussed work carried out on the Scheffler reflector, focusing on the scope for experimental data based modeling to establish relationships among the different variables of the Scheffler reflector. The Scheffler reflector is studied with a typical experimental plan of simultaneous variation of the independent variables. The experimental response data are analyzed by formulating dimensional equations and validated using neural network analysis.
3.2 Optimization of the passenger car scheduled service process
K. S. Dixit, J. P. Modak and M. P. Singh [6] explained an approach to formulate a field data based model (FDBM) for optimization of the passenger car scheduled service process. In view of the steadily increasing competition in the automobile sector, automobile companies are making great efforts to improve their after-sales service. One of the most important aspects of after-sales service is the scheduled servicing of a vehicle. Scheduled servicing offers certain advantages, such as pre-planning (ordering spares, more evenly distributed costs, no initial costs for instruments for supervision of equipment) and avoiding inconvenience; however, delays during scheduled servicing often negate these advantages. Hence a reliable and valid approach, field data based mathematical modeling and its optimization, was generated for the scheduled servicing functions of automobiles in general and passenger cars in particular.
3.3 Field data based model for the turning process
Mangesh R. Phate, Chetan K. Mahajan, Mangesh L. Mote, Bharat V. Patil and Harshal G. Patil [3] gave a clear idea of the detailed methodology of mathematical model formulation for the surface roughness, tool temperature, machine vibration and operator pulse rate during the turning process. Such a model helps to predict and optimize the critical process parameters which affect the quality, the productivity and the safety of the operator during a step turning process. The work details the formulation of a field data based model to analyze the impact of various machining field parameters on the machining of Aluminium 6063, SS 304, brass, EN1A and EN8. In the Indian scenario, the majority of machining operations are still executed manually; this needs to be studied, and a mathematical relation developed which simulates the real input and output data directly from the machining field where the work is actually being executed. The findings indicate that the topic under study is of great importance, as no such approach of field data based mathematical simulation had previously been adopted for the formulation of a mathematical model.
3.4 Formwork of reinforced concrete structures using field data based mathematical modeling
Satya Prakash Mishra and D. K. Parbat [8] carried out a sensitivity analysis of the manual formwork activity of reinforced concrete structures and applied the field data based mathematical modeling approach (FDMM). They demonstrated that sensitivity analysis can be used to analyze how sensitive a system response is to changes in the key input parameters of the construction phenomenon.
3.5 Field data based model for the manual stirrup making activity
S. R. Ikhar, Dr. A. V. Vanalkar and Dr. J. P. Modak [9] made a detailed study of the present manual stirrup making activity, which indicates that the process suffers from various drawbacks such as lack of accuracy and low production rate, and results in severe fatigue in the operator. The stirrup, or lateral tie, is one of the essential elements of reinforced cement concrete in civil construction. Stirrups are used for strengthening columns and beams, avoiding buckling of long slender columns and avoiding sagging of horizontal beams. The construction operator not only subjects his hands to hours of repetitive motion but also sometimes suffers internal injury to a body organ, namely the disorder carpal tunnel syndrome. In order to remove the above drawbacks, the authors determined an appropriate sample size for the activity and formulated various field data based mathematical models (FDMM), such as a multivariable linear model, polynomial model, exponential model and logarithmic model, on the basis of the gathered field data by applying theories of experimentation. The formulated models can be used to optimize the human energy of the worker, the production rate and the inaccuracy of the stirrups.
3.6 Formulation of a generalized field data based model for the tractor axle assembly process of an enterprise
Manish Bhadke and Dr. K. S. Zakiuddin [2] described an approach for the formulation of a generalized field data based model for the tractor axle assembly process of an enterprise. The tractor axle assembly process, a complex phenomenon, is considered for study. The aim of field data based modeling of the axle assembly process is to improve the performance of the system by correcting or modifying the inputs so as to improve the output. The reduction of human energy expenditure while performing axle assembly is the main objective of the study, since reduced human energy consumption will increase the overall productivity of the assembly process. The work identifies the major ergonomic parameters and other workstation-related parameters which affect the productivity of the axle assembly process: raw material dimensions, workstation dimensions, energy expenditure of the workers, anthropometric data of the workers and working conditions. Working conditions include the humidity of the air, atmospheric temperature, noise level, intensity of light, etc., at the workstation, which influence the productivity of the assembly operation. Out of all the variables identified, the dependent and independent variables of the axle manufacturing system are distinguished. The number of variables involved was large, so they are reduced using dimensional analysis into a few dimensionless pi terms. The Buckingham pi theorem is used to establish dimensional equations exhibiting the relationships between the dependent and independent terms. A mathematical relationship is established between the output and input parameters, which shows which input variables should be maximized or minimized to optimize the output variables. Once the model is formulated, it is optimized using an optimization technique. Sensitivity analysis is a tool which can be used to find the effect of the input variables on the output variables (see the sketch below); simultaneously, it is interesting to know the influence of one parameter over another. The model will be useful for an entrepreneur of an industry to select optimized inputs so as to get the targeted responses.
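For power-law models of the form (1), the normalized sensitivity of Y to each pi term is simply that term's fitted exponent, as this short sketch (with assumed exponents, not values from any of the cited studies) illustrates:

```python
# For Y = K * A^a * B^b * ..., the normalized sensitivity (dY/Y)/(dA/A)
# equals the exponent a, so ranking sensitivities reduces to comparing
# exponent magnitudes.  The exponents below are assumed for illustration.
exponents = {"workers": 0.7, "environment": -0.3, "tools": 1.2}
for term, exp in sorted(exponents.items(), key=lambda kv: -abs(kv[1])):
    print(f"{term:12s}: a 1% increase changes Y by about {exp:+.1f}%")
```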
3.7 Prediction of the effect of unbalance on a journal bearing through a field data based mathematical model
Vijaykumar S. Shende and Prerna S. Borkar [16] presented work on a possible approach which provides the prediction of unbalance through a mathematical model, and the effect of that unbalance on a journal bearing. Nowadays, vibration-based condition monitoring is widely used in several core industries, such as cement companies, thermal power stations and rolling mills. The technique prevents excessive failure of machine components. Hence such companies have special departments which handle problems related to the health of machines; sometimes the maintenance department of the company carries this responsibility. Many process machines are used in industry, some of which have rotor systems, and in some machines the journal bearings bear the loads of different rotors.
3.8 Formulation of a field data based model (FDBM) for any man-machine system
O. S. Bihade, S. S. Kulkarni, J. P. Modak and K. S. Zakiuddin [5] discussed the approach to formulate a field data based model (FDBM) for any man-machine system in construction activity. A man-machine system means an activity occurring with the involvement of a human operator, either male or female, with the help of some tools used to interact with the material. The common building materials used in various activities are bricks, cement, coarse aggregate, fine aggregate, water, mild steel bars, timber, marble, granite, glass, etc. The construction methods have been practiced over several decades, yet no investigation had been made regarding the appropriate use of posture, tool parameters and construction materials for each construction activity. It was therefore felt necessary to ascertain the scope for improvement in the method of performing a construction activity. It is necessary to form such a field data based model for deciding the strengths and weaknesses of the traditional method of performing any construction activity; once the weaknesses are known, corrective action can be decided. Specific applications to civil engineering activities are treated, and the investigation reports field data based modeling of some of the construction activities.
3.9 Fall in liquor revenue in terms of various causes, by applying field data based mathematical modeling
Satish Chaturvedi, Shubha Johri and J. P. Modak [10] described how, before India gained independence from the British regime, a large number of leaders of India had to undertake strong agitation against the British Government, one of the prominent leaders being Mr. M. K. Gandhi. During the period 1920 to 1942 in the Central Provinces and Berar, specifically during June 1930 to September 1930, strong agitations took place aimed at reducing the income of the Government by reducing liquor consumption, and several events took place towards this objective. Based on these facts, the paper attempts to present the entire agitation as one social phenomenon in the form of a field data based mathematical model correlating the fall in liquor revenue with the various causes responsible for this fall. It is only through a mathematical model that it is possible to get a quantitative idea of the intensity of the interaction of causes and effects of any phenomenon, be it scientific, socio-economic or of any other type. The approach of the field data based model [11] is particularly applicable in such a situation, as this is a field phenomenon. Such models serve as most reliable tools to plan future activities of this kind; this could be known as a process of prognosis.
3.10 Application of field data based modeling to the corrugated box production process
Sachin G. Mahakalkar, Dr. Vivek H. Tatwawadi, Jayant P. Giri and Dr. J. P. Modak [12] presented response surface methodology (RSM), a statistical method useful in the modeling and analysis of problems in which the response variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a corrugated box production process (similar to field data based mathematical modeling). The purpose of this research is to create response surface models through regression on experimental data which has been reduced using dimensional analysis (DA), to obtain optimal processing conditions. Studies carried out for corrugated sheet box manufacturing industries having man-machine systems revealed the contribution of many independent parameters to the cycle time. The independent parameters include anthropometric data of the workers, personal data, machine specifications, workplace parameters, product specifications, environmental conditions and the mechanical properties of the corrugated sheet; their effect on the response parameter, cycle time, was previously unknown. The developed model was simulated and optimized with the aid of MATLAB R2011a, and the computed value of the cycle time was obtained and compared with the experimental value.
3.11 Modeling and simulation of a human powered flywheel motor by field data based modeling
As per the geographical survey of India, A. R. Lende and J. P. Modak [13] observed that about 65% of the human population lives in rural areas where urban resources such as electricity and employment accessibility are very scarce. The country is still struggling with the fundamental needs of every individual. A country with an immense population living in villages ought to have research in areas which focus on and utilize the available human power. Authors related to this work had already developed a pedal operated human powered flywheel motor (HPFM) as an energy source for process units. The various process units tried so far are mostly rural based, such as brick making machines (both rectangular and keyed cross-sectioned), low head water lifting, wood turning, wood strip cutting, electricity generation, etc. This machine system comprises three subsystems, namely (i) the HPFM, (ii) a torsionally flexible clutch (TFC) and (iii) a process unit. Because human power is used as the source of energy, the process units face energy fluctuation during supply; to smooth this rise and fall of the energy, the concept of the HPFM was introduced. During its operation it was observed that the productivity depends greatly on the rider, which has an enormous effect on the quality and quantity of the product. This work takes a step towards the development of a controller which will reduce variations in the productivity. A. R. Lende and J. P. Modak contribute the development of an optimal model through an artificial neural network, which enables experimental results to be predicted accurately for seen and unseen data. The paper evaluates the ANN modeling technique on the HPFM by altering the various training parameters and selecting the best value of each parameter. The field data based mathematical model so developed could then be utilized in the design of a physical controller.
3.12 Formulation of a field data based model for surface roughness
Mangesh R. Phate and Dr. V. H. Tatwawadi [14] focused on a new approach of model formulation using response surface methodology (RSM) in the conventional turning (CT) of ferrous and nonferrous materials. The data were collected from the actual field where the work is carried out, and a random plan of experimentation based on the industry's interests was considered for the data collection. The various independent parameters considered in this research are operator data, tool data, workpiece data, cutting process parameters, machine data and environmental parameters, while the dependent parameter is the surface quality achieved during the conventional turning process, measured in terms of the surface roughness of the finished product.
4. Summary of the work done in field data based mathematical modeling
The key findings of different authors in field data based mathematical modeling are summarized below.
Sr. No. 1: Rupesh Patil, Gajanan K. Awari, Mahendra P. Singh (2011), "Formulation of Mathematical Model and Neural Network Analysis of Scheffler Reflector". Key findings: the study developed dimensionless correlations for analyzing the performance of the Scheffler reflector. Dimensional analysis shows that the generated water temperature is determined primarily by the ratio of the product of wind speed and time of operation to dish size. The models have been formulated mathematically for the local conditions. After training the artificial neural network, every case of the experimental results was found to be in good agreement with the predicted values obtained by the ANN. The results show that the mathematical models can be successfully used for the computation of the dependent terms for a given set of independent terms.

Sr. No. 2: Satish Chaturvedi, Shubha Johri, J. P. Modak (2013), "Formulation of Mathematical Model of Picketing of Liquor Shops and Warehouses". Key findings: based on the numerical data established and applying the methodology of model formulation, the mathematical model for the falling liquor sales is formulated. The value of the curve fitting constant in this model is 54.5, which collectively represents the combined effect of all extraneous variables.

Sr. No. 3: K. S. Dixit, J. P. Modak, M. P. Singh (2012), "Optimization of scheduled servicing functions of passenger cars using a mathematical modeling approach". Key findings: a generalized field data based model was developed to simulate the scheduled servicing process for passenger cars. They found that vehicle design parameters such as the accessibility of the air filter, fuel filter and oil filter have the maximum influence on the cycle time of scheduled servicing of the passenger cars. The difficulty in dismantling, changing seals and assembling also appeared to have a significant effect on the cycle time. Inspection for leakage becomes difficult and time consuming if accessibility is poor. Anthropometric factors seem to have an impact on the cycle time inasmuch as the service operations are performed in all three positions, i.e., sitting on legs, bending and standing. Other influencing factors include workplace-related parameters.

Sr. No. 4: Mangesh R. Phate, Chetan K. Mahajan, Mangesh L. Mote, Bharat V. Patil, Harshal G. Patil (2013), "Investigation of Turning Process Using Field Data Based Approach in Indian Small Scale Industries". Key findings: a generalized field data based model was developed to simulate the step turning process for aluminium and brass. The generalized model formulation approach provided an excellent and simple way to analyze a complex engineering process where the impact of field data dominates the performance.

Sr. No. 5: Satya Prakash Mishra, D. K. Parbat (2012), "Sensitivity analysis of multi parameter mathematical model in reinforced concrete construction". Key findings: the field data based modeling concept was found very useful and can be applied to any complex construction activity, as the observations for the variables are obtained directly from the workplace and include all kinds of data such as workers' anthropometrics, environmental conditions, the tools used and their geometry, the layout of the workstation and material properties. Modeling and proper analysis can suggest a correct method of doing such activities; modifying the tool geometry and tool materials along with changes in the workstation layout will improve productivity, reduce losses of materials and losses due to errors in construction work, and make the construction more ergonomic.

Sr. No. 6: S. R. Ikhar, Dr. A. V. Vanalkar, Dr. J. P. Modak (2013), "Field Data Based Mathematical Model for Stirrup Making Activity in Civil Construction". Key findings: the stirrup, or lateral tie, is one of the essential elements of reinforced cement concrete in civil construction. The process suffers from various drawbacks such as lack of accuracy and low production rate, and results in severe fatigue in the operator; the operator not only subjects his hands to hours of repetitive motion but also sometimes suffers internal injury, namely carpal tunnel syndrome. To remove these drawbacks, the authors determined an appropriate sample size for the activity and formulated various field data based mathematical models (FDBM), such as a multivariable linear model, polynomial model, exponential model and logarithmic model, on the basis of gathered field data by applying theories of experimentation.

Sr. No. 7: Manish Bhadke, Dr. K. S. Zakiuddin (2013), "Formulation of Field Data Based Model for Productivity Improvement of an Enterprise Manufacturing Tractor Axle Assembly: an Ergonomic Approach". Key findings: the aim of field data based modeling of the axle assembly process is to improve the performance of the system by correcting or modifying the inputs to improve the output. It was found that reduced human energy consumption will increase the overall productivity of the assembly process. The work identifies the major ergonomic parameters and other workstation-related parameters which affect the productivity of the axle assembly process.

Sr. No. 8: Prof. Girish D. Mehta, Prof. Vijaykumar S. Shende, Prof. Prerna S. Borkar (2013), "A Mathematical Model for Vibration Based Prognosis for Effect of Unbalance on Journal Bearing". Key findings: (1) as the unbalance mass increases, the amplitude at the 1x frequency of the journal bearing increases; (2) for this unbalance phenomenon, a mathematical model for prognosis of the amplitude at the 1x frequency of the rotor is established for each individual bearing; (3) as the unbalance mass is increased, the coefficient of friction between the journal and the oil film increases.

Sr. No. 9: O. S. Bihade, S. S. Kulkarni, J. P. Modak, K. S. Zakiuddin (2012), "Mathematical Modeling and Simulation of Field Data based Model for Civil Activity". Key findings: the paper details the use of contemporary techniques for the study, comparison and generalized approach to the FDMM of any man-machine system. By this means, once the weaknesses are known, corrective action can be decided. Specific applications to civil engineering activities are treated; the investigation reports field data based modeling of some of the construction activities, restricted either exclusively to a single-storied residential building or at most to a building with a G+1 floor.

Sr. No. 10: Sachin G. Mahakalkar, Dr. Vivek H. Tatwawadi, Jayant P. Giri, Dr. J. P. Modak (2013), "Corrugated box production process optimization using dimensional analysis and response surface methodology". Key findings: the study illustrates how dimensional analysis (DA) can be applied to significantly reduce the number of independent variables used to optimize the cycle time as the response variable using response surface methodology (RSM) (like FDMM). Using DA, 43 independent variables were reduced to 7 dimensionless pi terms, which can greatly help in constructing a response surface approximation function of fewer variables.

Sr. No. 11: A. R. Lende, J. P. Modak (2013), "Modeling and simulation of human powered flywheel motor for field data in the course of artificial neural network: a step forward in the development of artificial intelligence". Key findings: the optimization methodology adopted is unique and rigorously derives the most optimum solution for the field data available for the human powered flywheel motor. The effect of each ANN parameter on the prediction of the network is observed very carefully by varying that parameter.
5. Conclusion
This paper presented an overview of field data based modeling and its formulation. Further, it described a detailed review of the various methods used for establishing field data based mathematical models. The performance characteristics and usefulness of the various models were critically examined. Existing field data based models, based on both long-term data and short-term measured data, were also presented.
References
[1] J. P. Modak and K. S. Zakiuddin, "Mathematical modeling and simulation of field data based model for civil activity", International Journal of Scientific & Engineering Research, vol. 3, issue 3, March 2012.
[2] Manish Bhadke and Dr. K. S. Zakiuddin, "Formulation of Field Data Based Model for Productivity Improvement of an Enterprise Manufacturing Tractor Axle Assembly: an Ergonomic Approach", Proceedings of the 1st International and 16th National Conference on Machines and Mechanisms (iNaCoMM2013), IIT Roorkee, India, Dec. 18-20, 2013.
[3] Mangesh R. Phate, Chetan K. Mahajan, Mangesh L. Mote, Bharat V. Patil and Harshal G. Patil, "Investigation of Turning Process Using Field Data Based Approach in Indian Small Scale Industries", International Journal of Research in Mechanical Engineering & Technology, vol. 3, issue 2, May-Oct. 2013.
[4] J. P. Modak and A. R. Bapat, "Various Experiences of a Human Powered Flywheel Motor", Human Power (Technical Journal of IHPV), no. 54, Spring 2003, pp. 21-23.
[5] O. S. Bihade, S. S. Kulkarni, J. P. Modak and K. S. Zakiuddin, "Mathematical Modeling and Simulation of Field Data Based Model for Civil Activity", International Journal of Scientific & Engineering Research, vol. 3, issue 3, March 2012.
[6] K. S. Dixit, J. P. Modak and M. P. Singh, "Optimization of scheduled servicing functions of passenger cars using a mathematical modeling approach", International Journal of Advances in Engineering Research (IJAER), vol. 3, issue 4, April 2012.
[7] Rupesh Patil, Gajanan K. Awari and Mahendra P. Singh, "Formulation of Mathematical Model and Neural Network Analysis of Scheffler Reflector", VSRD International Journal of Mechanical, Auto. & Prod. Engg., vol. 1 (1), 2011.
[8] Satya Prakash Mishra and D. K. Parbat, "Sensitivity analysis of multi parameter mathematical model in reinforced concrete construction", International Journal of Civil and Structural Engineering, vol. 3, no. 1, 2012.
[9] S. R. Ikhar, Dr. A. V. Vanalkar and Dr. J. P. Modak, "Field Data Based Mathematical Model for Stirrup Making Activity in Civil Construction", Proceedings of the 1st International and 16th National Conference on Machines and Mechanisms (iNaCoMM2013), IIT Roorkee, India, Dec. 18-20, 2013.
[10] Satish Chaturvedi, Shubha Johri and J. P. Modak, "Formulation of Mathematical Model of Picketing of Liquor Shops and Warehouses".
[11] J. P. Modak, S. P. Mishra, O. S. Bihade and D. K. Parbat, "An Approach to Simulation of a Complex Field Activity by a Mathematical Model", Industrial Engineering Journal, vol. II, no. 2, February 2011, pp. 11-14.
[12] Sachin G. Mahakalkar, Dr. Vivek H. Tatwawadi, Jayant P. Giri and Dr. J. P. Modak, "Corrugated box production process optimization using dimensional analysis and response surface methodology", International Journal of Engineering Science & Advanced Technology (IJESAT), vol. 3, issue 3, pp. 96-105.
[13] A. R. Lende and J. P. Modak, "Modeling and simulation of human powered flywheel motor for field data in the course of artificial neural network: a step forward in the development of artificial intelligence", International Journal of Research in Engineering and Technology (IJRET), vol. 02, issue 12, Dec. 2013.
[14] Mangesh R. Phate and Dr. V. H. Tatwawadi, "Formulation of a Field Data Based Model for a Surface Roughness using Response Surface Method", International Journal of Science, Engineering and Technology Research (IJSETR), vol. 2, issue 4, April 2013.
[15] Sachin G. Mahakalkar, Dr. Vivek H. Tatwawadi, Jayant P. Giri and Dr. J. P. Modak, "The Impact of Dimensional Analysis on Corrugated Sheet Manufacturing Process using Buckingham Pi Theorem", International Journal of Engineering Science and Innovative Technology (IJESIT), vol. 2, issue 4, July 2013.
[16] Prof. Girish D. Mehta, Prof. Vijaykumar S. Shende and Prof. Prerna S. Borkar, "A Mathematical Model for Vibration Based Prognosis for Effect of Unbalance on Journal Bearing", International Journal of Engineering Research and Applications (IJERA), ISSN 2248-9622, vol. 3, issue 1, January-February 2013, pp. 537-544.
International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

641 www.ijergs.org

Active and Reactive Power Control Through a P-Q Controller Based System to
Replicate the Behavior of the Full Power Converter of a Wind Turbine
Abdullah Al Mahfazur Rahman

Southeast University, Dhaka, Bangladesh,
Email: rajib.aust.eee@gmail.com


Abstract: This paper presents a technique to simulate the active and reactive power control of a full power converter based wind
turbine. The implementation of a full power converter in the wind turbine has enhanced the quality of the power supplied to the
grid: it provides better controllability and full control over real and reactive power. Active and reactive power are essential control
variables for the grid operator. Conventional generation companies provide some margin of reactive power support along with the
active power delivered to the grid, but for renewable energy producers this remains a matter of concern. In this paper a P-Q
controller is introduced for controlling the active and reactive power of a full power converter based wind turbine which injects
power into the PCC. The design implements a PI controller based system with an externally controlled current source. The
simulation model has been prepared in PSCAD/EMTDC.

Keywords: P-Q controller, point of common coupling (PCC), transmission system operator (TSO), grid code, full power
converter, voltage source converter (VSC).
INTRODUCTION
The driving force of a society is the development of its energy producing infrastructure. As the world advances, it becomes
essential to find new sources of energy to meet future demand. Supply cannot be based solely on conventional sources such as
thermal, nuclear and hydro power, because these sources are declining rapidly and their environmental impact is alarming for
society. In search of a better solution, modern energy technology is moving towards renewable sources, i.e. wind, solar and wave
energy. So far, their contribution to meeting demand is quite modest. The main disadvantages of these renewable sources are that
they are quite expensive and less flexible compared to conventional power plants, but their enormous potential and their
environmentally friendly character make them the alternative choice for future energy. Currently the most effective wind power
plants have an efficiency of about 50%, and the present focus is on making them cheaper, more reliable and more flexible [1]. The
technological development of wind power is vast in Europe, America and some Asian countries such as India and China. For
renewable energy sources to perform well and overcome their limitations, the major challenge is efficient integration with the
conventional grid. The characteristics of conventional energy sources are known to the system operators, but the behavior of
renewable sources is quite unpredictable because they rely directly on nature: a PV cell depends on solar irradiation, and wind
energy depends on the wind speed, which varies from hour to hour, day to day and month to month. The system operators expect
these renewable sources to provide some control over active and reactive power, which is included in the grid code that applies to
them.

Control over active and reactive power, both in steady state and during system dynamics, is the area of concern. With the help of
modern technology, the full power converter based wind turbine system can overcome this problem.
In this paper a simple way is presented to represent the real and reactive power behavior of a wind turbine supplying real power,
and in some cases reactive power support, to the grid. A P-Q controller based system is presented to simulate the active and
reactive power control of a full power converter based wind turbine in both normal and abnormal grid conditions.
ACTIVE AND REACTIVE POWER CONTROL OF A GRID
Electric grids are basically of two types: AC grids and DC grids. In a DC grid the total power is simply the active power, the
product of voltage and current, and the conduction loss depends only on this active power.
In a conventional AC grid the total power is the apparent power, the geometric sum of active and reactive power [8]. In an AC grid
both the voltage and the current are sinusoidal quantities oscillating at a frequency of 50 or 60 Hz. Their product is the power, and
it is purely active power only if there is no phase difference between voltage and current. If there is a phase shift between voltage
and current, the instantaneous power oscillates between positive and negative values [8]. If the phase difference between current
and voltage is 90°, the total power is reactive, because the active power oscillates with equal positive and negative values and its
average is zero. This situation arises when the system consists of a pure inductive or capacitive element: for a pure inductance the
current lags the voltage by 90°, and for a pure capacitance the current leads the voltage by 90°.
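As a minimal numerical sketch of this point (assuming an illustrative 230 V / 10 A rms single-phase example, which is not a value from the paper), the following Python snippet shows the average of the instantaneous power dropping to roughly zero when the phase shift reaches 90°:

```python
import numpy as np

# Instantaneous power p(t) = v(t) * i(t) in a 50 Hz system, for two
# phase shifts between voltage and current (0 and 90 degrees).
f = 50.0                                    # grid frequency, Hz
w = 2.0 * np.pi * f
t = np.linspace(0.0, 2.0 / f, 2000)         # two full cycles
V_pk = 230.0 * np.sqrt(2)                   # assumed 230 V rms
I_pk = 10.0 * np.sqrt(2)                    # assumed 10 A rms
for phi in (0.0, np.pi / 2.0):
    v = V_pk * np.cos(w * t)
    i = I_pk * np.cos(w * t - phi)
    p = v * i                               # instantaneous power
    print(f"phi = {phi:.2f} rad -> average power = {p.mean():7.1f} W")
# phi = 0 gives ~2300 W (all active); phi = pi/2 gives ~0 W:
# the power merely oscillates back and forth, i.e. it is purely reactive.
```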

Consider a transmission grid connected to a renewable energy source such as a wind turbine: the line has reactive components
alongside its resistive components. If a transformer is present between the generating source and the grid, it adds inductive
reactance, and on the consumer side the loads are mostly inductive. The grid therefore has to provide compensation for reactive
power, because reactive power is not consumed by the load; it simply flows back and forth in the system and contributes to the
conduction losses. That is why compensation is required. Generally, large generating stations supply active power at a leading
power factor, which means they provide reactive power compensation for the inductive load. To compete with the conventional
energy sources, wind farms also have to provide reactive power support, which is imposed by the transmission system operators
(TSO) through the grid code criteria.

FULL POWER CONVERTER BASED WIND TURBINE WITH ACTIVE AND REACTIVE POWER SUPPORT PHENOMENON
The improvement of modern technology has helped to develop some essential features of the wind turbine, such as its size, the
quality of the output power and the technology used. One of the most important improvements is the development of the full
power converter with variable speed operation. In a full power converter based wind turbine the generator may be a permanent
magnet synchronous generator or an induction generator, with or without a gearbox, as shown in Figure-1. All the energy produced
by this kind of wind turbine passes through the converter, so the converter can isolate the system during a major grid disturbance.
With varying wind speed the electrical frequency of the generator can change while the power frequency in the grid remains
unchanged, which allows variable speed operation of the wind turbine [3].


Figure-1 Full power converter based wind turbine [5].

A full power converter consists of two voltage source converters: one controls the generator side and the other the grid side. Each
converter can absorb or generate reactive power independently [7]. The generator side converter is basically a diode rectifier or a
pulse width modulated (PWM) converter, whereas the grid side converter is a conventional PWM converter [3]. The output power
of the generator depends on the rotor speed, which in turn depends on the wind speed. The wind speed varies widely and the rotor
speed varies with it, as shown in Figure-2; a controllable output power is made possible by the rotor side converter, which helps
the rotor adjust its speed. The grid side converter provides a controllable active power output without consuming reactive power;
rather, it can provide some reactive power when needed by the grid [4].



Figure-2 Power and Rotor speed curve with respect to wind speed.

DESIGN OF THE SYSTEM
A P-Q controller based system is designed to replicate the behavior of a full power converter, considering the nature of the wind
turbine and how it interacts with the connected grid. The essential features of the P-Q controller are control over the active power,
which is performed by the generator side converter, and control over the reactive power, which is performed by the grid side
converter. By using a P-Q controller, the detailed model of the full power converter based wind turbine can be left out of the
network during grid code analysis simulations, making the system simpler.
As mentioned before, wind power generation varies with the wind speed, so power is expected to be injected at the PCC of the
grid in a controlled way; this task is performed by the grid side converter of the full power converter. In the designed model the
reference values of active and reactive power are set at the PCC, and the turbine side converter is expected to achieve these values
through its controllability [5]. The P-Q controller can exchange a given amount of real and reactive power at the point of common
coupling. Figure-3 shows the connection of the load to the PCC [5].

Figure-3 Externally controlled current source [5].

At the PCC the voltage is maintained constant, so the power at that point can be controlled by controlling the current through it,
and vice versa. The design block is shown in Figure-4.

The entire system can be represented by the block diagram shown in Figure-4. It can be divided into the following blocks:
- Externally controlled current source
- 3-phase voltage measurement
- Active and reactive power measurement
- PLL block
- d-q to αβ and αβ to 3-phase current converter
- P-Q controller


Figure-4 Block representation of the system [5].

EXTERNALLY CONTROLLED CURRENT SOURCE
An externally controlled current source is a current source that responds to a reference current: if a 3-phase reference current is
set, it generates the corresponding 3-phase load current. The reference current is the output of the controller.

3-PHASE VOLTAGE MEASUREMENT
The three phase voltage is measured at the PCC through the 3-phase voltage measurement block. It is essential for the conversion
process in the PLL block, which is described later.

ACTIVE AND REACTIVE POWER MEASUREMENT
The active and reactive power measurement block senses the actual power at the PCC, which is used as the actual value of P and
Q and compared with the reference values of active power (P_ref) and reactive power (Q_ref), respectively.

PHASE LOCK LOOP (PLL)
The output of the P-Q controller is the d-q component of the current, as described later in the P-Q controller section, but the
reference for the externally controlled current source is the 3-phase current. According to [6], the d-q current is converted into the
3-phase current using the d-q to αβ conversion, the so-called Park transformation, followed by the αβ to 3-phase conversion, the
so-called Clarke transformation. For converting d-q to αβ the voltage angle θ is needed, and to obtain this angle a PLL (phase lock
loop) is introduced.
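The conversion chain just described can be sketched in a few lines. The function below is a generic inverse Park followed by an inverse Clarke transform in the common amplitude-invariant form; the paper states a power-invariant convention, which would add a scaling factor of sqrt(2/3), so treat this only as an illustration of the structure:

```python
import numpy as np

def dq_to_abc(i_d: float, i_q: float, theta: float):
    """Convert d-q currents to 3-phase currents via the alpha-beta frame.

    theta is the voltage angle supplied by the PLL.
    Amplitude-invariant form, shown for illustration only.
    """
    # Inverse Park: rotate the d-q quantities back by the angle theta.
    i_alpha = i_d * np.cos(theta) - i_q * np.sin(theta)
    i_beta = i_d * np.sin(theta) + i_q * np.cos(theta)
    # Inverse Clarke: project alpha-beta onto the three phase axes.
    i_a = i_alpha
    i_b = -0.5 * i_alpha + (np.sqrt(3.0) / 2.0) * i_beta
    i_c = -0.5 * i_alpha - (np.sqrt(3.0) / 2.0) * i_beta
    return i_a, i_b, i_c

# Example: a pure d-axis current of 1 p.u. at theta = 0 lands entirely
# on the alpha axis, so phase a carries the full 1 p.u.
print(dq_to_abc(1.0, 0.0, 0.0))   # -> (1.0, -0.5, -0.5)
```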

There are basically two types of PLL arrangement. One is the voltage oriented PLL, in which the voltage vector is aligned with
the d-axis and the q-axis component equals zero. The other is the flux oriented PLL, where the voltage vector is aligned with the
q-axis and the d-axis component equals zero; this type is normally used in controlling electric drives, while the voltage oriented
PLL is used in transmission and distribution systems. The point of consideration in this paper is the voltage at the PCC, so the
voltage oriented PLL is used here. Figure-5 shows the block diagram and Figure-6 the vector diagram of the voltage oriented PLL.
The controller is voltage oriented, so the PLL makes Vq = 0; during steady state operation it locks the controller phase voltage to
the phase voltage of the PCC [6].


Figure-5 Design of PLL [5].

The transfer function of the PI controller is taken from [6].

F_c,PLL = k_p,PLL + k_i,PLL / s    (1)

where the proportional gain k_p,PLL = √2·α and the integrator gain k_i,PLL = α² are taken from [2]. The bandwidth is
α = 2πf_PLL, and f_PLL is chosen as 5 Hz; this slow PLL is used for the design.
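Under the reconstruction above (the k_p,PLL = √2·α, k_i,PLL = α² design rule with bandwidth α = 2πf_PLL is an assumption consistent with the stated 5 Hz choice, not an explicit formula from the paper), the numerical gains work out as follows:

```python
import numpy as np

f_pll = 5.0                      # Hz, the "slow" PLL chosen in the text
alpha = 2.0 * np.pi * f_pll      # bandwidth in rad/s
k_p_pll = np.sqrt(2.0) * alpha   # proportional gain (assumed design rule)
k_i_pll = alpha ** 2             # integrator gain (assumed design rule)
print(f"alpha = {alpha:.2f} rad/s, k_p = {k_p_pll:.2f}, k_i = {k_i_pll:.2f}")
# -> alpha = 31.42 rad/s, k_p = 44.43, k_i = 986.96
```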


Figure-6 Voltage oriented PLL vector diagram [5].
P-Q CONTROLLER
To design the control mechanism of the P-Q controller, a PI controller has been selected because it is simple and the ratings of
current and voltage are not a matter of concern [5]. The system is considered power invariant, so the active and reactive power
equations in the d-q frame can be written as:

P = v_d·i_d + v_q·i_q    (2)
Q = v_q·i_d − v_d·i_q    (3)

As the PLL used in the system is voltage oriented, the q-axis component of the voltage is zero, so the equations become:

P = v_d·i_d    (4)
Q = −v_d·i_q    (5)

The active and reactive power can be controlled by keeping the voltage at the PCC constant, so the current is the only quantity in
equations (4) and (5) that needs to be controlled. The reference values are the chosen values of active and reactive power, whereas
the actual values are measured at the PCC. The error signals that pass through the PI controllers are generated by comparing these
two sets of values, for the active power and the reactive power respectively. The current i_d is thus found from the active power
controller, and its equation is:

i_d = (P_ref − P)·F_c    (6)

where the voltage v_d is desired to be 1 p.u.
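A minimal discrete-time sketch of the active power loop of equation (6) is given below. The gains follow the values quoted later in the text (k_p = 0.97, T_s = 200 ms, so k_i = k_p / T_s); the simulation time step and the example power error are assumptions for illustration only:

```python
class PIController:
    """Discrete PI controller, F_c = k_p + k_i/s with k_i = k_p / T_s."""

    def __init__(self, k_p: float = 0.97, t_s: float = 0.2, dt: float = 1e-3):
        self.k_p = k_p
        self.k_i = k_p / t_s          # integrator gain from the time constant
        self.dt = dt                  # assumed simulation step (s)
        self.integral = 0.0

    def step(self, ref: float, actual: float) -> float:
        err = ref - actual            # e.g. P_ref - P
        self.integral += err * self.dt
        return self.k_p * err + self.k_i * self.integral

# With v_d held at 1 p.u., the controller output is directly the d-axis
# current reference i_d of equation (6).
ctrl = PIController()
i_d = ctrl.step(ref=1.0, actual=0.8)  # a 0.2 p.u. active power error
print(f"i_d reference = {i_d:.4f} p.u.")
```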
Figure-7 Block diagram of active power controller [5].

The transfer function F_c of the PI controller can be written as:

F_c = k_p + k_i / s = k_p (1 + 1 / (s·T_s))    (7)

From equation (7), if the proportional gain k_p and the integrator time constant T_s are known, the value of F_c can be calculated
[6]. The gain k_p = 0.97 and the time constant T_s = 200 ms were chosen by trial and error [6], as can be seen from Figure-9.
Figure-8 Block diagram of reactive power controller [5].

The same parameters are used to design the reactive power controller, and the current i_q is found from that controller. The
current equation can be written as:

i_q = (Q_ref − Q)·F_c    (8)

The currents are in the d-q system; they are then converted into αβ currents and finally into the 3-phase reference currents of the
externally controlled current source, as shown in Figure-3.
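Wiring the pieces together, the controller of Figure-4 reduces to a few lines per simulation step: two PI controllers produce i_d and i_q from the power errors (equations (6) and (8)), and the PLL angle converts them into the 3-phase reference currents of the externally controlled source. This usage sketch reuses PIController and dq_to_abc from the snippets above, with illustrative set-points that are not taken from the paper:

```python
p_ctrl, q_ctrl = PIController(), PIController()

theta = 0.0                      # PLL angle at this instant (rad)
P_ref, Q_ref = 1.0, 0.0          # assumed p.u. set-points at the PCC
P_meas, Q_meas = 0.8, 0.05       # assumed measured powers at the PCC

i_d = p_ctrl.step(P_ref, P_meas)             # equation (6)
i_q = q_ctrl.step(Q_ref, Q_meas)             # equation (8)
ia_ref, ib_ref, ic_ref = dq_to_abc(i_d, i_q, theta)
print(f"reference currents: {ia_ref:.4f}, {ib_ref:.4f}, {ic_ref:.4f}")
```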

Figure-9 Active power curve [5].

CONCLUSION
A full power converter based wind turbine is capable of providing active and reactive power support to the grid, and this can be
tested through the simulation model. To test the wind turbine against grid code requirements, a simple P-Q controller based
simulation model can be used instead of the whole model of the full power converter based wind turbine. The P-Q controller based
system is very simple in construction and represents the system in a simplified way using PI controllers, for which the current
rating is not a matter of concern. Using this controller, an active and reactive power controller can be modeled that replicates the
behavior of the wind turbine for the purposes of simulation work.

REFERENCES:
[1] Kristin Bruhn, Sofia Lorensson, Jennie Swenson, "Wind Power - a Renewable Energy Source in Time".
[2] S. Islam, K. Tan, "Optimum control strategies in energy conversion of PMSG wind turbine system without mechanical
sensors", IEEE Transactions on Energy Conversion, vol. 19, p. 392, June 2004.
[3] Skender Kabashi, Gazmend Kabashi, Ali Gashi, Kadri Kadriu, Sadik Bekteshi, Skender Ahmetaj, Kushtrim Podrimqaku,
"Modelling, Simulation and Analysis of Full Power Converter Wind Turbine with Permanent Synchronous Generator".
[4] Wu B., Lang Y., Zargari N., Kouro S., "Power Conversion and Control of Wind Energy Systems", Wiley-IEEE, 2011,
ISBN: 0470593652, 1118029003.
[5] Abdullah Al Mahfazur Rahman, Muhammad Usman Sabbir, "Grid Code Testing by Voltage Source Converter", LAP
LAMBERT Academic Publishing, ISBN: 978-3-659-21581-0, 2012.
[6] Amirnaser Yazdani, Reza Iravani, "Voltage-Sourced Converters in Power Systems", John Wiley & Sons, 2010.
[7] Ake Larsson, "Power Quality of Wind Turbine Generating Systems and their Interaction with the Grid".
[8] http://www.sma.de/en/solutions/medium-power-solutions/knowledgebase/sma-shifts-the-phase.html















Antimicrobial Activity of Three Ulva Species Collected from Some Egyptian
Mediterranean Seashores
A. Abdel-Khaliq1*, H. M. Hassan2, Mostafa E. Rateb2, Ola Hammouda3
1. Basic Science Department, Faculty of Oral and Dental Medicine, Nahda University (NUB), Beni-Suef, Egypt.
2. Pharmacognosy Department, Faculty of Pharmacy, Beni-Suef University, Beni-Suef, Egypt.
3. Botany and Microbiology Department, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt.
*Corresponding Author: A. Abdel-Khaliq (E-mail: abdoscience1987@yahoo.com)

Abstract: Members of the class Ulvophyceae, Ulva fasciata Delile, Ulva intestinalis Linnaeus and Ulva lactuca Linnaeus, were
collected from the tidal and intertidal zones of Mediterranean seashores during April 2011 and extracted in ethanol. Total protein
increased in the order Ulva lactuca < Ulva intestinalis < Ulva fasciata, with 17.6, 27 and 28.7%, respectively. Total carbohydrate
increased in the order Ulva fasciata < Ulva intestinalis < Ulva lactuca, with 47.93, 49.63 and 55.6%, respectively. Total ash
increased in the order Ulva intestinalis < Ulva fasciata < Ulva lactuca, with 14.6, 17 and 17.6%, respectively. Total moisture
increased in the order Ulva lactuca < Ulva fasciata < Ulva intestinalis, with 8.50, 9.28 and 9.93%, respectively. Total crude fat
increased in the order Ulva intestinalis < Ulva fasciata < Ulva lactuca, with 0.54, 0.60 and 0.7%, respectively. Phytochemical
screening showed the presence of carbohydrates and/or glycosides, sterols and/or triterpenes, traces of tannins, alkaloids, and both
free and combined flavonoids in all the marine algae under investigation; saponins, cardiac glycosides, anthraquinones and
volatile substances were absent in all the Ulva species examined. Antimicrobial activity of the Ulva species was tested against 10
Gram-positive bacteria, 10 Gram-negative bacteria and 10 unicellular and filamentous fungi. The antimicrobial activities were
expressed as zone of inhibition and minimum inhibitory concentration (MIC). Identification of compounds from the crude
extracts of the Ulva species was carried out by the LC/MS technique. Finally, the Ulva species could serve as a useful source of
new antimicrobial agents.
Keywords: Marine algae, Ulva fasciata, Ulva lactuca, Ulva intestinalis, minimum inhibitory concentration (MIC), LC/MS (liquid
chromatography/mass spectrometry), phytochemical screening.
INTRODUCTION
Seaweeds (marine algae) belong to a group of eukaryotes known as algae. Seaweeds are classified as Rhodophyta (red algae),
Phaeophyta (brown algae) or Chlorophyta (green algae) depending on their nutrients, pigments and chemical composition. Like
other plants, seaweeds contain various inorganic and organic substances which can benefit human health [1]. Seaweeds are
considered a source of bioactive compounds, as they are able to produce a great variety of secondary metabolites characterized by
a broad spectrum of biological activities. Compounds with antioxidant, antiviral, antifungal and antimicrobial activities have been
detected in brown, red and green algae [2]. The environment in which seaweeds grow is harsh, as they are exposed to a
combination of light and high oxygen concentrations. These factors can lead to the formation of free radicals and other strong
oxidizing agents, yet seaweeds seldom suffer any serious photodynamic damage during metabolism. This implies that seaweed
cells have some protective mechanisms and compounds [3].
Marine algae are a rich and varied source of bioactive natural products, so they have been studied as potential biocidal and
pharmaceutical agents [4]. There have been a number of reports of antibacterial activity from marine plants, and special attention
has been given to the antibacterial and antifungal activities of marine algae against several pathogens [5]. The antibacterial
activity of seaweeds is generally assayed using extracts in various organic solvents, for example acetone, methanol-toluene, ether
and chloroform-methanol [6]. The use of organic solvents always provides a higher efficiency in extracting compounds for
antimicrobial activity [7].
In recent years, several marine bacterial and protoctist forms have been confirmed as important sources of new compounds
potentially useful for the development of chemotherapeutic agents. Previous investigations of the production of antibiotic
substances by aquatic organisms point to these forms as a rich and varied source of antibacterial and antifungal agents. Over
15,000 novel compounds have been chemically determined. Focusing on bioproducts, recent trends in drug research from natural
sources suggest that algae are a promising group to furnish novel biochemically active substances [8]. Seaweeds, or marine
macroalgae, are renewable living resources which are also used as food and fertilizer in many parts of the world. Seaweeds are of
nutritional interest as they are a low-calorie food rich in vitamins, minerals and dietary fibres [9]. In addition, seaweeds are also
potentially good sources of proteins, polysaccharides and fibres [10]. The lipids, which are present in very small amounts, are
unsaturated and afford protection against cardiovascular diseases.
2. MATERIALS AND METHODS
2.1. Collection and identification of seaweeds
The studied algal species were collected from the intertidal region of the Mediterranean Sea shores between Ras El-Bar and
Baltim. The seaweeds were identified as Ulva lactuca, Ulva fasciata and Ulva intestinalis (green algae). The identification of the
investigated marine algae was kindly verified by Prof. Dr. Ibrahim Borie and Prof. Dr. Neveen Abdel-Raouf, Botany Department,
Faculty of Science, Beni-Suef University, Egypt.
2.2. Preparation of seaweed extracts
The collected seaweeds Ulva lactuca, Ulva fasciata and Ulva intestinalis were cleaned and the necrotic parts were removed. One
hundred grams of powdered seaweed were extracted successively with 200 mL of solvent (ethanol 70%) in a Soxhlet extractor
until the extract was clear. The extracts were evaporated to dryness under reduced pressure using a rotary vacuum evaporator, and
the resulting pasty extracts were stored in a refrigerator at 4°C for future use.
2.3. Collection of test microbial cultures
Twenty different bacterial cultures and ten fungal cultures were procured from the Biotechnological Research Center, Al-Azhar
University (for boys), Cairo, Egypt, and were used in the present study.
2.4. Determination of Antibacterial activity of Ulva species.
2.4.1. Bacterial inoculum preparation
The bacterial inoculum was prepared by inoculating a loopful of the test organism into 5 ml of nutrient broth and incubating at
37°C for 3-5 hours until a moderate turbidity developed. The turbidity was matched with the 0.5 McFarland standard and then
used for the determination of antibacterial activity.
2.4.2. Well diffusion method
The antibacterial activities of the investigated Ulva species were determined by the well diffusion method proposed by Rahman et
al. (2001) [11]. A solution of 50 mg/ml of each sample in DMSO was prepared for testing against bacteria. Centrifuged pellets of
bacteria from a 24 h old culture containing approximately 10^4-10^6 CFU (colony forming units) per ml were spread on the
surface of nutrient agar (tryptone 1%, yeast extract 0.5%, agar 1%, 100 ml of distilled water, pH 7.0) which had been autoclaved at
121°C for at least 20 min and then cooled to 45°C. Wells were created in the medium with the help of sterile metallic borers. The
activity was determined by measuring the diameter of the inhibition zone (in mm). 100 µl of the tested samples (100 mg/ml) were
loaded into the wells of the plates. All samples were prepared in dimethyl sulfoxide (DMSO), and DMSO alone was loaded as the
control. The plates were incubated at 37°C for 24 h and then examined for the formation of zones of inhibition. Each inhibition
zone was measured three times by caliper to get an average value, and the test was performed three times for each bacterial
culture. Penicillin G and streptomycin were used as standard antibacterial drugs.
2.4.3. Minimum inhibitory concentration
The minimum inhibitory concentration (MIC) of the investigated seaweeds against the bacterial isolates was tested in
Mueller-Hinton broth by the broth macrodilution method. The seaweed extracts were dissolved in 5% DMSO to obtain 128 mg/ml
stock solutions. 0.5 ml of stock solution was incorporated into 0.5 ml of Mueller-Hinton broth to obtain concentrations of 80, 40,
20, 10, 5, 2.50 and 1.25 mg/ml of the investigated seaweed extracts, and 50 µl of standardized suspension of the test organism was
transferred into each tube. The control tube contained only the organism, without any of the investigated Ulva extracts. The
culture tubes were incubated at 37°C for 24 hours. The lowest concentration which did not show any growth of the tested
organism after macroscopic evaluation was determined as the minimum inhibitory concentration (MIC).
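The readout of this macrodilution scheme is easy to state precisely: the extract is present at the listed two-fold series, and the MIC is the lowest concentration at which no visible growth occurs. The short sketch below encodes that rule; the growth pattern used here is illustrative, not a measured result from the paper:

```python
# Two-fold dilution series from the text (mg/ml) and a hypothetical
# growth pattern after incubation (True = visible growth).
concentrations = [80.0, 40.0, 20.0, 10.0, 5.0, 2.5, 1.25]
growth_observed = {80.0: False, 40.0: False, 20.0: False, 10.0: False,
                   5.0: True, 2.5: True, 1.25: True}

# MIC = lowest concentration showing no visible growth.
mic = min(c for c in concentrations if not growth_observed[c])
print(f"MIC = {mic} mg/ml")   # -> 10.0 mg/ml for this illustrative pattern
```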
2.5. Determination of Antifungal activity
2.5.1. Well diffusion method
The antifungal activities of the investigated Ulva species were determined by the well diffusion method proposed by Rahman et
al. (2001) [12]. Sabouraud dextrose agar plates were prepared as follows: a homogeneous mixture of glucose-peptone-agar
(40:10:15) was sterilized by autoclaving at 121°C for 20 min. The sterilized solution (25 ml) was poured into each sterilized Petri
dish in a laminar flow cabinet and left for 20 min to form the solidified Sabouraud dextrose agar plate. These plates were inverted
and kept at 30°C in an incubator to remove the moisture and checked for any contamination. Antifungal assay: each fungal strain
was grown in 5 ml Sabouraud dextrose broth (glucose:peptone, 40:10) for 3-4 days to achieve 10^5 CFU/ml. The fungal culture
(0.1 ml) was spread out uniformly on the Sabouraud dextrose agar plates. Small wells (4 mm x 20 mm) were then cut into the
plates with the help of a well cutter, and the bottoms of the wells were sealed with 0.8% soft agar to prevent the test sample from
flowing under the well. 100 µl of the tested samples (10 mg/ml) were loaded into the wells of the plates. All samples were
prepared in dimethyl sulfoxide (DMSO), and DMSO alone was loaded as the control. The plates were incubated at 30°C for 3-4
days and then examined for the formation of zones of inhibition. Each inhibition zone was measured three times by caliper to get
an average value, and the test was performed three times for each fungus. Amphotericin B was used as the standard antifungal
drug.
2.5.2. Minimum inhibitory concentration
The minimum inhibitory concentrations (MIC) of the investigated Ulva species extracts against the fungal isolates were tested in
Sabouraud's dextrose broth by the broth macrodilution method. The Ulva species extracts were dissolved in 5% DMSO to obtain
128 mg/ml stock solutions. 0.5 ml of stock solution was incorporated into 0.5 ml of Sabouraud's dextrose broth to obtain
concentrations of 64, 32, 16, 8, 4, 2 and 1 mg/ml of the Ulva species extracts, and 50 µl of standardized suspension of the test
organism was transferred into each tube. The control tube contained only the organism, without any seaweed extract. The culture
tubes were incubated at 28°C for 48 hours (yeasts) or 72 hours (molds). The lowest concentration which did not show any growth
of the tested organism after macroscopic evaluation was determined as the minimum inhibitory concentration (MIC).
2.6. Estimation of nutritional value of algal species
2.6.1. Protein estimation
The protein fraction (% of DW) was calculated from the elemental N determination using the nitrogen-protein conversion factor
of 6.25, according to AOAC (1995) [13].
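As a worked example of the conversion factor, a sample assaying 4.59% elemental N would be reported as 4.59 x 6.25 ≈ 28.7% protein, the value found here for Ulva fasciata (the 4.59% N figure is back-calculated for illustration, not a measured datum).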
2.6.2. Carbohydrates estimation
The total carbohydrate was estimated by the phenol-sulphuric acid method of Dubois et al. (1956) [14], using glucose as the
standard.
2.6.3. Lipid estimation
Lipids were extracted with a chloroform-methanol mixture (2:1 v/v). The lipids in chloroform were dried over anhydrous sodium
sulphate, after which the solvent was removed by heating at 80°C under vacuum, according to AOAC (2000) [15].
2.6.4. Moisture estimation
The moisture content was determined by the oven method at 105°C until constant weight was obtained.
2.6.5. Ash estimation
The ash content was obtained by heating the sample overnight in a furnace at 525°C and was determined gravimetrically.
2.7. Preliminary phytochemical tests
Preliminary phytochemical tests for the identification of alkaloids, anthraquinones, coumarins, flavonoids, saponins, tannins and
terpenes were carried out on all the extracts using standard qualitative methods that have been described previously [16-20].
2.8. Liquid chromatography / mass spectrometry (LC/MS)
High resolution mass spectrometric data were obtained using a Thermo Instruments MS system (LTQ XL/LTQ Orbitrap
Discovery) coupled to a Thermo Instruments HPLC system (Accela PDA detector, Accela PDA autosampler and Accela pump).
The following conditions were applied: capillary voltage 45 V, capillary temperature 260°C, auxiliary gas flow rate 10-20
arbitrary units, sheath gas flow rate 40-50 arbitrary units, spray voltage 4.5 kV, mass range 100-2000 amu (maximum resolution
30,000). The exact mass obtained for each eluted peak was used to deduce the possible molecular formulae for that mass, and
these formulae were searched in the Dictionary of Natural Products (CRC Press, online version) for matching chemical structures.
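The formula-matching step described above can be illustrated with a short sketch: compute the monoisotopic mass of a candidate formula and accept it if it agrees with the observed mass within a small tolerance. The 5 ppm window is an assumption for illustration; the element masses are standard monoisotopic values:

```python
# Standard monoisotopic masses (u) for the elements used in Tables 3.9-3.11.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "Na": 22.98976928}

def mono_mass(formula: dict) -> float:
    """Monoisotopic mass of a formula given as {element: count}."""
    return sum(MONO[el] * n for el, n in formula.items())

def matches(observed: float, formula: dict, ppm: float = 5.0) -> bool:
    calc = mono_mass(formula)
    return abs(observed - calc) / calc * 1e6 <= ppm

# Adenosine, C10H13N5O4, one of the compounds identified in Ulva fasciata:
adenosine = {"C": 10, "H": 13, "N": 5, "O": 4}
print(f"{mono_mass(adenosine):.4f}")      # -> 267.0968 (neutral molecule)
# The 268.1044 reported in Table 3.9 is consistent with the protonated
# ion [M+H]+: 267.0968 + 1.0073 = 268.1041.
print(matches(267.0968, adenosine))       # -> True
```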
3. RESULTS AND DISCUSSION
3.1. Identification of the marine Algae.
The seaweeds were identified as Ulva lactuca, Ulva fasciata and Ulva intestinalis (green algae: Chlorophyta). The identification of
the investigated marine algae was kindly verified by Dr. Ibrahim Borai Ibrahim, Professor of Phycology, Botany & Microbiology
Department, Faculty of Science, Beni-Suef University, Egypt, and Prof. Dr. Nevein Abdel-Rouf Mohamed, Professor of
Phycology and Head of the Botany & Microbiology Department, Faculty of Science, Beni-Suef University.
3.2. Antimicrobial activity.
No zone of inhibition was seen in the DMSO control, while the positive control Ampicillin showed zones of inhibition ranging
from 28.7 ± 0.2 mm down to 16.4 ± 0.3 mm against the Gram-positive bacterial pathogens.
3.2.1 Antimicrobial activity of Ulva lactuca
3.2.1.1 Antimicrobial activity of Ulva lactuca against Gram +ve bacteria
Ulva lactuca showed the highest mean zone of inhibition (22.0 ± 0.8 mm) against the Gram-positive bacterium Staphylococcus
aureus, followed by Staphylococcus saprophyticus (19.8 ± 0.3 mm), Streptococcus mutans (17.8 ± 0.9 mm), Bacillus subtilis
(17.5 ± 0.3 mm), Streptococcus pyogenes (14.2 ± 0.5 mm), Bacillus cereus (12.6 ± 0.1 mm) and Staphylococcus epidermidis
(10.5 ± 0.4 mm). The Gram-positive bacteria Streptococcus pneumoniae, Enterococcus faecalis and Corynebacterium diphtheriae
were highly resistant to the Ulva lactuca crude extract.
3.2.1.2 Antimicrobial activity of Ulva lactuca against Gram -ve bacteria
Concerning the extract of Ulva lactuca against Gram-negative bacteria, the maximum zone of inhibition was recorded against
Salmonella typhimurium (22.1 ± 0.5 mm), followed by Serratia marcescens (20.8 ± 0.6 mm), Escherichia coli (20.2 ± 0.2 mm)
and Neisseria meningitidis (15.9 ± 0.6 mm). Ulva lactuca showed its lowest mean zones of inhibition against Klebsiella
pneumoniae (12.2 ± 0.7 mm) and Haemophilus influenzae (13.2 ± 0.8 mm). The Gram-negative bacteria Pseudomonas
aeruginosa, Proteus vulgaris, Yersinia enterocolitica and Shigella flexneri were highly resistant to the Ulva lactuca crude extract.
3.2.1.3 Antimicrobial activity of Ulva lactuca against Unicellular & Filamentous fungi
Ulva lactuca showed the highest mean zone of inhibition (23.2 ± 0.3 mm) against the pathogenic fungus Geotricum candidum,
followed by Candida albicans (22.5 ± 0.7 mm), Aspergillus clavatus (21.6 ± 0.7 mm), Aspergillus fumigatus (19.9 ± 0.8 mm),
Rhizopus oryzae (19.7 ± 0.7 mm) and Mucor circinelloides (15.8 ± 0.3 mm). Ulva lactuca showed its lowest mean zone of
inhibition against Penicillium marneffei (10.3 ± 0.1 mm). The pathogenic fungi Syncephalastrum racemosum, Absidia
corymbifera and Stachybotrys chartarum were highly resistant to the Ulva lactuca crude extract.
3.2.2 Antimicrobial activity of Ulva intestinalis
3.2.2.1 Antimicrobial activity of Ulva intestinalis against Gram +ve bacteria
Ulva intestinalis showed the highest mean zone of inhibition (17.9 ± 0.3 mm) against the Gram-positive bacterium
Staphylococcus saprophyticus, followed by Streptococcus mutans (16.5 ± 0.1 mm), Bacillus subtilis (15.5 ± 0.7 mm),
Streptococcus pyogenes (11.8 ± 0.1 mm), Bacillus cereus (10.9 ± 0.2 mm) and Staphylococcus epidermidis (8.7 ± 0.2 mm). The
Gram-positive bacteria Streptococcus pneumoniae, Enterococcus faecalis and Corynebacterium diphtheriae were highly resistant
to the Ulva intestinalis crude extract.
3.2.2.2 Antimicrobial activity of Ulva intestinalis against Gram -ve bacteria
Ulva intestinalis showed the highest activity against Salmonella typhimurium (20.8 ± 0.9 mm), followed by Serratia marcescens
(18.9 ± 0.5 mm), Escherichia coli (18.2 ± 0.9 mm), Neisseria meningitidis (14.2 ± 0.5 mm), Haemophilus influenzae
(11.2 ± 0.4 mm) and Klebsiella pneumoniae (10.2 ± 0.1 mm). The Gram-negative bacteria Pseudomonas aeruginosa, Proteus
vulgaris, Yersinia enterocolitica and Shigella flexneri were highly resistant to the Ulva intestinalis crude extract.
3.2.2.3 Antimicrobial activity of Ulva intestinalis against Unicellular & Filamentous fungi
Ulva intestinalis showed the highest mean zone of inhibition (21.7 ± 0.1 mm) against the pathogenic fungus Geotricum candidum,
followed by Aspergillus clavatus (20.1 ± 0.3 mm), Candida albicans (19.3 ± 0.5 mm), Aspergillus fumigatus (17.8 ± 0.7 mm),
Rhizopus oryzae (16.4 ± 0.5 mm), Mucor circinelloides (13.7 ± 0.2 mm) and Penicillium marneffei (11.5 ± 0.8 mm). The
pathogenic fungi Syncephalastrum racemosum, Absidia corymbifera and Stachybotrys chartarum were highly resistant to the Ulva
intestinalis crude extract.
3.2.3 Antimicrobial activity of Ulva fasciata
3.2.3.1 Antimicrobial activity of Ulva fasciata against Gram +ve bacteria
Ulva fasciata showed the highest mean zone of inhibition (22.2 ± 0.6 mm) against the Gram-positive bacterium Staphylococcus
aureus, followed by Staphylococcus saprophyticus (19.6 ± 0.4 mm), Bacillus subtilis (17.9 ± 0.9 mm), Streptococcus mutans
(17.9 ± 0.1 mm), Streptococcus pyogenes (14.7 ± 0.3 mm), Bacillus cereus (12.9 ± 0.1 mm) and Staphylococcus epidermidis
(10.8 ± 0.1 mm). The Gram-positive bacteria Streptococcus pneumoniae, Enterococcus faecalis and Corynebacterium diphtheriae
were highly resistant to the Ulva fasciata crude extract.
3.2.3.2 Antimicrobial activity of Ulva fasciata against Gram -ve bacteria
The maximum zone of inhibition for the Ulva fasciata crude extract was recorded against Salmonella typhimurium
(22.4 ± 0.5 mm), followed by Serratia marcescens (21.2 ± 0.6 mm), Escherichia coli (20.6 ± 0.5 mm), Neisseria meningitidis
(16.2 ± 0.3 mm), Haemophilus influenzae (13.7 ± 0.5 mm) and Klebsiella pneumoniae (12.6 ± 0.7 mm). Pseudomonas aeruginosa,
Proteus vulgaris, Yersinia enterocolitica and Shigella flexneri were highly resistant to the Ulva fasciata crude extract.
3.2.3.3 Antimicrobial activity of Ulva fasciata against Unicellular & Filamentous fungi
Ulva fasciata showed the highest mean zone of inhibition (23.4 ± 0.6 mm) against the pathogenic fungus Geotricum candidum,
followed by Candida albicans (22.9 ± 0.4 mm), Aspergillus clavatus (21.1 ± 0.7 mm), Aspergillus fumigatus (20.1 ± 0.6 mm),
Rhizopus oryzae (20.1 ± 0.8 mm), Mucor circinelloides (16.4 ± 0.5 mm) and Penicillium marneffei (10.7 ± 0.3 mm). The
pathogenic fungi Syncephalastrum racemosum, Absidia corymbifera and Stachybotrys chartarum were highly resistant to the Ulva
fasciata crude extract.
3.3 Minimum Inhibitory Concentration (MIC)
The minimum inhibitory concentrations of the reference antibiotic (Ampicillin) ranged from 0.03 to 15.63 mg/ml. Ampicillin was
highly active against Staphylococcus epidermidis, Staphylococcus aureus, Staphylococcus saprophyticus, Bacillus cereus,
Bacillus subtilis, Streptococcus pneumoniae, Streptococcus pyogenes, Streptococcus mutans and Enterococcus faecalis (0.03,
0.06, 0.06, 0.06, 0.12, 0.25, 0.98 & 1.95 mg/ml, respectively) and showed less activity against Corynebacterium diphtheriae
(15.63 mg/ml).
3.3.1. MIC of Ulva lactuca
3.3.1.1 MIC of Ulva lactuca against Gram +ve bacteria
The MIC values of Ulva lactuca against the Gram-positive bacteria ranged between 0.98 mg/ml and 250 mg/ml. The lowest MIC
(0.98 mg/ml) was recorded against Staphylococcus aureus, followed by Staphylococcus saprophyticus (3.9 mg/ml), Streptococcus
mutans and Bacillus subtilis with the same MIC (7.81 mg/ml), Streptococcus pyogenes (31.25 mg/ml), Bacillus cereus
(125 mg/ml) and Staphylococcus epidermidis (250 mg/ml).
3.3.1.2 MIC of Ulva lactuca against Gram -ve bacteria
The MIC values of Ulva lactuca against the Gram-negative bacteria ranged between 0.98 mg/ml and 125 mg/ml. The lowest MIC
(0.98 mg/ml) was recorded against Salmonella typhimurium, followed by Escherichia coli and Serratia marcescens with the same
MIC (1.95 mg/ml), Neisseria meningitidis (15.63 mg/ml), Haemophilus influenzae (62.5 mg/ml) and Klebsiella pneumoniae
(125 mg/ml).
3.3.1.3 MIC of Ulva lactuca against Unicellular & Filamentous fungi
The MIC values of Ulva lactuca against the unicellular and filamentous fungi ranged between 0.49 mg/ml and 250 mg/ml. The
lowest MIC (0.49 mg/ml) was recorded against Geotricum candidum, followed by Candida albicans (0.98 mg/ml), Aspergillus
clavatus (1.95 mg/ml), Aspergillus fumigatus and Rhizopus oryzae with the same MIC (3.9 mg/ml), Mucor circinelloides
(15.63 mg/ml) and Penicillium marneffei (250 mg/ml).
3.3.2. MIC of Ulva intestinalis
3.3.2.1 MIC of Ulva intestinalis against Gram +ve bacteria
The MIC values of Ulva intestinalis against the Gram-positive bacteria ranged between 3.9 mg/ml and 500 mg/ml. The lowest
MIC (3.9 mg/ml) was recorded against Staphylococcus aureus, followed by Staphylococcus saprophyticus and Streptococcus
mutans with the same MIC (7.81 mg/ml), Bacillus subtilis (15.63 mg/ml), Streptococcus pyogenes (125 mg/ml), Bacillus cereus
(250 mg/ml) and Staphylococcus epidermidis (500 mg/ml).
3.3.2.2 MIC of Ulva intestinalis against Gram -ve bacteria
The MIC values of Ulva intestinalis against the Gram-negative bacteria ranged between 1.95 mg/ml and 250 mg/ml. The lowest
MIC (1.95 mg/ml) was recorded against Salmonella typhimurium, followed by Serratia marcescens (3.9 mg/ml), Escherichia coli
(7.81 mg/ml), Neisseria meningitidis (31.25 mg/ml), Haemophilus influenzae (125 mg/ml) and Klebsiella pneumoniae
(250 mg/ml).

3.3.2.3 MIC of Ulva intestinalis against Unicellular & Filamentous fungi
The MIC values of Ulva intestinalis against the fungi ranged between 0.95 mg/ml and 250 mg/ml. The lowest MIC (0.95 mg/ml)
was recorded against Geotricum candidum, followed by Candida albicans and Aspergillus clavatus with the same MIC
(3.9 mg/ml), Aspergillus fumigatus and Rhizopus oryzae with the same MIC (7.81 mg/ml), Mucor circinelloides (62.5 mg/ml) and
Penicillium marneffei (250 mg/ml).
3.3.3. MIC of Ulva fasciata
3.3.3.1 MIC of Ulva fasciata against Gram +ve bacteria
The lowest concentrations of Ulva fasciata crude extract inhibiting visible growth of the Gram-positive bacteria ranged between
1.95 mg/ml and 250 mg/ml. The lowest MIC (1.95 mg/ml) was recorded against Staphylococcus aureus, followed by
Staphylococcus saprophyticus (3.9 mg/ml), Streptococcus mutans (7.81 mg/ml), Bacillus subtilis (15.63 mg/ml), Streptococcus
pyogenes (62.5 mg/ml), Bacillus cereus (125 mg/ml) and Staphylococcus epidermidis (250 mg/ml).
3.3.3.2 MIC of Ulva fasciata against Gram -ve bacteria
The MIC values of Ulva fasciata against the Gram-negative bacteria ranged between 0.98 mg/ml and 250 mg/ml (Table 3.5). The
lowest MIC (0.98 mg/ml) was recorded against Salmonella typhimurium, followed by Escherichia coli (1.95 mg/ml), Serratia
marcescens (3.9 mg/ml), Neisseria meningitidis (15.63 mg/ml), Haemophilus influenzae (125 mg/ml) and Klebsiella pneumoniae
(250 mg/ml).
3.3.3.3 MIC of Ulva fasciata against Unicellular & Filamentous fungi
Ulva fasciata showed MIC values ranging between 0.98 mg/ml and 250 mg/ml. The lowest MIC (0.98 mg/ml) was recorded
against Geotricum candidum, followed by Candida albicans and Aspergillus clavatus with the same MIC (1.95 mg/ml),
Aspergillus fumigatus (3.9 mg/ml), Rhizopus oryzae (7.81 mg/ml), Mucor circinelloides (31.25 mg/ml) and Penicillium marneffei
(250 mg/ml).
Table (3.1): Anti-bacterial activity of Ulva species (Gram positive).

Inhibition zone diameter (mm/sample)
Organism | AM | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Streptococcus pneumoniae | 23.8 ± 0.2 | NA | NA | NA
Streptococcus pyogenes | 22.7 ± 0.2 | 14.2 ± 0.5 | 11.8 ± 0.1 | 14.7 ± 0.3
Streptococcus mutans | 21.6 ± 0.1 | 17.8 ± 0.9 | 16.5 ± 0.1 | 17.9 ± 0.1
Bacillus cereus | 27.9 ± 0.1 | 12.6 ± 0.1 | 10.9 ± 0.2 | 12.9 ± 0.1
Bacillus subtilis | 26.4 ± 0.3 | 17.5 ± 0.3 | 15.5 ± 0.7 | 17.9 ± 0.5
Enterococcus faecalis | 20.3 ± 0.3 | NA | NA | NA
Corynebacterium diphtheriae | 16.4 ± 0.3 | NA | NA | NA
Staphylococcus aureus | 28.3 ± 0.1 | 22.0 ± 0.8 | 20.1 ± 0.4 | 22.2 ± 0.6
Staphylococcus epidermidis | 28.7 ± 0.2 | 10.5 ± 0.4 | 8.7 ± 0.2 | 10.8 ± 0.1
Staphylococcus saprophyticus | 28.4 ± 0.2 | 19.3 ± 0.3 | 17.9 ± 0.3 | 19.6 ± 0.4

Mean zone of inhibition in mm ± standard deviation beyond the well diameter (6 mm), produced against a range of clinically
pathogenic microorganisms using a 50 mg/ml concentration of the tested sample; agar well diffusion technique, well diameter
6.0 mm (100 µl tested). NA: no activity; AM: reference antibiotic Ampicillin (30 µg/disk).
Table (3.2): Anti-bacterial activity of Ulva species (Gram negative).

Inhibition zone diameter (mm/sample)
Organism | GT | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Pseudomonas aeruginosa | 17.3 ± 0.1 | NA | NA | NA
Escherichia coli | 19.9 ± 0.3 | 20.2 ± 0.2 | 18.2 ± 0.9 | 20.6 ± 0.5
Salmonella typhimurium | 27.3 ± 0.7 | 22.1 ± 0.5 | 20.8 ± 0.9 | 22.4 ± 0.9
Proteus vulgaris | 20.4 ± 0.6 | NA | NA | NA
Klebsiella pneumoniae | 29.3 ± 0.3 | 12.2 ± 0.7 | 10.2 ± 0.1 | 12.6 ± 0.7
Yersinia enterocolitica | 18.7 ± 0.2 | NA | NA | NA
Serratia marcescens | 19.3 ± 0.2 | 20.8 ± 0.6 | 18.9 ± 0.5 | 21.2 ± 0.6
Neisseria meningitidis | 17.6 ± 0.1 | 15.9 ± 0.6 | 14.2 ± 0.5 | 16.2 ± 0.3
Haemophilus influenzae | 21.4 ± 0.1 | 13.2 ± 0.8 | 11.2 ± 0.4 | 13.7 ± 0.5
Shigella flexneri | 23.7 ± 0.3 | NA | NA | NA

Mean zone of inhibition in mm ± standard deviation beyond the well diameter (6 mm), produced against a range of clinically
pathogenic microorganisms using a 50 mg/ml concentration of the tested sample; agar well diffusion technique, well diameter
6.0 mm (100 µl tested). NA: no activity; GT: reference antibiotic Gentamicin (30 µg/disk).

Table (3.3): Anti-fungal activity of Ulva species.
Inhibition zone diameter (mm/sample)
Organism | AMP | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Penicillium marneffei | 20.6 ± 0.2 | 10.3 ± 0.1 | 11.5 ± 0.8 | 10.7 ± 0.3
Aspergillus clavatus | 22.4 ± 0.1 | 21.6 ± 0.7 | 20.1 ± 0.3 | 22.1 ± 0.7
Aspergillus fumigatus | 23.7 ± 0.1 | 19.9 ± 0.8 | 17.8 ± 0.7 | 20.1 ± 0.6
Syncephalastrum racemosum | 19.7 ± 0.2 | NA | NA | NA
Mucor circinelloides | 17.9 ± 0.1 | 15.8 ± 0.3 | 13.7 ± 0.2 | 16.4 ± 0.5
Absidia corymbifera | 19.8 ± 0.3 | NA | NA | NA
Rhizopus oryzae | 18.3 ± 0.4 | 19.7 ± 0.7 | 16.4 ± 0.5 | 20.1 ± 0.8
Geotricum candidum | 28.7 ± 0.2 | 23.2 ± 0.3 | 21.7 ± 0.1 | 23.4 ± 0.6
Candida albicans | 25.4 ± 0.1 | 22.5 ± 0.7 | 19.3 ± 0.5 | 22.9 ± 0.4
Stachybotrys chartarum | 18.9 ± 0.3 | NA | NA | NA

Mean zone of inhibition in mm ± standard deviation beyond the well diameter (6 mm), produced against a range of clinically
pathogenic microorganisms using a 50 mg/ml concentration of the tested sample; agar well diffusion technique, well diameter
6.0 mm (100 µl tested). NA: no activity; AMP: reference antibiotic Amphotericin B (30 µg/disk).

Table (3.4): MIC of Ulva species crude extract against Gram positive bacteria.
MIC (mg/ml)
Organism | AM | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Streptococcus pneumoniae | 0.25 | NA | NA | NA
Streptococcus pyogenes | 0.98 | 31.25 | 125 | 62.5
Streptococcus mutans | 1.95 | 7.81 | 7.81 | 7.81
Bacillus cereus | 0.06 | 125 | 250 | 125
Bacillus subtilis | 0.12 | 7.81 | 15.63 | 15.63
Enterococcus faecalis | 1.95 | NA | NA | NA
Corynebacterium diphtheriae | 15.63 | NA | NA | NA
Staphylococcus aureus | 0.06 | 0.98 | 3.9 | 1.95
Staphylococcus epidermidis | 0.03 | 250 | 500 | 250
Staphylococcus saprophyticus | 0.06 | 3.9 | 7.81 | 3.9

MIC values in mg/ml, determined by the broth macrodilution method. NA: no activity; AM: reference antibiotic Ampicillin.

Table (3.5): MIC of Ulva species crude extract against Gram negative bacteria.
MIC (mg/ml)
Organism | GT | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Pseudomonas aeruginosa | 7.81 | NA | NA | NA
Escherichia coli | 3.9 | 1.95 | 7.81 | 1.95
Salmonella typhimurium | 0.06 | 0.98 | 1.95 | 0.98
Proteus vulgaris | 1.95 | NA | NA | NA
Klebsiella pneumoniae | 0.015 | 125 | 250 | 250
Yersinia enterocolitica | 3.9 | NA | NA | NA
Serratia marcescens | 3.9 | 1.95 | 3.9 | 3.9
Neisseria meningitidis | 7.81 | 15.63 | 31.25 | 15.63
Haemophilus influenzae | 0.98 | 62.5 | 125 | 125
Shigella flexneri | 0.25 | NA | NA | NA

MIC values in mg/ml, determined by the broth macrodilution method. NA: no activity; GT: reference antibiotic Gentamicin.

Table (3.6): MIC of Ulva species crude extract against Unicellular & Filamentous fungi.

MIC (mg/ml)
Organism | AMP | Ulva lactuca | Ulva intestinalis | Ulva fasciata
Penicillium marneffei | 1.95 | 250 | 250 | 250
Aspergillus clavatus | 0.98 | 1.95 | 3.9 | 1.95
Aspergillus fumigatus | 0.49 | 3.9 | 7.81 | 3.9
Syncephalastrum racemosum | 3.9 | NA | NA | NA
Mucor circinelloides | 7.81 | 15.63 | 62.5 | 31.25
Absidia corymbifera | 3.9 | NA | NA | NA
Rhizopus oryzae | 7.81 | 3.9 | 7.81 | 7.81
Geotricum candidum | 0.03 | 0.49 | 0.95 | 0.98
Candida albicans | 0.12 | 0.98 | 3.9 | 1.95
Stachybotrys chartarum | 3.9 | NA | NA | NA

MIC values in mg/ml, determined by the broth macrodilution method. NA: no activity; AMP: reference antibiotic
Amphotericin B.
3.4. Phytochemical screening of the collected marine algae
The qualitative phytochemical screening of the crude powder of the Ulva species was carried out in order to assess the presence of
bioactive compounds which might have antibacterial potency. The presence of alkaloids, flavonoids, tannins, steroids and
saponins, and the absence of anthraquinones, crystalline sublimate, steam-volatile substances, carbohydrates/glycosides and
cardiac glycosides, were investigated (Table 3.7). Alkaloids and flavonoids were present in moderate amounts (++) in the three
marine algae. Sterols and triterpenes were present in higher amounts (+++), while carbohydrates and tannins were present in low
amounts (+). The presence of flavonoids and alkaloids in most of the tested algae is interesting because of their possible use as
natural additives, following a growing tendency to replace synthetic antioxidants and antimicrobials with natural ones [21]. Our
results agree with previous findings which showed the presence of flavonoids and alkaloids in most marine algae [22-24].

Table (3.7): Phytochemical screening of Ulva species.
Test | Ulva fasciata | Ulva lactuca | Ulva intestinalis
Crystalline sublimate | - | - | -
Steam volatile substances | - | - | -
Carbohydrates and/or glycosides | + | + | +
Tannins | + | + | +
Flavonoids (aglycones) | ++ | ++ | ++
Flavonoids (glycosides) | + | + | +
Saponins | - | - | -
Sterols and/or triterpenes | +++ | +++ | +++
Alkaloids | ++ | ++ | ++
Anthraquinones (aglycones) | - | - | -
Anthraquinones (combined) | - | - | -
Cardiac glycosides (Keller-Kiliani, Baljet, Kedde) | - | - | -

(+++): present in higher amounts; (++): present in moderate amounts; (+): present in lower amounts; (-): absent.
3.5. Nutritional value of the collected marine algae
In the present study, a comparative nutritive value screening was also carried out on the investigated marine algae (Ulva fasciata,
Ulva lactuca and Ulva intestinalis) from the Ras El-Bar, Baltim and Gamasa seashores. As depicted in Table (3.8), total protein
increased in the order Ulva lactuca < Ulva intestinalis < Ulva fasciata, with 17.6, 27 and 28.7%, respectively. Total carbohydrate
increased in the order Ulva fasciata < Ulva intestinalis < Ulva lactuca, with 44.2, 47.93 and 55.6%, respectively. Total ash
increased in the order Ulva intestinalis < Ulva fasciata < Ulva lactuca, with 14.6, 17 and 17.6%, respectively. Total moisture
increased in the order Ulva lactuca < Ulva fasciata < Ulva intestinalis, with 8.50, 9.28 and 9.93%, respectively. Total crude fat
increased in the order Ulva intestinalis < Ulva fasciata < Ulva lactuca, with 0.54, 0.60 and 0.7%, respectively.


Table (3.8): Nutritive value of Ulva species.
Type of analysis | Ulva fasciata | Ulva lactuca | Ulva intestinalis
Total protein (as % of dry weight) | 28.7 | 17.6 | 27
Total crude fat (as % of dry weight) | 0.6 | 0.7 | 0.54
Total ash (as % of dry weight) | 17 | 17.6 | 14.6
Total carbohydrates (as % of dry weight, by difference) | 44.2 | 55.6 | 47.93
Total moisture (as % of fresh weight) | 9.28 | 8.50 | 9.93
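As a consistency check on the "by difference" carbohydrate column: for Ulva fasciata, 100 - (28.7 + 0.6 + 17 + 9.28) ≈ 44.4%, close to the reported 44.2%; the small gap presumably reflects rounding and the moisture being expressed on a fresh-weight rather than dry-weight basis.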

3.6. LC/MS of the collected marine algae
The combination of high-performance liquid chromatography and mass spectrometry (LC/MS) has had a significant impact on
drug development over the past decade. Continual improvements in LC/MS interface technologies, combined with powerful
features for qualitative and quantitative structure analysis, have widened its scope of application. These improvements coincided
with breakthroughs in combinatorial chemistry and molecular biology, and an overall industry trend of accelerated development.
New technologies have created a situation where the rate of sample generation far exceeds the rate of sample analysis. As a result,
new paradigms for the analysis of drugs and related substances have been developed. The growth in LC/MS applications has been
extensive, with retention time and molecular weight emerging as essential analytical features from drug target to product.
LC/MS-based methodologies involving automation, predictive or surrogate models, and open access systems have become a
permanent fixture in the drug development landscape. An iterative cycle of "what is it?" and "how much is there?" continues to
fuel the tremendous growth of LC/MS in the pharmaceutical industry, and LC/MS has become widely accepted as an integral part
of the drug development process.
3.6.1. LC/MS of Ulva fasciata
In the present study, the data recorded in Table (3.9) and Figs (3.1-3.11) show that twenty-eight compounds could be detected in
the crude extract of Ulva fasciata. These compounds were compared to previously isolated compounds using different library
databases. The identified compounds were found to be a hexahydroxyflavone acetyl glucopyranoside, Formycin A, adenosine,
5'-deoxyguanosine and an n-alkenylhydroquinol dimethyl ether.
3.6.2. LC/MS of Ulva lactuca
The data recorded in Table (3.10) and Figs (3.12-3.14) show that only six compounds could be detected in the crude extract of
Ulva lactuca. These compounds were compared to previously isolated compounds using different library databases; none
matched any previously isolated compound, so they may be novel compounds.
3.6.3. LC/MS of Ulva intestinalis
Only nine compounds could be identified in the crude extract of Ulva intestinalis, as shown in Table (3.11) and Figs (3.15-3.16).
These compounds were compared to previously isolated compounds using different library databases (Dictionary of Natural
Products, online version, and AntiMarin 2012). The only identified compound was an n-alkenylhydroquinol dimethyl ether.

Table (3.9): LC/MS data of Ulva fasciata crude extract with the suspected formulae and suggested identified compounds.
No. | Rt | MWt | Cf | Identification
1 | 4.32 | 507.1147 | C24H18O9N4 | No hits
  |      |          | C23H22O13 | hexahydroxyflavone acetyl glucopyranoside
2 | 6.22 | 236.1494 | C10H21O5N | No hits
  |      | 471.2911 | C21H38O6N6; C20H42O10N2 | No hits
3 | 8.50 | 236.1494 | C10H21O5N | No hits
  |      | 333.1294 | C13H20O8N2 | Shinorine
4 | 9.56 | 268.1044 | C10H13O4N5 | Formycin A, adenosine, 5'-deoxyguanosine
5 | 12.16 | 204.0867 | C8H13O5N | No hits
  |       | 384.1500 | C14H25O11N; C15H21O7N5 | No hits
  |       | 477.1578 | C16H25O11N6 | No hits
  |       | 546.2031 | C21H31O12N5 | No hits
6 | 16.40 | 376.2330 | C18H33O7N | No hits
7 | 20.40 | 236.1493 | C10H21O5N | No hits
  |       | 534.3804 | C27H47O4N7 | No hits
  |       | 666.4223 | C25H59O13N7 | No hits
8 | 25.45 | 236.1481 | C10H22O5N | No hits
  |       | 507.2537 | C22H38O11N2 | No hits
  |       | 593.5108 | C33H64O3N6 | No hits
  |       | 734.5291 | C39H73O8N3Na; C40H69O4N7Na | No hits
9 | 26.89 | 474.3774 | C25H49O2N5Na; C22H47O4N7 | No hits
10 | 28.51 | 474.3806 | C35H62O3 | n-alkenyl hydroquinol dimethyl ether
   |       | 581.5161 | C37H64ON4 | No hits
   |       | 722.5358 | C45H71O6N; C44H69O2N5Na | No hits
Rt: retention time; MWt: molecular weight; Cf: compound formula.
Table (3.10): LC/MS data of Ulva lactuca crude extract with their suspected formulae and suggested identified compounds.

No.  Rt     MWt        Cf              Identification
1    16.16  341.0514   C15H8O6N4       No hits
                       C14H12O10       No hits
            363.0334   C15H8O6N4Na     No hits
2    26.91  677.3722   C31H58O14Na     No hits
                       C29H52O12N6     No hits
                       C44H50O3N2Na    No hits
Rt: retention time, MWt: molecular weight, Cf: compound formula.
Table (3.11): LC/MS data of Ulva intestinalis crude extract with their suspected formulae and suggested identified compounds.

No.  Rt            MWt        Cf               Identification
1    24.96-29.72   553.4584   C35H62O3Na       n-Alkenylhydroquinol dimethyl ether
2                  609.2718   C22H38O9N10Na    No hits
                              C38H38O4N2Na     No hits
3                  734.5917   C43H77O3N5Na     No hits
                              C44H79O7N        No hits
4                  941.6046   C47H82O10N8Na    No hits
                              C60H80O7N2       No hits
                              C48H84O14N4      No hits
                              C46H86O14N4Na    No hits
Rt: retention time, MWt: molecular weight, Cf: compound formula.




Figure (3.1) LC/MS of Ulva fasciata crude extract

Figure (3.2) HRESIMS spectrum of compound 1 (Ulva fasciata)



Figure (3.3) HRESIMS spectrum of compound 2 (Ulva fasciata)

Figure (3.4) HRESIMS spectrum of compound 3 (Ulva fasciata)



Figure (3.5) HRESIMS spectrum of compound 4 (Ulva fasciata)

Figure (3.6) HRESIMS spectrum of compound 5 (Ulva fasciata)




Figure (3.7) HRESIMS spectrum of compound 6 (Ulva fasciata)

Figure (3.8) HRESIMS spectrum of compound 7 (Ulva fasciata)




Figure (3.9) HRESIMS spectrum of compound 8 (Ulva fasciata)

Figure (3.10) HRESIMS spectrum of compound 9 (Ulva fasciata)







Figure (3.11) HRESIMS spectrum of compound 10 (Ulva fasciata)

Figure (3.12) LC/MS of Ulva lactuca crude extract






Figure (3.13) HRESIMS spectrum of compound 1 (Ulva lactuca)

Figure (3.14) HRESIMS spectrum of compound 2 (Ulva lactuca)






Figure (3.15) LC/MS of Ulva intestinalis crude extract

Figure (3.16) HRESIMS spectrum of compound 2 (Ulva intestinalis)




ACKNOWLEDGMENT
I would like to express my deepest gratitude and appreciation to Dr. Ibrahem Borie Ibrahem and Dr. Nevein Abdel-Raouf Mohammed, Professors of Phycology, Faculty of Science, Beni-Suef University, for their continuous help, careful guidance, and helpful discussion.

CONCLUSION
Our results indicated that these species of seaweeds collected from the Mediterranean Sea shores showed a variety of antimicrobial activities, which makes them interesting subjects for natural-product screening programs. This ability is not restricted to one order or division within the macroalgae; all of them offer opportunities for producing new types of bioactive compounds.

REFERENCES:
[1] Kuda T., Taniguchi E., Nishizawa M., Araki Y. (2002). Fate of water-soluble polysaccharides in dried Chorda filum, a brown alga, during water washing. Journal of Food Composition and Analysis, 15, 3-9.
[2] Bansemir A., Blume M., Schroder S., Lindequist U. (2006). Screening of cultivated seaweeds for antibacterial activity against fish pathogenic bacteria. Aquaculture, 252, 79-84.
[3] Chew Y.L., Lim Y.Y., Omar M., Khoo K.S. (2008). Antioxidant activity of three edible seaweeds from two areas in South East Asia. LWT - Food Science and Technology, 41, 1067-1072.
[4] Matsukawa R., Dubinsky Z., Kishimoto E., Masaki K., Masuda Y., Takeuchi T., Chihara M., Yamamoto Y., Niki E. and Karube I. (1997). A comparison of screening methods for antioxidant activity in seaweeds. Journal of Applied Phycology, 9, 29-35.
[5] Rangaiah S.G., Lakshmi P. and Manjula E. (2010). Antibacterial activity of seaweeds Gracilaria, Padina and Sargassum sps. on clinical and phytopathogens. Int. J. Chem. Anal. Sci., 1(6), 114-117.
[6] Kolanjinathan K. and Stella D. (2009). Antibacterial activity of marine macroalgae against human pathogens. Recent Res. Sci. Technol., 1(1), 20-22.
[7] Cordeiro R.A., Gomes V.M., Carvalho A.F.U. and Melo V.M.M. (2006). Effect of proteins from the red seaweed Hypnea musciformis (Wulfen) Lamouroux on the growth of human pathogen yeasts. Brazilian Arch. Biol. Technol., 49(6), 915-921.
[8] Tuney I., Cadirci B.H., Unal D. and Sukatar A. (2006). Antimicrobial activities of the extracts of marine algae from the coast of Urla (Izmir, Turkey). Turk. J. Biol., 30, 171-175.
[9] Blunt J.W., Copp B.R., Munro M.H.G., Northcote P.T., Prinsep M.R.
[10] Ito K., Hori K. (1989). Seaweed: chemical composition and potential food uses. Food Reviews International, 5, 101-144.
[11] Rahman A., Choudhary M. and Thomsen W. (2001). Bioassay Techniques for Drug Development. Harwood Academic Publishers, the Netherlands, pp. 16.
[12] Rahman A., Choudhary M. and Thomsen W. (2001). Bioassay Techniques for Drug Development. Harwood Academic Publishers, the Netherlands, pp. 16.
[13] AOAC (1995). Official Methods of Analysis of AOAC International, 16th edn., AOAC Int., Washington.
[14] Dubois M., Gilles K.A., Hamilton J.K., Rebers P.A., Smith F. (1956). Colorimetric method for determination of sugars and related substances. Anal. Chem., 28(3), 350-356, http://dx.doi.org/10.1021/ac60111a017.
[15] AOAC (2000). Official Methods of Analysis of AOAC International, 17th edn., AOAC Int., Washington.
[16] Fadeyi M.G., Adeoye A.E. and Olowokodejo J.D. (1989). Epidermal and phytochemical studies with genus of Boerhavia (Nyctaginaceae). International Journal of Crude Drug Research, 29, 178-184.
[17] Odebiyi A. and Sofowora A.E. (1990). Phytochemical screening of Nigerian medicinal plants, Part III. Lloydia, 41, 234-246.
[18] Harborne J.B. (1992). Phytochemical Methods. Chapman and Hall Publications, London, 7-8.
[19] Abulude F.O., Onibon V.O. and Oluwatoba F. (2004). Nutrition and nutritional composition of some tree barks. Nigerian Journal of Basic and Applied Sciences, 13, 43-49.
[20] Abulude F.O. (2007). Phytochemical screening and mineral contents of leaves of some Nigerian woody plants. Research Journal of Phytochemistry, 1, 33-39. doi:10.3923/rjphyto.2007.33.39.
[21] Shan B., Cai Y.Z., Brooks J.D. and Corke H. (2007). The in vitro antibacterial activity of dietary spice and medicinal herb. International Journal of Food Microbiology, 117, 112-119. doi:10.1016/j.ijfoodmicro.2007.03.003.
[22] Wang C., Mingyan W., Jingyu S., Li D. and Longmei Z. (1998). Research on the chemical constituents of Acanthophora spicifera in the South China. Bopuxue Zazhi, 15, 237-242.
[23] Zeng L.-M., Wang C.-J., Su J.-Y., Du L., Owen N.L., Lu Y., Lu N. and Zheng Q.-T. (2001). Flavonoids from the red alga Acanthophora spicifera. Chinese Journal of Chemistry, 19, 1097-1100. doi:10.1002/cjoc.20010191116.
[24] Güven K., Percot A. and Sezik E. (2010). Alkaloids in marine algae. Marine Drugs, 8, 269-284. doi:10.3390/md8020269.


Comparative Analysis of Extraction and Detection of RBCs and WBCs Using
Hough Transform and k-Means Clustering Algorithm
Monika Mogra, Arun Bansel, Vivek Srivastava
Department of Information Tech, Institute of Technology & Management, Bhilwara, Rajasthan - 249225, India,
Email:monika.mogra@gmail.com Ph: +91-9008516566
Abstract: Blood cell analysis is very important for all human beings, because blood contains WBCs, RBCs and platelets. The white blood cell count gives vital information about the blood and blood-related diseases, and helps diagnose many patient illnesses. This research work presents an adaptive approach for extracting, detecting and counting WBCs in microscopic blood sample images. Two different approaches are used to perform the task: the k-means clustering technique and the Hough transform. We also study different parameters, such as the number of cells and the number of WBCs, and calculate the execution time of our code.
Keywords: WBCs, RBCs, Hough transform, k-means clustering, time calculation, thresholding, image processing.
Introduction
One of the major challenges in computer vision is determining the location, shape, or quantity of instances of a particular object. An example is finding and counting the circular objects in an image. A number of feature extraction techniques are available for circle detection and extraction; one of the most commonly used methods is the Circular Hough Transform. The goal of this research note is to provide the user with an understanding of the operations behind these algorithms. An overview of the Hough Transform and k-means clustering is also given.
Hough Transform
Generalized Hough Transform
The Generalized Hough Transform is a modified version of the Hough Transform that searches not only for analytically defined shapes but also for arbitrary shapes. This method uses the principle of template matching, which relies on detecting smaller elements matching a template image.
Circular Hough Transform
The Circular Hough Transform (CHT) sets the radius to a constant value, or provides the user with the option of setting it prior to running the application. For each edge point, a circle is drawn with that point as origin and the given radius, and votes accumulate where many such circles intersect. The CHT therefore depends on a pre-defined value of the circle's radius.
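
Since the study's own implementation is not listed, the following Python sketch illustrates, under stated assumptions, how Circular Hough Transform detection could be set up with scikit-image; the file name blood_sample.png and the 15-30 pixel radius range are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of circle detection with the Circular Hough Transform,
# using scikit-image (illustrative; not the paper's MATLAB implementation).
import numpy as np
from skimage import io, color, feature
from skimage.transform import hough_circle, hough_circle_peaks

image = color.rgb2gray(io.imread("blood_sample.png"))   # hypothetical input
edges = feature.canny(image, sigma=2)                   # edge map used for voting

# Try a small range of radii around the expected cell size (assumed, in pixels).
radii = np.arange(15, 31, 2)
accumulator = hough_circle(edges, radii)

# Keep the strongest peaks; each peak is one detected circle (candidate cell).
_, cx, cy, detected_radii = hough_circle_peaks(accumulator, radii,
                                               total_num_peaks=50)
print(f"Detected {len(cx)} circular cells")
```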
k- Means Clustering
K-means clustering is one of the clustering algorithms; it is used to cluster observations into groups of related observations without any prior knowledge of those relationships. The k-means algorithm is one of the simplest clustering techniques and is commonly used in medical imaging, biometrics and related fields.
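
As an illustration of how pixel-level k-means could separate the WBC cluster, the following Python sketch with scikit-learn is only an assumed analogue of the paper's approach; the cluster count of 3 and the darkest-cluster heuristic are assumptions, not details from the paper.

```python
# Minimal sketch of k-means colour clustering for WBC extraction (illustrative).
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image = io.imread("blood_sample.png")            # hypothetical RGB input
pixels = image.reshape(-1, 3).astype(float)      # one RGB sample per pixel

# Cluster pixels into k groups (e.g. background, RBCs, WBC nuclei).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(image.shape[:2])

# WBC nuclei stain darkest, so pick the cluster with the lowest mean intensity.
wbc_cluster = int(np.argmin(kmeans.cluster_centers_.mean(axis=1)))
wbc_mask = labels == wbc_cluster
print("WBC pixels:", int(wbc_mask.sum()))
```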

Related Work

Venkatalakshmi B. et al. [1] analyzed that the major issue in a clinical laboratory is producing a precise result for every test, especially in the area of red blood cell counting. The number of red blood cells is very important to detect, as well as to follow the treatment of many diseases like anemia and leukemia; the red blood cell count gives vital information that helps diagnose many patient illnesses. S. Kareem et al. [2] described a novel idea to identify the total number of red blood cells (RBCs), as well as their location, in a Giemsa-stained thin blood film image. The method utilizes basic knowledge of cell structure and of the brightness of the components due to Giemsa staining of the sample, and detects and locates the RBCs in the image. Jameela Ali et al. [3] predominantly emphasized two algorithms, the Hough Transform and Sub-Pixel Edge Detection, and their application to 1-dimensional barcode scanning. The system is meant to verify barcodes on-line. It primarily focuses on two aspects of barcode verification: one is to detect the angle if the barcode is skewed in the image and correct it; the other is to detect the edges of a barcode in a real-time blurred image using sub-pixel edge detection. Ms. Minal et al. [4] stated that the Hough transform has been a frequently used method for detecting lines in images. However, when applying the Hough transform and derived algorithms with the standard Hough voting scheme to real-world images, the methods often suffer considerable degradation in performance, especially in detection rate, because of the large number of edges produced by complex background or texture. Naveed Abbas et al. [5] proposed a modified Hough transform that improves the detection of low-contrast circular objects. The original circular Hough transform and its numerous modifications were discussed and compared in order to improve both the efficiency and the computational complexity of the algorithm. Gaganjit Singh et al. [6] noted that conventional blood cell counting in the laboratory utilizes a hemocytometer and microscope; this conventional task depends on physician skill and is laborious. A. Shanmugam et al. [7] concluded that the Hough Transform is recognized as a powerful tool for graphic element extraction from images due to its global vision and robustness in noisy or degraded environments. However, the application of the HT has long been limited to small-size images: besides the well-known heavy computation in the accumulation, the peak detection and line verification become much more time-consuming for large-size images. J. C. Allayous et al. [8] introduced a new Randomized Hough Transform aimed at improving curve detection accuracy and robustness, as well as computational efficiency. Robustness and accuracy improvement is achieved by analytically propagating the errors of image pixels to the estimated curve parameters; the errors of the curve parameters are then used to determine the contribution of pixels to the accumulator array. Computational efficiency is achieved by mapping a set of points near certain selected seed points to the parameter space at a time. Clark F. Olson [9] stated techniques to perform fast and accurate curve detection using constrained Hough transforms, in which localization error can be propagated efficiently into the parameter space: a formal definition of the Hough transform is first reviewed and modified to allow formal treatment of localization error, and current Hough transform techniques are then analyzed with respect to this definition.
Research Methodology
To implement the objectives listed above, the following methodology is adopted.

As per first approach - k-MEANS CLUSTERING
1) Call input image
2) Clustering image
3) Histogram equalization
4) Image segmentation
5) Blood cell extraction
6) Counting cells

As per second approach - HOUGH TRANSFORM
1) Call Input image
2) Hough transform edge linking
3) Image segmentation
4) Snake body detection
5) Output image
6) Counting cells
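
The final counting step in both pipelines above reduces to labelling the connected regions of the binary WBC mask produced by the earlier stages. The paper does not list its counting code, so the following sketch with SciPy's connected-component labelling is only an assumed illustration; the placeholder mask and the 50-pixel minimum area are hypothetical.

```python
# Illustrative counting step: label connected regions of a binary WBC mask.
import numpy as np
from scipy import ndimage

# 'wbc_mask' would come from the segmentation stage (see the k-means sketch).
wbc_mask = np.zeros((100, 100), dtype=bool)      # placeholder mask
wbc_mask[10:25, 10:25] = True                    # fake cell 1
wbc_mask[60:80, 55:75] = True                    # fake cell 2

labels, num_regions = ndimage.label(wbc_mask)    # each region gets a unique label

# Discard tiny specks of noise below a minimum area (threshold is assumed).
sizes = ndimage.sum(wbc_mask, labels, range(1, num_regions + 1))
num_cells = int((sizes >= 50).sum())
print("WBC count:", num_cells)
```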

Results
The input image, shown in Figure 1, is a blood sample microscopic image. We use two approaches to reach our aim: k-means clustering and the Hough Transform. The input image contains several kinds of cells: WBCs, RBCs and platelets. We used the k-means clustering algorithm to extract the WBCs from the image; the extracted WBCs are shown in Figure 3, rendered in a distinct colour, with the white blood cells shown in purple and the background in black. A filter removes the noise from this image, which then acts as the input for the clustering stage. In the second image, where the cells are extracted, morphological operations, logical operations and clustering techniques are used to separate the white blood cells from the other cells and the background. A morphological operation and an XOR operation are applied to two binary images; the result is shown in Figure 2. A clustering algorithm is then applied to this image to extract the white blood cells, and the final counting of white blood cells is performed on the image in Figure 4. For this process the centre points of the white blood cells are needed; they are chosen automatically, since the k-means clustering algorithm has the property of selecting its k centre points through its own internal procedure.



Figure 1 Figure 2

Figure 3 Figure 4

Figure 5 Figure 6


Figure 7 Figure 8
The Hough Transform can be used to detect objects of any shape in the image, but WBCs are round, so we used the Circular Hough Transform. Figure 5 shows the circular cells found by the Hough Transform. A filter removes the noise from the image, which then acts as the input for the Hough Transform technique, the final step of counting white blood cells in the image. For this process the centre points of the white blood cells are needed. The counting of cells is done using the Hough transform: first the algorithm draws circles around all the cells, then the cells are counted using the same counting method as in our first algorithm. In Figure 7 we created a histogram of the blood sample image; it is a graphical representation of the image based on colour. In Figure 8 we calculated the total time to execute the complete code, using MATLAB's built-in clock function.
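
For readers reproducing the timing measurement outside MATLAB, a hedged Python analogue of the clock-based measurement might look like the following; count_wbcs is a hypothetical stand-in for the full pipeline.

```python
# Hedged Python analogue of the MATLAB clock-based timing described above.
import time

def count_wbcs():
    """Placeholder for the full segmentation-and-counting pipeline."""
    time.sleep(0.1)   # stands in for the real image-processing work
    return 42

start = time.perf_counter()
count = count_wbcs()
elapsed = time.perf_counter() - start
print(f"Counted {count} WBCs in {elapsed:.3f} s")
```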

Conclusion
This research work is based on counting blood cells in different blood sample images; it is important for every human being to know about their blood cells. The implemented algorithms are the k-means clustering algorithm and the Hough transform. The two methods cover extraction of cells, counting of cells, and measurement of the time taken to produce the output. This paper develops an approach for counting white blood cells in a blood image without the use of a microscope, because manual counting with a microscope is a much costlier process.


Future Scope
To make this technique more efficient for users counting blood cells in different images, one could develop a graphical user interface for computing the different parameters in a single window; that would be a good way to improve the design. In this research work a single image was processed at a time, but more graphic options could be added to work on several images at once. Other techniques could be used to implement the same design with reduced execution time, and additional parameters could also be calculated.

REFERENCES:
[1] Venkatalakshmi.B, Thilagavathi.K "Automatic Red Blood Cell Counting Using Hough Transform" in Proceedings of 2013 IEEE Conference on
Information and Communication Technologies (ICT 2013) in 2013.

[2] S.Kareem, R.C.S Morling and I.Kale "A Novel Method to Count the Red Blood Cells in Thin Blood Films" in 978-1-4244-9474-3/11/ IEEE in
2011.

[3] Jameela Ali,Abdul Rahim Ahmad, Loay E. George, Chen Soong Der, Sherna Aziz,"Red Blood Cell Recognition using Geometrical Features" in
IJCSI International Journal of Computer Science Issues, Vol. 10, Issue, 1 in January 2013.

[4] Ms. Minal D. Joshi, Prof. Atul H. Karode, Prof. S. R. Suralkar, "White Blood Cells Segmentation and Classification to Detect Acute Leukemia" in International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), Volume 2, ISSN 2278-6856, May-June 2013.

International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

674 www.ijergs.org

[5] Naveed Abbas, Prof. Dr. Dzulkifli Mohamad, "Microscopic RGB color images enhancement for blood cells segmentation in YCbCr color space for k-means clustering" in Journal of Theoretical and Applied Information Technology, Vol. 55, No. 1, 10th September 2013.

[6] Gaganjit Singh, Swarnalatha P., Tripathy B.K., Swetha Kakani"Convex Hull based WBC Computation for Leukaemia Detection" in
International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering Vol. 2, Issue 5 in 2013.

[7] A.Shanmugam "WBC Image Segmentation and Classification Using RVM" in Applied Mathematical Sciences, Vol. 8, 2014, no. 45, 2227 -
2237 Hikari Ltd in 2014.

[8] C. Allayous, S. Regis, A. Bruel, D. Schoevaert,R. Emilion and T. Marianne-Pepin,"Velocity Allowed Red Blood Cell Classification" in 10th
International IFAC Symposium on Computer Applications in Biotechnology in June 4-6, 2007.

[9] J. M. Sharif, M. F. Miswan, M. A. Ngadi, MdSah Hj ,Muhammad Mahadi bin Abdul Jamil,"Red Blood Cell Segmentation Using Masking and
Watershed Algorithm: A Preliminary Study" in International Conference on Biomedical Engineering (ICoBE) in 27-28 February 2012.

[10] Siyu Guo, Tony Pridmore, Yaguang Kong, Xufang Zhang, "An improved Hough transform voting scheme utilizing surround suppression" in Pattern Recognition Letters 30 (2009) 1241-1252.

[11] Harsh Kapadia, Alpesh Patel, "Application of Hough Transform and Sub-Pixel Edge Detection in 1-D Barcode Scanning" in International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 2, Issue 6, June 2013.

[12] Vinod V. Kimbahune, Mr. Nelesh J. Uke, "Blood Cell Image Segmentation and Counting" in International Journal of Engineering Science and Technology (IJEST), ISSN: 0975-5462, Vol. 3, No. 3, March 2011.

















Improve Workflow Scheduling Technique for Novel Particle Swarm
Optimization in Cloud Environment
R. Pragaladan¹, R. Maheswari²
¹Assistant Professor, Department of Computer Science, Sri Vasavi College, Erode, India
²Research Scholar (M.Tech), Department of Computer Science, Sri Vasavi College, Erode, India
E-mail: mahe.rk123@gmail.com

Abstract: Cloud computing is the latest distributed computing paradigm [1], [2], and it offers tremendous opportunities to solve large-scale scientific problems. However, it presents various challenges that need to be addressed in order for it to be efficiently utilized for workflow applications. Although the workflow scheduling problem has been widely studied, there are very few initiatives tailored for cloud environments. Furthermore, the existing works fail either to meet the user's quality of service (QoS) requirements or to incorporate some basic principles of cloud computing, such as the elasticity and heterogeneity of the computing resources. This paper proposes a resource provisioning and scheduling strategy for scientific workflows on Infrastructure as a Service (IaaS) clouds. The proposed system presents an algorithm based on particle swarm optimization (PSO), which aims to minimize the overall workflow execution cost while meeting deadline constraints.
Keywords: Cloud environments, resource allocation, scheduling, PSO, multi-cloud provider

I. INTRODUCTION
Scientific workflows [3] have ever-growing data and computing resource requirements and demand a high-performance cloud computing environment in order to be executed in a reasonable amount of time. These workflows are commonly modeled as a set of tasks interconnected via data or computing dependencies. Distributed resources have been studied extensively over the years, focusing on environments like grids and clusters. However, with the emergence of new paradigms such as cloud computing, novel approaches that address the particular challenges and opportunities of these technologies need to be developed.
Distributed environments have evolved from shared community platforms to usage-based models, the most recent of these being cloud environments. This novel technology enables the delivery of cloud-related resources over the Internet [4] and follows a pay-as-you-go model where users are charged based on their consumption. There are various types of cloud providers [5], each of which has different product offerings. They are classified into a hierarchy of as-a-service terms: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
A common characteristic of preceding works developed for clusters and grids is their focus on meeting application deadlines (the total amount of time taken to complete the application) while ignoring the cost of the infrastructure used. Even if suited for such environments, policies developed for clouds are obliged to consider the pay-per-use model of the infrastructure in order to avoid prohibitive and preventable costs.
Our proposed work is based on the meta-heuristic optimization technique particle swarm optimization (PSO). PSO is based on a swarm of particles moving through the search space and communicating with each other in order to find an optimal search direction. PSO has shown better computational performance than other evolutionary algorithms [6] and has fewer parameters to tune, which makes it easier to implement. Many problems in different areas have been successfully addressed by adapting PSO to specific domains; for instance, this technique

has been used to solve problems in areas such as reactive voltage control [7], pattern recognition [8] and data mining [9], among others.
In this paper, the proposed system develops a static, cost-minimizing, deadline-constrained scheduling approach.
The main contributions of this paper are:
- To define the problem of scheduling prioritized workflow ensembles under budget and deadline constraints.
- To analyze and develop several dynamic and static algorithms for task scheduling and resource provisioning that rely on workflow structure information (critical shortest paths and workflow levels) and estimates of task runtimes across multi-cloud providers.
- To evaluate these algorithms using an infrastructure model and the application, taking into account variations in task runtime estimates, provisioning delays, and failures.
- To discuss the performance of the algorithms on a set of synthetic workflow ensembles based on important, real scientific applications, using a broad range of different application scenarios and varying constraint values.
The rest of this paper is organized as follows. Section 2 presents the related work, followed by the main contribution, the resource allocation and scheduling models, and the problem definition in Section 3. Section 4 gives a brief introduction to NPSO and explains the proposed approach. Finally, Section 5 presents the evaluation of the algorithm, followed by the conclusions and future work in Sections 6 and 7.

II. RELATED WORK
Scientific workflows, usually represented as graphs, are an important class of applications that lead to challenging resource-management problems on grid and utility computing systems. Workflows for large computational problems are often composed of several interrelated workflows grouped into ensembles. Workflows in an ensemble typically have a similar structure, but they differ in their input data, number of tasks, and individual task sizes. Many applications require scientific workflows to run with a single cloud provider in cloud environments.
In general, scheduling multitask workflows on any distributed computing resources (including clouds) is an NP-hard problem [10]. The main challenge of dynamic workflow scheduling on virtual clusters lies in how to reduce the scheduling overhead so as to adapt to workload dynamics with heavy fluctuations. On a cloud platform, resource profiling and staged simulation over thousands or millions of feasible schedules are often performed if an optimal solution is demanded; an optimal workflow schedule on a cloud may take weeks to generate.
Maria Alejandra Rodriguez et al. applied the particle swarm optimization (PSO) method to solving complex problems with a very large solution space, and demonstrated that the PSO method is effective in generating soft or suboptimal solutions that reduce cost and communication for most NP-hard problems [10].
In this paper, a new novel particle swarm optimization (NPSO) algorithm is proposed. The NPSO applies the ordinal optimization (OO) method iteratively, in search of adaptive schedules to execute scientific workflows on multi-cloud-provider compute nodes with dynamic workloads [11]. During each iteration, the NPSO searches for a suboptimal or good-enough schedule with very low overhead. From a global point of view, NPSO can process successive iterations fast enough to absorb the dynamism of the workload variations. The initial idea of this paper was presented in the Deadline Based Resource Provisioning and Scheduling Algorithm [12] with some preliminary results. This paper extends significantly beyond the conference paper, with theoretical proofs supported by an entirely new set of experimental results.



III. MAIN CONTRIBUTIONS
The main contribution of this paper is a combined resource provisioning and scheduling strategy for executing scientific workflows on IaaS clouds. The scenario was modeled as an optimization problem which aims to minimize the overall execution cost while meeting a user-defined deadline, and was solved using the meta-heuristic optimization algorithm PSO. The proposed approach incorporates basic IaaS cloud principles such as a pay-as-you-go model, heterogeneity, and multiple clouds and cloud providers. Furthermore, our solution considers other characteristics typical of IaaS platforms, such as performance variation and dynamic VM boot time. The experiments conducted with four well-known workflows show that our solution has an overall better performance than the state-of-the-art algorithms. Furthermore, our heuristic is as successful in meeting deadlines as SCS, which is a dynamic algorithm. Also, in the best scenarios, when our heuristic, SCS and IC-PCP all meet the deadlines, ours is able to produce schedules with lower execution costs.

IV. PROPOSED SCHEME
The proposed system includes the entire existing system implementation. In addition, it extends the resource model to consider the data transfer cost between data centers, so that nodes can be deployed in different regions. The algorithm is extended with heuristics that ensure a task is assigned to a node with sufficient memory to execute it. It also allows different options for the selection of the initial resource pool; for example, for a given task, different sets of initial resource requirements can be assigned. In addition, data transfer costs between data centers are calculated so as to minimize the cost of execution in a multi-cloud service provider environment. The main contributions of the proposed system, addressing the shortcomings of the existing system, are:
- Adaptability in situations where multiple initial sets of resources are available.
- Suitability for multi-cloud service provider environments.
- Reduced data transfer cost between different cloud data centers.
PROPOSED NPSO ALGORITHM
Input: set of workflow tasks T, initial resources R,
       particle dimension dp, entropy function E,
       optimal best opbest, optimal global best ogbest
Output: multi-cloud provider schedule
1. Set the dimension of the particles to dp.
2. Initialize the population of particles with random positions and velocities.
3. For each particle, calculate its entropy value:
   a. Compare the particle's entropy value with the particle's opbest.
      - If the current value is better than opbest, set opbest to the current value and location.
   b. Compare the particle's entropy value with the global best ogbest.
      - If the current value is better than ogbest, set ogbest to the current value and location.
   c. Update the position and velocity of the particle:
      x_i(t+1) = x_i(t) + v_i(t)
4. Repeat from Step 3 until the stopping criterion is met.
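
The paper does not list source code for the NPSO, and its entropy-based fitness is not specified in enough detail to reproduce, so the following Python sketch only illustrates the general particle encoding and update loop under stated assumptions: each particle has one coordinate per task, the integer part of a coordinate indexes a VM (as described in the next paragraph), and the fitness is a hypothetical monetary cost built from made-up task lengths, VM speeds and prices.

```python
# Minimal PSO scheduling sketch (illustrative; not the authors' NPSO).
# Each particle has one coordinate per task; int(coordinate) is the VM index.
import random

NUM_TASKS, NUM_VMS = 8, 4
task_length = [random.uniform(10, 50) for _ in range(NUM_TASKS)]  # assumed workloads
vm_speed = [1.0, 1.5, 2.0, 2.5]                                   # assumed VM speeds
vm_price = [0.1, 0.2, 0.3, 0.4]                                   # assumed cost per time unit

def decode(position):
    """Map each real-valued coordinate to a VM index via its integer part."""
    return [min(int(x), NUM_VMS - 1) for x in position]

def cost(position):
    """Hypothetical fitness: total monetary cost of the encoded schedule."""
    return sum(task_length[t] / vm_speed[vm] * vm_price[vm]
               for t, vm in enumerate(decode(position)))

W, C1, C2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (assumed)
swarm = [[random.uniform(0, NUM_VMS) for _ in range(NUM_TASKS)] for _ in range(20)]
vel = [[0.0] * NUM_TASKS for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=cost)[:]

for _ in range(100):               # stopping criterion: fixed iteration budget
    for i, p in enumerate(swarm):
        for d in range(NUM_TASKS): # canonical velocity and position updates
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - p[d])
                         + C2 * random.random() * (gbest[d] - p[d]))
            p[d] = min(max(p[d] + vel[i][d], 0.0), NUM_VMS - 1e-9)
        if cost(p) < cost(pbest[i]):   # update personal best (opbest)
            pbest[i] = p[:]
        if cost(p) < cost(gbest):      # update global best (ogbest)
            gbest = p[:]

print("Best task-to-VM mapping:", decode(gbest), "cost:", round(cost(gbest), 2))
```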

The range in which a particle is allowed to move is determined in this case by the number of resources available to run the tasks. As a result, the value of a coordinate can range from 0 to the number of VMs in the initial resource pool. Based on this, the integer part of the value of each coordinate in a particle's position corresponds to a resource index and represents the compute resource assigned to the task defined by that particular coordinate. In this way, the particle's position encodes a mapping of tasks to resources.
V. PERFORMANCE EVALUATION
The experiments were conducted using different deadlines. These deadlines were calculated so that their values lie between the slowest and the fastest runtimes. To calculate these runtimes, two additional policies were implemented. The first calculates the schedule with the slowest execution time: a single VM of the cheapest type is leased and all the workflow tasks are executed on it. The second calculates the schedule with the fastest execution time: one VM of the fastest type is leased for each workflow task. Although these policies ignore data transfer times, they are still a good approximation of what the slowest and fastest runtimes would be. To estimate the deadlines, the difference between the fastest and the slowest times is divided by five to obtain an interval size. The first deadline is obtained by adding one interval size to the fastest runtime, the second by adding two interval sizes, and so on. In this way we analyze the behavior of the algorithms as the deadlines increase from stricter values to more relaxed ones.
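
As a concrete reading of this procedure (the runtimes below are made-up numbers, not the paper's data), the deadline intervals can be generated as follows:

```python
# Deadline intervals as described above, with made-up runtimes.
fastest, slowest = 100.0, 600.0           # hypothetical schedule runtimes
interval = (slowest - fastest) / 5        # interval size = 100.0

# Deadline k = fastest runtime + k interval sizes, for k = 1..5.
deadlines = [fastest + k * interval for k in range(1, 6)]
print(deadlines)                          # [200.0, 300.0, 400.0, 500.0, 600.0]
```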
The results in Fig 1.1, obtained for the PSO and NPSO algorithms on the first workflow, are very similar to those obtained for the other workflow. Performance improves considerably at the 4th interval, where a 100 percent hit rate is achieved.

Fig 1.1 PSO vs NPSO performance analysis, Deadline Workflow-I

The results in Fig 1.2, obtained for the PSO and NPSO algorithms on the second workflow, are very similar. Performance improves considerably at the 3rd interval, where a 100 percent hit rate is achieved.

The average execution costs obtained for each workflow are shown in Fig. 1.3. We also show the mean for PSO, as the algorithms should be able to generate a cost-efficient schedule, but not at the expense of a long execution time. The reference line on each panel displaying the average for NPSO is the deadline corresponding to the given deadline interval. We present this because there is no use in an algorithm generating very cheap schedules that do not meet the deadlines; the cost comparison is therefore made amongst those heuristics which managed to meet the particular deadline in a given case.



Fig 1.2 PSO vs NPSO performance analysis, Deadline Workflow-II

Fig 1.3 Reduced cost and communication for PSO and NPSO
ACKNOWLEDGMENT
My abundant thanks to Dr. JAYASANKAR, Principal, Sri Vasavi College, Erode, who gave me the opportunity to do this research work. I am deeply indebted to Prof. B. MAHALINGAM M.Sc., M.Phil. (CS), M.Phil. (Maths), P.G.D.C.A., Head, Department of Computer Science at Sri Vasavi College, Erode, for his timely help during the paper. I express my deep gratitude and sincere thanks to my supervisor R. PRAGALADAN M.Sc., M.Phil., Assistant Professor, Department of Computer Science at Sri Vasavi College, Erode, whose valuable suggestions, innovative ideas, constructive criticism and inspiring guidance enabled me to complete the work successfully.
VI.CONCLUSION
In conclusion, the proposed system explores different options for the selection of the initial resource pool in a multi-provider cloud environment. The initial resource pool selected by the cloud provider has a significant impact on the performance of the algorithm. We would also like to experiment with different optimization techniques and compare their performance with PSO. Finally, we aim to implement our approach in a workflow engine so that it can be utilized for deploying applications in real-life environments.
VII. FUTURE ENHANCEMENTS

Cloud computing is a broad research field; this paper describes the resource-allocation and workflow-scheduling process across cloud providers in cloud environments. The proposed system's main contribution is resource-allocation scheduling using a novel particle swarm optimization algorithm, which solves the optimization problem and allocates workflow jobs across resources from multiple cloud providers within a single cloud environment. In future work, new algorithms, such as newer AI techniques, could be applied to solve the workflow scheduling problem in cloud environments.

REFERENCES:
[1] Velte, A., Velte, T., Elsenpeter, R. (2010), "Cloud Computing: A Practical Approach", McGraw-Hill Osborne (primary book to be used).
[2] Reese, G. (2009), "Cloud Application Architectures: Building Applications and Infrastructure in the Cloud", O'Reilly, USA (secondary book from which 3-4 chapters are likely to be used).
[3] Maria Alejandra Rodriguez and Rajkumar Buyya, "Deadline Based Resource Provisioning and Scheduling Algorithm for Scientific Workflows on Clouds", IEEE Transactions on Cloud Computing, vol. 2, no. 2, April-June 2014.
[4] G. Juve, A. Chervenak, E. Deelman, S. Bharathi, G. Mehta, and K. Vahi, "Characterizing and profiling scientific workflows", Future Generation Comput. Syst., vol. 29, no. 3, pp. 682-692, 2012.
[5] P. Mell, T. Grance, "The NIST definition of cloud computing: recommendations of the National Institute of Standards and Technology", Special Publication 800-145, NIST, Gaithersburg, 2011.
[6] R. Buyya, J. Broberg, and A. M. Goscinski, Eds., "Cloud Computing: Principles and Paradigms", vol. 87, Hoboken, NJ, USA: Wiley, 2010.
[7] Y. Fukuyama and Y. Nakanishi, "A particle swarm optimization for reactive power and voltage control considering voltage stability", in Proc. 11th IEEE Int. Conf. Intell. Syst. Appl. Power Syst., 1999, pp. 117-121.
[8] C. O. Ourique, E. C. Biscaia Jr., and J. C. Pinto, "The use of particle swarm optimization for dynamical analysis in chemical processes", Comput. Chem. Eng., vol. 26, no. 12, pp. 1783-1793, 2002.
[9] T. Sousa, A. Silva, and A. Neves, "Particle swarm based data mining algorithms for classification tasks", Parallel Comput., vol. 30, no. 5, pp. 767-783, 2004.
[10] M. R. Garey and D. S. Johnson, "Computers and Intractability: A Guide to the Theory of NP-Completeness", vol. 238, New York, NY, USA: Freeman, 1979.
[11] M. Rahman, S. Venugopal, and R. Buyya, "A dynamic critical path algorithm for scheduling scientific workflow applications on global grids", in Proc. 3rd IEEE Int. Conf. e-Sci. Grid Comput., 2007, pp. 35-42.
[12] L. Golubchik and J. Lui, "Bounding of Performance Measures for Threshold-based Systems: Theory and Application to Dynamic Resource Management in Video-on-Demand Servers", IEEE Transactions on Computers, 51(4), pp. 353-372, April 2002.
[13] F. Zhang, J. Cao, K. Hwang, and C. Wu, "Ordinal Optimized Scheduling of Scientific Workflows in Elastic Compute Clouds", Third IEEE Int'l Conf. on Cloud Computing Technology and Science (CloudCom'11), Athens, Greece, Nov. 29 - Dec. 1, 2011, pp. 9-17.
[14] J. Kennedy and R. Eberhart, "Particle swarm optimization", in Proc. 6th IEEE Int. Conf. Neural Netw., 1995, pp. 1942-1948.

Design of Welding Fixtures and Positioners
Prof. S.N.Shinde, Siddharth Kshirsagar, Aniruddha Patil, Tejas Parge, Ritesh Lomte
University Of Pune, sidksh7@gmail.com, 9604536827

Abstract:- Robotic welding requires specialized fixtures to accurately hold the work piece during the welding operation. Despite the large variety of welding fixtures available today, the focus has shifted to making the welding arms more versatile, not the fixture. The new fixture design reduces cycle time and operator labor while increasing functionality, and allows complex welding operations to be completed on simple two-axis welding arms.
Keywords:- fixtures, positioners, welding, drop center, CNC, CMM, CAD simulation, automation
1. Introduction
Fixed-automation welding typically uses welding equipment performing dedicated movements on a weld joint that is highly repeatable in shapes such as circles, arcs and longitudinal seams [8]. The welding machine systems can be flexible and can be adapted to a differing range of weld automation applications. The weld equipment operations are normally fixed to perform a basic geometric welding application. Welding positioning equipment and machine systems are the backbone of fixed welding automation, usually including welding lathes, turntable positioners, circle welders, and longitudinal seam welders. To address this issue, we designed and constructed a prototype welding fixture with enhanced mobility [4]. The principles of positioning are the same for all weldments, large and small. Many companies extend their expertise to designing and manufacturing sheet metal welding fixtures, assembly fixtures, checking fixtures and inspection fixtures, and cater to requirements ranging from a single fixture to a turnkey solution. They have expertise in the design and manufacture of manual, pneumatic and hydraulic fixtures along with installation and commissioning. With the help of sophisticated CAD and simulation tools, a feasibility study is carried out covering gun approach, weld study and line layout. All fixtures are validated and supported with in-house CMM equipment reports for record.
The latest developments from prototype manufacturing to international standards find their way into the series production of all major vehicle manufacturers. A highly flexible robotic welding system is supporting the chassis specialist in making process and manufacturing operations ever more efficient. The complex nature of the welding process, due to multi-field (thermal, mechanical, metallurgical, etc.) interactions and intricate geometries in real-world applications, has made the prediction of weld-induced imperfections a truly difficult and computationally intensive task. However, with the availability of 64-bit computers and refined FE tools, welding engineers around the world are now more inclined towards computer simulation of complex welding phenomena instead of the conventional trial-and-error approach on the shop floor, which used to be the most common practice [3]. A significant body of simulation and experimental work focusing on circumferential welding is available in the literature. As the computer simulation of welding processes is highly computationally intensive, and large computer storage and CPU time are required, most of the previous research reduces the computational power requirement through simplifying assumptions such as rotational symmetry and lateral symmetry in the numerical simulations. These assumptions reduce the computational demand at the cost of the accuracy of the results, because the model is oversimplified by limiting the solution domain to only a section of the whole domain with forced symmetry assumptions that do not prevail in practice.
2. Basic Study of Welding Fixtures and Positioners
There are all types and designs of welding positioners. Some have fixed tables that are always vertical, which are called headstock positioners. The type where the table tilts on geared segments, rotates, and offers variable speed in both directions is most often just called a welding positioner. These are used to weld and position everything from pipe to tanks, just about anything that you need to rotate or position in the flat for a smooth, fluid weld. This often gives welders faster welding speeds with better x-ray-quality welds. Most positioners are rated with what is called the C.G. (centre of gravity), and this rating varies between types and designs of positioners. The centre-of-gravity rating is an indicator of the amount of torque that the positioner has for rotation purposes. Tilting positioners are also rated for the amount of tilt torque. The rating is not only from the face plate away; it is also the rating from the centre of the positioner table out. This is an important factor if you are going to be welding offset loading like pipe elbows and tees.

If your positioner is not sized correctly, the rotation speed of the positioner's table will speed up and slow down. If your positioner does this, you could be doing damage to the motors and gearbox of the positioner. You seldom hear a welding fabricator complain that they have too large a weld positioner. Over time this will also cause the ring gear and other positioner parts to wear more quickly, causing sloppy rotation and, at some point, failure of the welding positioner's parts. If this is happening, you need to look at purchasing either a larger positioner or one with higher torque. This will ensure smooth welding-table rotation, making for better quality welds, and will also be easier on the welder. If you are doing longer concentric parts, like splicing pipe, you can use outboard support stands to overcome the centre-of-gravity rating of the positioner. This is a common practice even when the load is within the positioner's rating; it just helps the fabrication shop keep the positioner in quality working condition. USA-built positioners tend to be rated at a C.G. of 4"-6" and 12" away from the centre of the table as well as away from the welding table face plate. Things to ask about when you are buying a welding positioner are where it was built and what the spare-parts availability is.
Ask what the warranty is and whether it covers parts installed in your shop; with most USA manufacturers, installation is part of the warranty if the unit is shipped back to the manufacturer. Many will send the part so you can install it yourself and receive compensation for a fair labour cost. Imported positioners have been known to have long lead times for parts, and those brands, unlike most positioners made in the USA, do not always use common standard parts. One might buy a positioner and find it has different drives, motors and gearboxes; identifying which part is needed is then time-consuming and frustrating when replacement parts are required. Some of the imports are very low cost, using cheaper gearboxes, drives, motors and inferior steel, but the low cost is soon forgotten when the positioner stops working. You may also find that the gearbox or electrical parts do not match from positioner to positioner, even within the same model. Most, if not all, positioners built in North America are built to much higher standards than many of the far-east imports. When looking at purchasing a large or small welding positioner, ask where it is built. Equipment made in the USA is manufactured to a higher standard than many of the imported models you will find at national big-box supply stores that tend to sell all types of unrelated equipment [5].
Five important things a welder needs to know about weld positioners when selecting, operating and maintaining a positioner:
2.1. Remember the CG:-
Selecting the right positioning device for the job involves accounting not only for the weight and size of the weldment but also for its centre of gravity (CG) and how far it is from the positioning device.
The CG changes as the welder adds material and parts to the positioner, so this change must be taken into account.
2.2. Attach the weldment correctly:-
This is important, as it is at this point that separation would naturally occur.
Round parts are attached by a three-jaw chuck for easy part alignment.
2.3. Use turning rolls for cylinders:-
Small turning rolls, powered or idler type, can rotate a pipe or a vessel to enable an easy circumferential weld.
A combination of a roller-type pipe stand and a vertical-faced table positioner provides stability and safety when a round part is extended outward.
2.4. Keep it flat:-
The unit is mounted on a flat, even surface to prevent it from tipping.
Fasteners should be used to secure the positioner to a stable surface to counter unexpected forces.
2.5. Connect the ground current to the positioner:-
Ground current transfers from the table into the chassis, which eliminates having to continuously move and replace a welding clamp.
Without proper grounding, electrical parts can be damaged and substandard weld deposits made.
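
To make the C.G. rating concrete: the rotation-torque demand grows with the load weight times the CG offset from the table's rotation axis, and the tilt-torque demand with the CG offset from the tilt axis. The following Python sketch shows that sizing arithmetic under assumed numbers only; it is a simplified illustration, not a substitute for a manufacturer's load chart.

```python
# Simplified positioner sizing arithmetic (illustrative numbers, not a load chart).
def required_torques(weight_lb, cg_from_faceplate_in, cg_from_centerline_in,
                     table_to_tilt_axis_in):
    """Return (rotation, tilt) torque requirements in inch-pounds."""
    # Rotation torque resists the load's CG offset from the table's rotation axis.
    rotation = weight_lb * cg_from_centerline_in
    # Tilt torque sees the CG offset from the tilt axis: the faceplate offset
    # plus the table-to-tilt-axis distance (assumed geometry of a tilt-table unit).
    tilt = weight_lb * (cg_from_faceplate_in + table_to_tilt_axis_in)
    return rotation, tilt

# Example: 1000 lb weldment with its CG 6 in off the faceplate, 4 in off centre.
rot, tilt = required_torques(1000, 6, 4, table_to_tilt_axis_in=8)
print(f"Rotation torque: {rot} in-lb, tilt torque: {tilt} in-lb")
```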

3. Latest Trends in Industries
Alignment and positioning equipment are important as they are required in nearly all research and manufacturing processes. The increasing focus on nano-scale engineering has forced engineers and scientists to think about the types of alignment devices that will be required for new and emerging applications. At present, they may choose from two types of state-of-the-art devices: fixtures or positioners. Fixtures are devices which define a part's fixated orientation and location via a fixed geometry.
3.1 Drop-centre gravity positioner:-
The DCG series positioner provides 2-axis motion: continuous rotation and 180° tilt from the horizontal table position. This configuration of positioner can also be made in a geared-elevation version with a third powered axis for elevation. The worktable's surface can be specified at varying distances below the tilt axis, as can the swing radius clearances from the table's rotation axis to the nearest obstruction. Due to the configuration of these models, it is necessary to consult the factory for sizing and capacity requirements. The counterbalancing effect of the cantilevered hanger precludes pre-calculated load capacity charts: since applications require differing hanger lengths and table "dropped" distances below the tilt axis, the counterbalancing effect will vary greatly. The load, centre-of-gravity location, and swing clearance will be required to assist the factory in the selection of the correct model.
Drop-centre gravity features:-
- AC variable speed drives and motors
- Optional servo drives
- Powered 180° tilt
- Optional geared-elevation models available
- Robotic versions


Fig 1. Welding positioner [8]


3.2 Koike Aronson / Ransome [7] :-
Positioners provide all the advantages of standard fixed-height models but also include adjustable elevation to provide ergonomic working heights and improve safety. A gear rack cut into the vertical posts and multiple interlocked drive pinions provide the highest degree of safety in the industry. NEMA 12 electrical ground blocks and tapered roller bearings are provided on every unit. Life-time lubrication and sealed drive units ensure many years of trouble-free service. Specially engineered elevation heights and options are also available. Headstock and tailstock axes on Koike Aronson Ransome systems are electronically synchronized to prevent work piece / fixture skewing. Both axes are driven by an encoded motor, controlled by a drive with internal PLC capabilities. Encoder information from both axes is fed back to the tailstock drive. The tailstock encoder provides closed-loop position information to the tailstock drive, which in turn follows the reference signal from the headstock encoder. The headstock drive and motor respond to commands from the operator control pendant (or, optionally, a supervisory programmable control system). When the headstock moves, the tailstock automatically follows, step for step, based upon encoder feedback. If any errors are detected internally, or from external devices, by either drive, the system will immediately halt to prevent work piece / fixture skewing.
3.3 USA Patents: [5]
Over the years, many US scientists have carried out research and review in this manufacturing field of welding fixtures and positioners. These patents revolutionized the design procedures and advanced their application purposes.


3.3.1. By Edward.V.Cullen
The present invention relates generally to work positioners, more particularly to the type which is designed and adapted to support and hold metallic structural pieces in different positions: a work supporting table which is carried by a structure so that it is rotatable about its own centre and is capable of being tilted bodily into different angular positions as well as raised or lowered.
3.3.2. By Edmund Bullock
According to the invention, a jig for use in assembling component parts of a composite metal structure comprises a framework of cylindrical form adapted to receive and secure said component parts in correct relative position for connection with one another, and capable of being rotated and tilted at the other end.
This invention relates to the improvement of welding fixtures for supporting a heavy casting or a frame for welding, and has as a principal object to provide such a fixture upon which material may be attached without use of a crane and which will permit access to any side of the frame or casting.
The latest trends show a proclivity towards positioners of various kinds which allow the operation to be done quickly and effectively without making the operator assume disadvantageous, unsafe or awkward positions.
Having thus described the invention what I claim as new and desire to secure by Letters Patent is:
i. A work positioner adapted for use in connection with welding and comprising a base with a bracket thereon, a member connected
pivotally to the bracket so that it is capable of being tilted in a vertical plane, a work supporting table connected to the member so that
it is rotatable about its centre, irreversible gearing between the member and table for rotating the table relatively to said member, and
means for readily rendering said gearing inoperative so as to release or free the table for manual rotation.

ii. A work positioner of the character described comprising a base with a bracket thereon, a member connected pivotally to the bracket
so that it is capable of being tilted in a vertical plane, a work supporting table connected to the member.
iii. A work positioner comprising a base with a bracket thereon, a member connected pivotally to the bracket so that it is capable of
being tilted in a vertical plane, a work supporting table connected to the member so that it is rotatable about its centre, irreversible
gearing between the member and the table for rotating the table relatively to the member, including a worm and a worm gear
normally in mesh with the worm, and means whereby the worm may be readily disengaged from the worm gear in order to render the
gearing inoperative and thus free or release the table for manual rotation.
iv. A work positioner comprising a base, a bracket carried by the base and embodying a pair of laterally spaced parallel arms
extending upwardly and outwardly at an acute angle with respect to the horizontal, a sector-shaped member disposed between the arms
and having the apex or hub part thereof pivotally connected to the upper extremities of said arms so that it is adapted to tilt in a
vertical plane, a work supporting table connected to the member so that it is rotatable about its centre, gearing for tilting the member
and table, including a pinion disposed between the lower or inner ends of the arms, and an arcuate series of teeth on the periphery of
the member and in mesh with said pinion, irreversible gearing between the member and the table for rotating the table relatively to the
member, and means for readily rendering the last mentioned gearing inoperative so as to free or release the table for manual rotation.


v. A work positioner comprising a base, a bracket carried by the base and embodying an outwardly and upwardly extending arm, a
sector-shaped member embodying a chamber therein and having the apex or hub part thereof connected pivotally to the upper
extremity of said arm so that it is capable of being tilted in a vertical plane, a work supporting table connected to the member so that it
is rotatable about its centre, gearing for tilting the member and table, including a pinion adjacent the lower end of the arm, and an
arcuate series of teeth on the periphery of the member and in mesh with the pinion, irreversible gearing for rotating the table
relatively to the member, including a pair of normally meshing gears in said chamber, and means whereby one of the gears may be
readily disengaged from the other gear in order to render the last mentioned gearing inoperative and thus free the table for manual
rotation.
vi. A work positioner adapted for use in connection with welding and comprising a base with a bracket thereon, a member embodying
a bearing and connected pivotally to the bracket so that it is capable of being tilted in a vertical plane, a work supporting and retaining
table connected to the member so that it is rotatable about its centre, irreversible gearing between the member and the table for
rotating the table relatively to the member, including a drive shaft journaled in the bearing of said member, a worm fixed to the shaft
and a worm gear operatively connected to the table and adapted to be driven by the worm in connection with drive of the shaft, said
shaft being axially slidable in said bearing so that the worm may be shifted out of engagement with the worm gear when it is desired
to render the gearing inoperative and free the table for manual rotation, and releasable means associated with said shaft and adapted to
hold the latter against axial displacement when the worm is shifted into engagement with the worm gear.

4. Objectives of Welding Fixtures & Positioners Along With Its Advantages
4.1 Objective:
The primary objective of the invention is to provide a supporting structure having greater capabilities, one that can not only be used
more expeditiously but can also handle structural assemblies too bulky to be handled manually. Another objective is to provide a
positioner which:
- Occupies a small space.
- Is both rugged and durable.
- Efficiently and effectively fulfills its intended purpose.
- Is capable of handling small as well as large work pieces.
- Retains the framework in the various positions into which it is swung.
- Facilitates assembly of components in the correct position.
4.2 Advantages:
- Reduces welder fatigue.
- Increases welder safety.
- Improves weld quality.
- Increases productivity over manually positioning the parts.
- Assists welders in maneuvering and welding large weldments and parts.
- Ensures smooth welding table rotation.
- Enables faster welding speeds, especially for obtaining X-ray-quality welds.




5. Design Features

The principles of positioning are the same for all weldments, large or small. The base product is affixed to the appropriate type and
design of welding positioner and moved by powered mechanical means into the proper weld position; this allows quality welds
and faster assembly. Fabricated sub-assemblies are added, and the entire weldment can be moved to give the welder easy access to the
weld joints. A properly positioned weldment, regardless of size, reduces welder fatigue, increases safety and improves weld quality.
Moving the weldment with a welding positioner positions the welding area so that welders are not forced to weld out of position or in
an uncomfortable posture. Safety is improved when the weldment is fixed to a weld positioning device of the proper type and design.
Cranes, chains, slings and other less safe methods for moving a part create uncontrolled motion, which is a safety hazard. With a
proper weld positioner, welders do not have to maneuver underneath a possibly heavy weldment, which reduces the risk of injury
from falling sparks, slag or metal parts.
While many welders are qualified to do overhead and vertical welding, downhand welds require less training, allowing
newer welders to produce quality welds. Gravity helps the welder in a downhand weld, resulting in equal legs on fillet welds, a
smoother bead surface and reduced cleanup and rework times. By combining a positioner with a welding power source and a torch
stand, a welder can perform semiautomatic welding that is productive and ergonomically friendly. The positioner holds the part and
maneuvers it under a stationary torch. The torch can be fitted with a weaving device to allow oscillation to fill large gaps or V-grooves.
Consistent speed and torch position improve the quality of the weld with greater repeatability. By using a communication cable
between the integrated positioner and the welding power supply, the operator only needs to signal a start through a foot pedal or a start
button, and the welding cycle continues until a completion signal is sent automatically. This method, typically used on
circumferential welds, can incorporate dwell times to create a puddle and fill the crater. The completed part is removed and another is
started. Fabrication welders should keep these rules paramount when choosing and operating a weld positioner.
When a weldment is cylindrical, it can be supported while being rotated. Small turning-roll idlers and jack stands with
rollers can support the cylinder during rotation. A long pipe or vessel can use these to help carry a centre of gravity that lies away from
the positioner table face plate. Idler rolls are not powered but can be added in series to support such pipes and tanks; often they allow
a smaller positioner to handle parts that once required larger weld positioning devices. They do not, however, help offset loads whose
centre of gravity lies away from the centre of the table, out toward its edge or further. The combination of a pipe stand and a
vertical-faced table positioner provides stability and safety when a round part is extended outward: the support rollers provide two
points of contact and give the centre of gravity added support. With welding positioning equipment and machines it is important that
the parts be mounted to a flat and even surface to prevent tipping. Most units have mounting holes that should be used to secure the
positioner against tipping if it encounters an unexpected force. Some types of positioner have leg extensions to prevent forward
tipping, but if the weldment offset load is too great the extensions will not prevent side tipping. Small weld positioners are best bolted
to the welding table or surface to ensure proper grounding and tipping prevention.

For experimental purposes, for a part of size 1 x 1.5 m and weight of approximately 1.2 tonnes given by the customer, the
methodology applied is:


5.1 Construction:-

Each weld on any component is made using a specific welding process with the aid of a highly focused electrode and shielding gas;
the large degree of control the welder has over the heat intensity leads to the production of very strong and consistent welds [2].

- The base is fabricated and machined and is provided with a support pin acting as a pivot, with a high clamping pressure of
700 bar for holding parts.
- The base is provided with 4 hydraulic cylinders and 1 main cylinder.
- Gear transmission is provided by a motor driving through a pinion with a gear reduction ratio.
- The major contribution in this process is made by the fabrication welding system mounted on the base, used especially for
welding intricate parts such as the chassis of a JCB.
- Sleeve bearings are mounted in order to reduce backlash error and to provide near-frictionless operation, analogous to an escalator.
- A power pack unit includes pumps which drive hydraulic pivots at pressures within the range of 40-50 bar.
- There are two control panels:
i. Primary: used to operate the positioner angle and motor speed with push buttons or PLC programming.
ii. Secondary: used to vary the clamping pressure of the fixtures mounted on the base table.
- In order to achieve flexibility in operation, the clamps slide on a guide way with a lock-pin arrangement.
- A hinge pin plays a pivotal role in fixing the hydraulic cylinder to the base of the working cylinder.
- To realise the advantages stated above, manufacturing is carried out mainly on a Vertical Machining Centre (VMC), which
performs several operations such as milling, turning and boring and is more advanced than a basic CNC machine.

5.2 Material:
- The fabrication system mounted on the base is made of mild steel.
- Hard parts which are prone to friction are made of alloy steel grade EN-19, which has high tensile strength, good ductility
and shock-resisting properties.
- Pins are made of 20MnCr5, toughened and case hardened for smooth operation.





Fig. 2 - Fixture assembly [8]


5.3 Procedure

Complex alignment and positioning equipment are important because they are required in nearly all research and manufacturing
processes [1]. The increasing focus on nano-scale engineering has forced engineers and scientists to think about the types of
alignment devices that will be required for new and emerging applications. At present, they may choose between two types of
state-of-the-art devices: fixtures and positioners. Fixtures are devices which define a part's fixed orientation and location via a
fixed geometry.

- Material selection.
- Analysis of the selected material using computer-aided software.
- Prototype design.
- Performing tests on the prototype.
- Calculating the centre of gravity based on the test reports.
- Gear box design for speed variation.
- Calculating the pressure to be supplied by the hydraulic cylinders for the clamping operation (sketched below).
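As a rough illustration of the last step, the pressure the hydraulic cylinders must supply can be estimated from the required clamping force and the piston area. The sketch below (Python) uses illustrative values only; the actual cylinder bores and clamp loads are not given in the paper:

import math

def required_pressure_bar(clamp_force_N, piston_dia_mm, n_cylinders):
    # p = F / (n * A); N/mm^2 (MPa) converted to bar (1 MPa = 10 bar)
    area_mm2 = math.pi * (piston_dia_mm / 2.0) ** 2
    return 10.0 * clamp_force_N / (n_cylinders * area_mm2)

# e.g. a 12 kN clamp load shared by the 4 base cylinders, assuming 63 mm bores:
print("%.1f bar" % required_pressure_bar(12000, 63, 4))   # ~9.6 bar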


Acknowledgements


Inspiration and guidance are invaluable in every aspect of life, especially in academics, and we have received both from
our company, Adroit Enterprises. We would like to thank them for making the complete presentation of our project possible
and for the endless contribution of time, effort, valuable guidance and encouragement given to us. The following work would
not have been possible without the help and advice of Adroit Enterprises, whom we acknowledge for their faith in our abilities
and for letting us find our own way.


Conclusion

The conclusion is drawn on the basis of the information collected on each aspect of our project, and it suggests that, if applied,
it will yield an even better machine than the one we have designed. The process of conducting operations related to welding
fixtures and positioners helps in gaining a deeper understanding as well as an effective project process. The prototype
construction proves fruitful in analyzing the process for its potential as a finished product. In today's market, all large
manufacturers are automating as much of their production line as possible. Automated processes have been in high demand
over the past two decades, but there is still room for improvement. Welding fixtures close a gap in the engineering of automated
fixture mechanisms. Everything from finding resources for research material to design updates of the part makes the task of
accurately prototyping the real design difficult. It is important that the design satisfies all of the functional requirements and
design parameters outlined at the start of the project. In order to meet the requirements of the fixture, customization is done by
making the clamping system practical for various sizes and geometries. A few other calculations that would ultimately improve
the quality of the welding fixture are stress analysis and cost-benefit analysis. Stress analysis and friction analysis would both
help in the selection of the material to be used for each part of the machine; thorough stress calculations cannot be done without
knowledge of the material used for each part, because of the different materials' physical and mechanical properties. Knowing
the material selection, a cost-benefit analysis could then be conducted to determine how cost effective the product is. All of
these calculations would greatly add to the significance of the research already conducted.


REFERENCES:

Journal Papers:
[1] Martin L. Culpepper, "Design of a Hybrid Positioner-Fixture for Six-Axis Nanopositioning and Precision Fixturing", MIT Dept. of Mechanical Engineering, Massachusetts Avenue.
[2] Reid F. Allen, "Design and Optimization of a Formula SAE Race Car Chassis and Suspension", Massachusetts Institute of Technology, June 2009.
[3] Prabhat Kumar Sinha, "Analysis of Residual Stresses and Distortions in Girth-Welded Carbon Steel Pipe", International Journal of Recent Technology and Engineering (IJRTE), May 2013.
[4] Jeffery J. Madden, "Welding Fixtures and Active Position Adapting Functions", Dec 7, 2007.

Thesis:
[5] U.S. Patents Info

Websites:
[6] www.weldingpositioner.org
[7] Koike Industries official website
[8] www.adroitenterprises.com



















Blog-Spam Detection Using an Intelligent Bayesian Approach
Krushna Pandit, Savyasaachi Pandit
Assistant Professor, GCET, VVNagar, E-mail: pandit.krushna11@gmail.com, M: 9426759947
Abstract
Blog spam is one of the major problems of the Internet today. Throughout the history of the Internet, spam has been considered a huge
threat to the security and reliability of web content. Spam consists of unsolicited messages sent to fulfill the sender's purpose, to harm
the privacy of users and site owners, and/or to steal resources available over the Internet (whether or not allocated to the sender). Many
methodologies are available for dealing with spam. Nowadays blog spamming is a rising threat to the safety, reliability and purity of
published Internet content. Since search engines use specific algorithms for creating the search page index/rank for websites (e.g.
Google Analytics), spamming social media sites (SMS) has attracted much attention as a way of gaining rank and thereby increasing a
company's popularity. The available solutions for malicious-content detection are too heavyweight to be used frequently enough to
analyze all web content in a given time with the fewest possible false positives. For this purpose a site-level algorithm is needed, one
that is easy, cheap and understandable (for site modifiers) to filter and monitor the content being published. For this we use Bayes'
theorem from the statistical approach.
Keywords: Spam detection, Bayesian approach, blog spam, autonomous spam detection, text mining
INTRODUCTION
Spam is the use of electronic messaging systems to send unsolicited bulk messages, especially advertising, indiscriminately. While
the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media: instant messaging (IM)
spam, Usenet newsgroup spam, web search engine spam, spam in blogs, wiki spam, online classified ads spam, mobile phone
messaging spam, Internet forum spam, junk fax transmissions, social networking spam, social spam, etc. [3]
Spamtraps are often email addresses that were never valid, or have been invalid for a long time, that are used to collect spam. An
effective spamtrap is not announced and is only found by dictionary attacks or by pulling addresses off hidden webpages. [4] For a
spamtrap to remain effective the address must never be given to anyone. Some blacklists, such as SpamCop, use spamtraps to catch
spammers and blacklist them.
Enforcing the technical requirements of the Simple Mail Transfer Protocol (SMTP) can be used to block mail coming from systems
that are not compliant with the RFC standards. [2] A lot of spammers use poorly written software or are unable to comply with the
standards because they do not have legitimate control of the computer sending spam (a zombie computer). So by setting restrictions
on the mail transfer agent (MTA) a mail administrator can reduce spam significantly, for example by enforcing the correct fallback of
Mail eXchange (MX) records in the Domain Name System, or the correct handling of delays (teergrube).
Fig. - Current scenario of spam awareness [IDA Singapore] [12]: not aware, 26%; aware but not using, 38%; using one, 33%; using both, 3%.
Spam detection
Given that the objective of web spam is to improve the ranking of selected search results, web spamming techniques are tightly
coupled to the ranking algorithms employed (or believed to be employed) by the major search engines. As ranking algorithms evolve,
so will spamming techniques. [1] For example, if web spammers were under the impression that a search engine used click-through
information from its search result pages as a feature in its ranking algorithms, then they would have an incentive to issue queries that
bring up their target pages and to generate large numbers of clicks on those target pages. Furthermore, web spamming techniques
evolve in response to countermeasures deployed by the search engines. [6] For example, in the above scenario, a search engine might
respond to fictitious clicks by mining its query logs for many instances of identical queries from the same IP address and discounting
these queries and their result click-throughs in its ranking computation. [9] The spammer in turn might respond by varying the query
(while still recalling the desired target result), and by using a botnet (a network of third-party computers under the spammer's
control) to issue the queries and the click-throughs on the target results.
Fig. - Spam detection model (message content)
Given that web spamming techniques are constantly evolving, any taxonomy of these techniques must necessarily be ephemeral, as
will be any enumeration of spam detection heuristics. [5]
However, there are a few constants:
- Any successful web spamming technique targets one or more of the features used by the search engines' ranking algorithms.
- Web spam detection is a classification problem, and search engines use machine learning algorithms to decide whether or not a
page is spam.
- In general, spam detection heuristics look for statistical anomalies in some of the features visible to the search engines.
BLOG SPAMMING
Spam in blogs (also called simply blog spam, comment spam or social spam) is a form of spamdexing done by automatically
posting random comments or promoting commercial services on blogs, wikis, guest books or other publicly accessible
online discussion boards. [11] Any web application that accepts and displays hyperlinks submitted by visitors may be a target.
Adding links that point to the spammer's web site artificially increases the site's search engine ranking. [9] An increased ranking often
results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential
visitors and paying customers. Various solutions are available for spam detection, some of which are described below [2]:
1. General spam-avoidance approaches
Disallowing multiple consecutive submissions
It is rare on a site that a user would reply to their own comment, yet spammers typically do. Checking that a user's IP address is not
replying to a comment from the same IP address will significantly reduce flooding (a minimal check of this rule is sketched below).
This, however, proves problematic when multiple users behind the same proxy wish to comment on the same entry. Blog spam
software may get around this by faking IP addresses, posting similar blog spam using many IP addresses.
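A minimal check implementing this rule might look as follows (Python; names are illustrative):

def allow_comment(new_ip, previous_comment_ip):
    # Reject a submission when the same IP replies to its own
    # immediately preceding comment (typical flooding behaviour).
    return previous_comment_ip is None or new_ip != previous_comment_ip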
Blocking by keyword
Blocking specific words from posts is one of the simplest and most effective ways to reduce spam. Much spam can be blocked simply
by banning the names of popular pharmaceuticals and casino games. This is a good long-term solution, because it is not beneficial for
spammers to change keywords to "vi@gra" or the like: keywords must be readable and indexable by search engine bots to be
effective. [4]
nofollow
Google announced in early 2005 that hyperlinks with the rel="nofollow" attribute [4] would not be crawled or influence the link
target's ranking in the search engine's index. The Yahoo and MSN search engines also respect this tag.
Using rel="nofollow" is a much easier solution that makes the improvised techniques above irrelevant. Most weblog software now
marks reader-submitted links this way by default (with no option to disable it without code modification). More sophisticated server
software can spare the nofollow for links submitted by trusted users, such as those registered for a long time, on a whitelist, or with
high karma. Some server software adds rel="nofollow" to pages that have been recently edited but omits it from stable pages, under
the theory that stable pages will have had offending links removed by human editors.
Other websites, like Slashdot, with high user participation, use improvised nofollow implementations, adding rel="nofollow" only
for potentially misbehaving users. Potential spammers posting as users can be identified through various heuristics such as the age of
the registered account. Slashdot also uses the poster's karma as a determinant in attaching a nofollow tag to user-submitted links.
rel="nofollow" has come to be regarded as a microformat.
Validation (reverse Turing test)
A method to block automated spam comments is to require validation prior to publishing the contents of the reply form. The goal is
to verify that the form is being submitted by a real human being and not by a spam tool; it has therefore been described as a reverse
Turing test. The test should be of such a nature that a human being can easily pass it while an automated tool would most likely fail. [5]
Many forms on websites take advantage of the CAPTCHA technique, displaying a combination of numbers and letters embedded in
an image which must be entered literally into the reply form to pass the test. In order to keep out spam tools with built-in text
recognition, the characters in the images are customarily misaligned, distorted and noisy. A drawback of many older CAPTCHAs is
that passwords are usually case-sensitive while the corresponding images often don't allow a distinction between capital and small
letters; this should be taken into account when devising a list of CAPTCHAs. Such systems can also prove problematic to blind
people who rely on screen readers; some more recent systems allow for this by providing an audio version of the characters. A simple
alternative to CAPTCHAs is validation in the form of a password question, providing a hint to human visitors that the password is the
answer to a simple question like "The Earth revolves around the... [Sun]". One drawback to be taken into consideration is that any
validation required in the form of an additional form field may become a nuisance, especially to regular posters. One piece of
self-published original research noted a decrease in the number of comments once such a validation was in place.
Disallowing links in posts
There is negligible gain from spam that does not contain links, so currently all spam posts contain (an excessive number of) links. It
is therefore safe to require a Turing test only when the post contains links, letting all other posts through. While this is highly effective,
spammers do frequently send gibberish posts (such as "ajliabisadf ljibia aeriqoj") to test the spam filter. These gibberish posts will not
be labeled as spam. They do the spammer no good, but they still clog up comments sections.
2. Distributed approaches
This approach is very new to addressing link spam. One of the shortcomings of link-spam filters is that most sites receive only one
link from each domain running a spam campaign. If the spammer varies IP addresses, there is little to no distinguishable
pattern left on the vandalized site. The pattern, however, is left across the thousands of sites that were hit quickly with the same links.
A distributed approach, like the free LinkSleeve, uses XML-RPC to communicate between the various server applications (such as
blogs, guestbooks, forums and wikis) and the filter server, in this case LinkSleeve. The posted data is stripped of URLs and each URL
is checked against recently submitted URLs across the web. If a threshold is exceeded, a "reject" response is returned, thus deleting
the comment, message or posting; otherwise an "accept" message is sent. [7]
A more robust distributed approach is Akismet, which uses a similar approach to LinkSleeve but uses API keys to assign trust to
nodes and also has wider distribution as a result of being bundled with the 2.0 release of WordPress. They claim over 140,000 blogs
contributing to their system. Akismet libraries have been implemented for Java, Python, Ruby, and PHP, but its adoption may be
hindered by its commercial use restrictions. In 2008, Six Apart therefore released a beta version of their TypePad AntiSpam software,
which is compatible with Akismet but free of the latter's commercial use restrictions.
Project Honey Pot has also begun tracking comment spammers. The Project uses its vast network of thousands of traps installed in
over one hundred countries around the world in order to watch what comment spamming web robots are posting to blogs and forums.
Data is then published on the top countries for comment spamming, as well as the top keywords and URLs being promoted.
3. Application-specific anti-spam methods
Particularly popular software products such as Movable Type and MediaWiki have developed their own custom anti-spam measures,
as spammers focus more attention on targeting those platforms.[8] Whitelists and blacklists that prevent certain IPs from posting, or
that prevent people from posting content that matches certain filters, are common defenses. More advanced access control lists require
various forms of validation before users can contribute anything like link-spam. The goal in every case is to allow good users to
continue to add links to their comments, as that is considered by some to be a valuable aspect of any comments section.
PROPOSED ALGORITHM / SOLUTION:
1. Scan the entire text (including headers, HTML and JavaScript) to retrieve tokens.
2. Map the tokens into 2 hash tables (one for each corpus).
3. Count the number of times each token occurs in each corpus (ignoring case).
4. Create a 3rd hash table mapping each token to the probability that a message containing it is spam:

(let ((g (* 2 (or (gethash word good) 0)))
      (b (or (gethash word bad) 0)))
  (unless (< (+ g b) 5)
    (max .01 (min .99 (float (/ (min 1 (/ b nbad))
                                (+ (min 1 (/ g ngood))
                                   (min 1 (/ b nbad)))))))))

   - word is the token whose probability is to be calculated
   - good and bad are the hash tables created in the 1st step
   - ngood and nbad are the sizes of the two corpora
5. The 15 most interesting tokens are used to calculate the probability that the message is spam (using the combined probability).
6. Treat the mail as spam if the calculated spam probability is more than 0.9.
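For illustration, the token-probability and combination steps above can be sketched in Python as follows (a minimal sketch in the spirit of the algorithm; the corpus hash tables good/bad and the counts ngood/nbad are assumed to have been built in steps 1-3):

import math
import re

def tokenize(text):
    # Step 1: split the raw message (headers, HTML, scripts) into tokens.
    return re.findall(r"[a-z0-9'$-]+", text.lower())

def token_spam_prob(word, good, bad, ngood, nbad):
    # Step 4: probability that a message containing 'word' is spam,
    # clamped to [0.01, 0.99]; tokens seen fewer than 5 times are ignored.
    g = 2 * good.get(word, 0)   # good counts doubled to bias against false positives
    b = bad.get(word, 0)
    if g + b < 5:
        return None
    return max(0.01, min(0.99, min(1.0, b / nbad) /
                               (min(1.0, g / ngood) + min(1.0, b / nbad))))

def message_spam_prob(text, good, bad, ngood, nbad):
    # Steps 5-6: combine the 15 most "interesting" tokens (those furthest
    # from a neutral 0.5) into one overall spam probability.
    probs = [p for w in set(tokenize(text))
             if (p := token_spam_prob(w, good, bad, ngood, nbad)) is not None]
    top = sorted(probs, key=lambda p: abs(p - 0.5), reverse=True)[:15]
    prod = math.prod(top)
    comp = math.prod(1 - p for p in top)
    return prod / (prod + comp)   # treat the mail as spam if this exceeds 0.9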
RESULTS
In spam detection, two ratios are evaluated to assess the correctness of an algorithm.
False positive rate
The false positive rate is the proportion of absent events that yield positive test outcomes, i.e., the conditional probability of a positive
test result given an absent event. The false positive rate is equal to the significance level, and the specificity of the test is equal to 1
minus the false positive rate. [9]
In statistical hypothesis testing, this fraction is given the Greek letter α, and 1 − α is defined as the specificity of the test.
Increasing the specificity of the test lowers the probability of type I errors, but raises the probability of type II errors (false
negatives that reject the alternative hypothesis when it is true).
False negative rate
The false negative rate is the proportion of events being tested for which yield negative test outcomes, i.e., the
conditional probability of a negative test result given that the event being looked for has taken place. [9]
In statistical hypothesis testing, this fraction is given the letter β. The "power" (or "sensitivity") of the test is equal to 1 − β.
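In confusion-matrix terms (TP, FP, TN, FN counting true/false positives and negatives; this notation is added here for illustration), the two rates can be written as:

\mathrm{FPR} = \frac{FP}{FP + TN} = \alpha, \qquad \mathrm{specificity} = 1 - \alpha

\mathrm{FNR} = \frac{FN}{FN + TP} = \beta, \qquad \mathrm{power\ (sensitivity)} = 1 - \beta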

Checking words: false positives
People tend to be much less bothered by spam slipping through filters into their mailbox (false negatives) than by having desired
email ("ham") blocked (false positives). Balancing false negatives (missed spam) against false positives (rejected good email) is
critical for a successful anti-spam system. Some systems let individual users have some control over this balance by setting "spam
score" limits, etc. Most techniques suffer both kinds of error, to varying degrees. So, for example, anti-spam systems may use
techniques that have a high false negative rate (missing a lot of spam) in order to reduce the number of false positives (rejecting good
email).
Detecting spam based on the content of the email, either by detecting keywords such as "viagra" or by statistical means (content- or
non-content-based), is very popular. Content-based statistical means or keyword detection can be very accurate when correctly tuned
to the types of legitimate email an individual receives, but they can also make mistakes such as detecting the keyword "cialis" in the
word "specialist" (see also Internet censorship: over- and under-blocking). The content also does not determine whether the email was
unsolicited or bulk, the two key features of spam. So, if a friend sends you a joke that mentions "viagra", content filters can easily
mark it as spam even though it is neither unsolicited nor sent in bulk. Non-content-based statistical means can help lower false
positives because they look at statistical features rather than blocking on content/keywords, so you will still be able to receive the
joke from the friend that mentions "viagra".


Fig. - Algorithm accuracy (spam and ham) for different input-data sizes

Fig. - Execution time for different input-data sizes

Fig. - Spam-ham precision for various algorithms (SA-Bayes, KNN classifier, proposed classifier)

Table 1: Recall and precision for the classifier
Class    | Recall | Precision
Spam     | 78.5%  | 67.5%
Non-spam | 81.5%  | 72%
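For reference (standard definitions, with TP, FP and FN counting true positives, false positives and false negatives for each class; the notation is added here, not the paper's), the quantities in Table 1 are:

\mathrm{recall} = \frac{TP}{TP + FN}, \qquad \mathrm{precision} = \frac{TP}{TP + FP}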
RELATED WORK
The common e-mail client has the capability to sort incoming e-mail based on simple strings found in specific header fields, in the
header in general, and/or in the body. Its capability is very simple and does not even include regular-expression matching. Almost all
e-mail clients have this much filtering capability. These few simple text filters can correctly catch about 80% of spam.
Unfortunately, they also have a relatively high false positive rate. [2]

Whitelist/verification filters [7]
A fairly aggressive technique for spam filtering is what I would call the "whitelist plus automated verification" approach. There are
several tools that implement a whitelist with verification: TDMA is a popular multi-platform open source tool; ChoiceMail is a
commercial tool for Windows; most others seem more preliminary.
A whitelist filter connects to an MTA and passes on to the inbox only mail from explicitly approved senders. Other messages
generate a special challenge response to the sender. The whitelist filter's response contains some kind of unique code that identifies the
original message, such as a hash or sequential ID. This challenge message contains instructions for the sender to reply in order to be
added to the whitelist (the response message must contain the code generated by the whitelist filter). Almost all spam messages
contain forged return address information, so the challenge usually does not even arrive anywhere; but even those spammers who
provide usable return addresses are unlikely to respond to a challenge. When a legitimate sender answers a challenge, her/his address
is added to the whitelist so that any future messages from the same address are passed through automatically.
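In outline, the challenge-response bookkeeping described above might look like this (an illustrative Python sketch, not the actual TDMA or ChoiceMail implementation):

import hashlib

whitelist = set()   # approved senders, delivered directly
pending = {}        # challenge code -> sender awaiting verification

def on_message(sender, body, deliver, send_challenge):
    # Deliver mail from whitelisted senders; challenge everyone else.
    if sender in whitelist:
        deliver(body)
    else:
        code = hashlib.sha1((sender + body).encode()).hexdigest()[:10]
        pending[code] = sender
        send_challenge(sender, code)   # forged spam addresses never answer

def on_challenge_reply(code):
    # A human reply quoting the code whitelists the sender permanently.
    if code in pending:
        whitelist.add(pending.pop(code))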

Distributed adaptive blacklists
Spam is almost by definition delivered to a large number of recipients, and as a matter of practice there is little if any customization
of spam messages to individual recipients. Each recipient of a spam, however, in the absence of prior filtering, must press his own
"Delete" button to get rid of the message. Distributed blacklist filters let one user's Delete button warn millions of other users of the
spamminess of the message. [4]
Tools such as Razor and Pyzor (see Resources) operate around servers that store digests of known spam. When a message is received
by an MTA, a distributed blacklist filter is called to determine whether the message is a known spam. These tools use clever statistical
techniques for creating digests, so that spam with minor or automated mutations (or just different headers resulting from transport
routes) does not prevent recognition of message identity (a toy version of this idea is sketched below). [6] In addition, maintainers of
distributed blacklist servers frequently create "honey-pot" addresses specifically for the purpose of attracting spam (but never for any
legitimate correspondence). In my testing, I found zero false positive spam categorizations by Pyzor, and I would not expect any to
occur using other similar tools, such as Razor. There is some common sense to this: even those ill-intentioned enough to taint
legitimate messages would not have samples of my good messages to report to the servers -- it is generally only the spam messages
that are widely distributed. [5] It is conceivable that a widely sent but legitimate message, such as the developerWorks newsletter,
could be misreported, but the maintainers of distributed blacklist servers would almost certainly detect this and quickly correct such
problems.
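As a toy illustration of the digest idea (real tools such as Razor and Pyzor use far more robust statistical digests), a normalizing hash makes trivially mutated copies of a spam collide:

import hashlib
import re

def spam_digest(body):
    norm = body.lower()
    norm = re.sub(r"\d+", "#", norm)          # collapse inserted serial numbers
    norm = re.sub(r"\s+", " ", norm).strip()  # collapse whitespace/formatting noise
    return hashlib.sha256(norm.encode()).hexdigest()

# Two spams differing only in a tracking number yield the same digest:
assert spam_digest("Buy NOW! offer 12345") == spam_digest("Buy  now!\noffer 99")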

Rule-based rankings
The most popular tool for rule-based spam filtering, by a good margin, is SpamAssassin. There are other tools, but they are not as
widely used or actively maintained. SpamAssassin (and similar tools) evaluate a large number of patterns -- mostly regular
expressions -- against a candidate message. Some matched patterns add to a message score, while others subtract from it. If a
message's score exceeds a certain threshold, it is filtered as spam; otherwise it is considered legitimate.[7]
Some ranking rules are fairly constant over time -- forged headers and auto-executing JavaScript, for example, almost timelessly mark
spam. Other rules need to be updated as the products and scams advanced by spammers evolve. Herbal Viagra and heirs of African
dictators might be the rage today, but tomorrow they might be edged out by some brand new snake-oil drug or pornographic theme.
As spam evolves, SpamAssassin must evolve to keep up with it.
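The scoring mechanism can be sketched as follows (Python; the two rules and their weights are invented for illustration and are not actual SpamAssassin rules):

import re

RULES = [                                            # (pattern, score) pairs
    (re.compile(r"herbal viagra", re.I), 3.2),
    (re.compile(r"^Received: from localhost", re.M), -0.5),
]

def rule_score(message):
    # Each matched pattern adds to (or subtracts from) the message score.
    return sum(score for pattern, score in RULES if pattern.search(message))

def is_spam(message, threshold=5.0):
    # Filtered as spam when the accumulated score exceeds the threshold.
    return rule_score(message) >= threshold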

Bayesian word distribution filters
It has been suggested to build Bayesian probability models of spam and non-spam words. The general idea is that some words occur
more frequently in known spam and other words occur more frequently in legitimate messages. Using well-known mathematics, it is
possible to generate a "spam-indicative probability" for each word. Another simple mathematical formula can be used to determine
the overall "spam probability" of a novel message based on the collection of words it contains. [7]

Fig. 6 - Analysis of the available open-source methods (categories: advertising, backscatter, demographic, targeted, virus)
PROS & CONS OF THE PROPOSED ALGORITHM:
Pros:
- Ease of use (can be used as a simple web service)
- Cost-effective solution (less memory consumption with a simple structure)
- Reliable performance (no false positives)
- Better resource utilization (uses previously available content for analysis)
- Ease of tracing (for future aspects)
Cons:
- Requires hectic analysis in order to attain approximately 100% accuracy, due to the requirement for good-word and bad-word data corpora

CONCLUSION:
This approach gives a comparative reduction in the false-positive (ham as spam) and false-negative (spam as ham) ratios with respect
to a simple Bayesian classifier and a KNN classifier. It also shows continuous and linear order of growth for increasing input-data
sizes. Thus, it can be concluded that a statistical approach such as this proposed prototype is a promising and suitable solution for our
targeted domain of application.

REFERENCES:
[1] Nikita, Priyanka, Renato, "A Survey on Blog Bot Detection Techniques", IJARCSSE (International Journal of Advanced Research in Computer Science and Software Engineering), Dec 2013.
[2] Sahil Puri, Dishant Gosain, Mehak Ahuja, Ishita Kathuria, Nishtha Jatana, "Comparison and Analysis of Spam Detection Algorithms", International Journal of Application or Innovation in Engineering & Management (IJAIEM), 2013.
[3] Grabianowski, "How Spam Works", HowStuffWorks, Discovery Communications; archived from the original on September 8, 2012.
[4] "SLV: Spam Link Verification", LinkSleeve, 2012.
[5] T.A. Meyer and B. Whateley, "SpamBayes: Effective open-source, Bayesian based, email classification system".
[6] Fuchun Peng, "Augmenting Naive Bayes Classifiers with Statistical Language Models", University of Massachusetts Amherst.
[7] Enrico Blanzieri and Anton Bryl, "A survey of learning-based techniques of email spam filtering".
[8] Banit Agrawal, Nitin Kumar, and Mart Molle, "Controlling spam emails at the routers", in Proceedings of the IEEE International Conference on Communications, ICC 2005, volume 3, pages 1588-1592, 2005.
[9] P. Boykin and Vwani Roychowdhury, "Leveraging social networks to fight spam", Computer, 38(4):61-68, 2005.
[10] Enrico Blanzieri and Anton Bryl, "Evaluation of the highest probability SVM nearest neighbor classifier with variable relative error cost", in Proceedings of the Fourth Conference on Email and Anti-Spam, CEAS 2007, 5 pp., 2007.
[11] Hrishikesh Aradhye, Gregory Myers, and James Herson, "Image analysis for efficient categorization of image-based spam e-mail", in Proceedings of the Eighth International Conference on Document Analysis and Recognition, ICDAR 2005, volume 2, pages 914-918, IEEE Computer Society, 2005.
ITU, "ITU survey on anti-spam legislation worldwide", 2005.






















Optimization of Anti-Roll Bar Using Ansys Parametric Design Language (APDL)
*Mr. Pravin Bharane, **Mr. Kshitijit Tanpure, ***Mr. Ganesh Kerkal
*M.Tech Automotive Technology (COEP), **M.E. Automotive Engg. (SAE, Pune), ***M.Tech Automotive Technology (COEP)
Pravin1bharane@gmail.com, Mob. No. 8600228467

Abstract: The main goal of using an anti-roll bar is to reduce body roll. Body roll occurs when a vehicle deviates from straight-line
motion. The objective of this paper is to analyze the main geometric parameters which affect the rolling stiffness of an anti-roll bar.
By optimizing the geometric parameters, we can increase the rolling stiffness and reduce the mass of the bar. Changes in the design of
anti-roll bars are quite common at various steps of vehicle production, and a design analysis must be performed for each change. To
calculate rolling stiffness, mass, deflection and von Mises stresses, Ansys Parametric Design Language (APDL) is used. The effects
of anti-roll bar design parameters on the final anti-roll bar properties are also evaluated by performing sample analyses with the FEA
program developed in this paper.
Keywords: FEA, anti-roll bar, APDL, design parameters, bushing position, rolling stiffness, deflection
INTRODUCTION
The anti-roll bar is a rod or tube that connects the right and left suspension members. It can be used in the front suspension, the rear
suspension or both, whether the suspensions are rigid-axle type or independent type. The ends of the anti-roll bar are connected to the
suspension links, while the center of the bar is connected to the frame of the car such that it is free to rotate. The ends of the arms are
attached to the suspension as close to the wheels as possible. If both ends of the bar move equally, the bar rotates in its bushing and
provides no torsional resistance; but it resists relative movement between the bar ends, as shown in Fig. 1. The bar's torsional
stiffness, or resistance to twist, determines its ability to reduce such relative movement, and this is called the roll stiffness.

Fig.1 An anti-roll bar attached to double wishbone type suspension
Basic Properties of Anti-Roll Bars
Geometry
Packaging constraints imposed by chassis components define the path that the anti-roll bar follows across the suspension. Anti-roll
bars may have irregular shapes to get around chassis components, or may be much simpler depending on the car. Two sample anti-roll
bar geometries are shown in Fig. 2. Anti-roll bars basically have three types of cross sections: solid circular, hollow circular and solid
tapered. In recent years the use of hollow anti-roll bars has become more widespread due to the fact that the mass of a hollow bar is
lower than that of a solid bar.


Fig.2 - Sample anti-roll bar geometries
Material and Processing:
Anti-roll bars are usually manufactured from SAE Class 550 and Class 700 steels. The steels included in these classes have SAE
codes from G5160 to G6150 and G1065 to G1090, respectively. Operating stresses exceed 700 MPa for bars produced from these
materials. The use of materials with a high strength-to-density ratio, such as titanium alloys, is an increasing trend in recent years.
Connections
Anti-roll bars are connected to the other chassis components via four attachments. Two of these are the rubber bushings through which
the anti-roll bar is attached to the main frame. And the other two attachments are the fixtures between the suspension members and the
anti-roll bar ends, either through the use of short links or directly.
Bushings
There are two major types of anti-roll bar bushings classified according to the axial movement of the anti-roll bar in the bushing. In
both types, the bar is free to rotate within the bushing. In the first bushing type, the bar is also free to move along bushing axis while
the axial movement is prevented in the second type.


Fig.3 Bushing (rubber bushings and metal mounting blocks)

The bushing material is also another important parameter. The materials of bushings are commonly rubber, nylon or polyurethane, but
even metal bushings are used in some race cars [4].
The main goal of using an anti-roll bar is to reduce body roll. Body roll occurs when a vehicle deviates from straight-line motion.
The line connecting the roll centers of the front and rear suspensions forms the roll axis of the vehicle. The center of gravity of a
vehicle is normally above this roll axis. Thus, while cornering, the centrifugal force creates a roll moment about the roll axis, equal to
the product of the centrifugal force and the distance between the roll axis and the center of gravity. This moment causes the inner
suspension to extend and the outer suspension to compress; thus body roll occurs [5].
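Written out (symbols assumed here for illustration, with m the vehicle mass, v the cornering speed, R the corner radius and h the distance from the roll axis to the center of gravity), the roll moment is:

M_{roll} = F_c \, h = \frac{m v^2}{R} \, h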

LITERATURE REVIEW:
[1] Kelvin Hubert, Spartan Chassis, et al. studied and explained that anti-roll bars are usually manufactured from SAE Class 550 and
Class 700 steels. The steels included in these classes have SAE codes from G5160 to G6150 and G1065 to G1090, respectively.
Operating stresses exceed 700 MPa for bars produced from these materials.
[2] Mohammad Durali and Ali Reza Kassaiezadeh studied and proposed that the main goal of using an anti-roll bar is to reduce body
roll. Body roll occurs when a vehicle deviates from straight-line motion. The line connecting the roll centers of the front and rear
suspensions forms the roll axis of the vehicle. The center of gravity of a vehicle is normally above this roll axis. Thus, while
cornering, the centrifugal force creates a roll moment about the roll axis, which is equal to the product of the centrifugal force and the
distance between the roll axis and the center of gravity.

[3] J. E. Shigley and C. R. Mischke explained that this moment causes the inner suspension to extend and the outer suspension to
compress, and thus body roll occurs. Body roll is actually an unwanted motion. The first reason for this is the fact that too much roll
disturbs the driver and gives a feeling of roll-over risk, even in safe cornering. The second reason is its effect on the camber angle of
the tires. The purpose of the camber angle is to align the wheel load with the point of contact of the tire on the road surface. When the
camber angle changes due to body roll, this alignment is lost and the tire contact patch gets smaller.

MATHEMATICAL MODELING OF THE ANTI-ROLL BAR
The Society of Automotive Engineers (SAE) presents general information about torsion bars and their manufacturing processes in
the Spring Design Manual, where anti-roll bars are dealt with as a sub-group of torsion bars. Some useful formulas for calculating the
roll stiffness of anti-roll bars and the deflection at the end point of the bar under a given loading are provided in the manual. However,
the formulations can only be applied to bars with standard shapes (simple, torsion-bar-shaped anti-roll bars) [6]. The applicable
geometry is shown in Fig. 4.
Fig. 4 - Anti-roll bar geometry used in the SAE Spring Design Manual
The loading is applied at point A, inward to or outward from the plane of the page. The roll stiffness of such a bar can be calculated from:

L = a + b + c    ... (1)    (L: half track length)

f_A = \frac{F}{3EI}\left[\, l_1^3 - a^3 + \frac{l}{2}(a+b)^2 + 4c^2(b+c) \,\right]    ... (2)    (f_A: deflection of point A; l_1, a, b, c and l are the dimensions of Fig. 4)

K_R = \frac{F L^2}{2 f_A}    ... (3)    (K_R: roll stiffness of the bar)

\tau_{max} = \frac{T r}{J}    ... (4)    (maximum shear stress, with T the applied torque, r the outer radius and J the polar moment of area of the cross section)
Analysis
1. Define element types, element real constants and material properties.
2. Model the anti-roll bar.
3. Apply boundary conditions and loads.
Displacement constraints exist at two locations: at the bar ends and at the bushing locations. The Ux and Uz degrees of freedom are
constrained at the bar ends for spherical joints; the ROTy and ROTz degrees of freedom are also constrained if pin joints are used. At
the bushing locations, the free ends of the springs are constrained in all of Ux, Uy and Uz (these elements have no rotational degrees
of freedom). The other ends of the springs, attached to the beam, are constrained according to the type of bushing: the Ux degree of
freedom is constrained for the second bushing type, which does not allow bar movement along the bushing axis. The loading for the
first load step - determination of roll stiffness - is a known force F applied to the bar ends, in the +y direction at one end and in the -y
direction at the other end, as shown in Fig. 5.

Fig.5- Load step
4. Solution & Post-processing

PROGRAM FOR STRUCTURAL ANALYSIS AND OPTIMIZATION OF THE ANTI-ROLL BAR
(APDL commands, with the original command descriptions given as ! comments)

fini
/clear
/filname,Optimisation
/title,Structural Analysis and Optimisation of Anti-Roll Bar
C*** processing
/prep7

! *ASK - prompt for an input value
*ask,Outer_dia,Outer diameter of bar,21.8
*ask,Inner_dia,Internal diameter of bar,16
*ask,Length,Total length of the bar,1100
*ask,Width,Total width of the bar,230
*ask,r,Fillet radius,50
*ask,Bush_Pos,Position of bush,390
*ask,Bush_L,Length of bush,40
*ask,Load,Vertical load on bar,1000

! K,NPT,X,Y,Z - defines a keypoint
k,1,0,0,0
k,2,(Length/2-Outer_dia/2),0,0
k,3,(Length/2-Outer_dia/2),0,Width
k,4,-(Length/2-Outer_dia/2),0,0
k,5,-(Length/2-Outer_dia/2),0,Width
k,6,(Bush_Pos-Bush_L/2),0,0
k,7,(Bush_Pos+Bush_L/2),0,0
k,8,-(Bush_Pos-Bush_L/2),0,0
k,9,-(Bush_Pos+Bush_L/2),0,0

! L,P1,P2 - defines a line between two keypoints
l,1,6
l,6,7
l,7,2
l,2,3
l,1,8
l,8,9
l,9,4
l,4,5

! /PNUM,Label,KEY - controls entity numbering/coloring on plots
/pnum,line,1
/pnum,kp,1

! LPLOT,NL1,NL2,NINC - displays the selected lines
lplot

! LFILLT,NL1,NL2,RAD,PCENT - generates a fillet line between two intersecting lines
lfillt,3,4,r
lfillt,7,8,r

Results obtained from APDL:
- Max. equivalent (von Mises) stress = 332.307 MPa
- Max. principal stress = 351.409 MPa
- Roll stiffness = 408.62 N.m/deg
- Deflection = 25.86 mm
- Mass = 1.85 kg


Fig.6- Equivalent Von Mises Stress Distribution on the Bar

The program continues:

k,14,5,0,0
k,15,0,0,-5

! CIRCLE,PCENT,RAD,PAXIS,PZERO,ARC - generates circular arc lines
circle,1,Outer_dia/2,14,15
al,11,12,13,14
circle,1,Inner_dia/2,14,15
al,15,16,17,18

! APLOT,NA1,NA2,NINC,DEGEN,SCALE - displays the selected areas
/pnum,area,1
aplot

! ASBA,NA1,NA2,SEPO,KEEP1,KEEP2 - subtracts areas from areas
asba,1,2,,,2

! LSEL,Type,Item,Comp,VMIN,VMAX,VINC,KSWP - selects a subset of lines
lsel,s,,,11,18,1
lplot

! LESIZE,NL1,SIZE,ANGSIZ,NDIV,SPACE - specifies the divisions on unmeshed lines
lesize,all,,,5

Fig.7- Principal Stress Distribution on the Bar

Fig.8- Deflection of Bar

SAMPLE HAND CALCULATIONS FOR THE ANTI-ROLL BAR
Sample calculations are done considering the input parameters given below. The design parameters are assigned as follows:
Cross-section type = hollow
Outer radius = 10.9 mm
Inner radius = 8 mm
Bushing type = 1 (x movement free)
Bushing position = 400 mm
Bushing length = 40 mm
Bushing stiffness = 1500 N/mm
End connection type = 1 (spherical joint)
Bar material = SAE 5160
E = 206000 N/mm², ν = 0.27, Syt = 1200 MPa, Sut = 1400 MPa, ρ = 7800 kg/m³

The automated design software gives the end deflection of the anti-roll bar under a load of 1000 N as:

Deflection: f_A = 25.86 mm
Rolling stiffness: K_R = 408.62 N.m/deg

According to the SAE formulations, the deflection and roll stiffness can be calculated as:

f_A = \frac{1000 \left[\, 250^3 - 90^3 + 550 \times 160^2 + 4 \times 230^2 \times 460 \,\right]}{3 \times 206000 \times \frac{\pi}{64}\left(21.8^4 - 16^4\right)} = 25.96 \text{ mm}

K_R = \frac{1000 \times 1100^2}{2 \times 25.96} = 23296 \text{ N·m/rad} = 406.59 \text{ N·m/deg}

There is a 0.5 % difference between the mathematical and simulation results.
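The hand calculation is easy to verify numerically; the short Python sketch below reproduces both values (variable names are illustrative, not the paper's):

import math

F = 1000.0              # end load, N
L = 1100.0              # track length used in eq. (3), mm
do, di = 21.8, 16.0     # outer / inner diameter, mm
E = 206000.0            # Young's modulus, N/mm^2

I = math.pi * (do**4 - di**4) / 64.0   # second moment of area, mm^4

# Deflection at the bar end, eq. (2), with the paper's geometry terms:
f_A = F * (250**3 - 90**3 + 550 * 160**2 + 4 * 230**2 * 460) / (3 * E * I)

# Roll stiffness, eq. (3), converted from N.mm/rad to N.m/rad, then to N.m/deg:
K_R = F * L**2 / (2 * f_A) / 1000.0
print("f_A = %.2f mm" % f_A)                         # ~25.97 mm
print("K_R = %.1f N.m/deg" % (K_R * math.pi / 180))  # ~406.6 N.m/deg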
OPTIMIZATION OF THE ANTI-ROLL BAR
The main goal of using an anti-roll bar is to reduce body roll; for that purpose we need to increase the roll stiffness of the anti-roll bar
while also reducing its weight. Optimization is necessary for improving anti-roll bar performance [7]. Optimization is done by a
trial-and-error method over the 20 results below.
Table 1: Results obtained for the optimized anti-roll bar

O.D. (mm) | I.D. (mm) | Bushing position (mm) | Deflection (mm) | Max von Mises stress (N/mm²) | Max principal stress (N/mm²) | Rolling stiffness (N.m/deg) | Mass of the bar (kg)
21.8 | 16   | 300 | 27.814 | 389.636 | 351.417 | 379.96 | 1.85
21.8 | 16   | 350 | 26.562 | 345.562 | 351.409 | 397.84 | 1.85
21.8 | 16   | 390 | 25.964 | 332.270 | 351.404 | 406.98 | 1.85
21.8 | 16   | 400 | 25.864 | 332.307 | 351.393 | 408.56 | 1.86
21.8 | 16   | 420 | 25.694 | 344.59  | 351.558 | 411.26 | 1.85
20   | 16   | 300 | 47.575 | 609.889 | 588.642 | 222.50 | 1.217
20   | 16   | 350 | 45.43  | 540.162 | 588.578 | 232.96 | 1.217
20   | 16   | 390 | 44.414 | 525.67  | 588.63  | 238.26 | 1.217
20   | 16   | 420 | 43.945 | 536.015 | 588.60  | 240.79 | 1.217
20   | 16.5 | 300 | 52.58  | 671.775 | 682.968 | 201.43 | 1.080
20   | 16.5 | 350 | 50.23  | 607.366 | 682.79  | 210.80 | 1.080
20   | 16.5 | 390 | 49.104 | 609.43  | 682.77  | 215.60 | 1.080
20   | 16.5 | 420 | 48.58  | 610.079 | 682.98  | 217.92 | 1.080
21   | 16.5 | 300 | 37.26  | 501.794 | 480.978 | 283.83 | 1.426
21   | 17   | 350 | 38.77  | 489.016 | 546.56  | 272.80 | 1.285
21   | 17   | 390 | 37.90  | 485.247 | 546.58  | 279.04 | 1.285
21   | 16.5 | 420 | 34.426 | 445.63  | 481.09  | 307.12 | 1.426
20   | 15   | 350 | 38.996 | 465.494 | 469.837 | 271.23 | 1.18
20   | 15   | 400 | 37.968 | 439.45  | 469.776 | 278.55 | 1.18
20   | 17   | 390 | 55.55  | 761.535 | 819.592 | 190.72 | 0.75

ACKNOWLEDGEMENT
I would like to express my heartfelt gratitude to our department and college, Dnyanganga College of Engineering & Research, for
giving me the opportunity to publish this research paper. I would like to express my gratitude to Prof. B. D. Aldar (Head of
Department, Mechanical Engineering, DCOER) for giving me permission to commence this thesis and to do the necessary study, and
for his invaluable guidance throughout the study and support in completing the project. I also take this opportunity to show my
appreciation to all the teaching and non-teaching staff, family members and friends for their support.
CONCLUSION
- Vehicle performance is strongly affected by a tuned anti-roll bar, obtained by changing the parameters of the bar.
- The time required for analysis of the anti-roll bar using APDL (Ansys Parametric Design Language) is very short, and the
analysis can be repeated simply after changing any of the input parameters, which provides an easy way to find an optimum
solution for the anti-roll bar design.
- The most obvious effect of using a hollow section is the reduction in the mass of the bar.
- Locating the bushings closer to the centre of the bar increases the stresses at the bushing locations, which decreases the roll
stiffness of the bar and increases the maximum von Mises stresses.
- Increasing the bushing stiffness of the anti-roll bar increases the roll stiffness, but also increases the stresses induced in the
bar.


REFERENCES
[1] Kelvin Hubert, Spartan Chassis, "Anti-Roll Stability Suspension Technology," SAE Paper 2005-01-3522, 2005.
[2] Mohammad Durali and Ali Reza Kassaiezadeh, "Design and Software Base Modeling of Anti-Roll System," SAE Paper 2002-01-2217, 2002.
[3] J. E. Shigley and C. R. Mischke, Mechanical Engineering Design, 5th Ed., McGraw-Hill, pp. 282-289, 1989.
[4] R. Somnay and J. Shih, "Product Development Support with Integral Simulation Modeling," SAE Technical Paper 1999-01-2812, 1999.
[5] N. B. Gummadi and H. Cai, "Bushing Characteristics of Stabilizer Bars," SAE Paper 2003-01-0239, 2003.
[6] SAE Spring Committee, Spring Design Manual, 2nd Ed., SAE, pp. 215-267, 1996.
[7] M. Murat Topaç, H. Eren Enginar, and N. Sefa Kuralay, "Reduction of stress concentration at the corner bends of the anti-roll bar by using parametric optimisation," Mathematical and Computational Applications, Vol. 16, No. 1, pp. 148-158, 2011.
[8] D. Danesin, P. Krief, A. Sorniotti, and M. Velardocchia, "Active roll control to increase handling and comfort," SAE Technical Paper 2003-01-0962, Society of Automotive Engineers, Warrendale, 2003.
[9] A. Carpinteri and A. Spagnoli, "Multiaxial high-cycle fatigue criterion for hard metals," International Journal of Fatigue, vol. 23, pp. 135-145, 2001.
[10] J. Darling and L. R. Hickson, "An experimental study of a prototype active anti-roll suspension system," Vehicle System Dynamics, vol. 29, no. 5, pp. 309-329, 1998.
[11] P. Haupt, Continuum Mechanics and Theory of Materials, Springer-Verlag, 2002.
[12] G. Sines and J. L. Waisman (Eds.), "Behavior of metals under complex stresses," in Metal Fatigue, New York: McGraw-Hill, 1959.
[13] ANSYS Help, Version 12.0.
An Enhanced Data Collection for Wireless Sensor Network Using Topology
Routing Tree
I V. Priya, II S. Nivas
I Research Scholar, Bharathiar University, Coimbatore
II Assistant Professor, CS
I, II Dept. of Computer Science, Maharaja Co-Education College of Arts and Science, Perundurai, Erode 638052
I Email id: vpriyainfotech@gmail.com, Contact No: 8344412152
II Email id: nivasmaharaja@gmail.com, Contact No: 9894376769
Abstract: This paper describes the basic ideas behind different methods of data collection in WSNs. In many applications of wireless sensor networks, approximate data collection is a wise choice due to the constraints on communication bandwidth and energy budget. Many existing techniques can cope with issues such as energy consumption, packet collision, retransmission, and delay. For quick data collection, the schemes need to be scheduled effectively. One good technique is BFS, which provides periodic query scheduling for data aggregation with minimum delay under various wireless interference models; given a set of periodic aggregation queries, each query has its own period pi and a subset of source nodes Si containing the data. Time scheduling on a single frequency channel, with the aim of minimizing the number of time slots required (the schedule length) to complete a convergecast, is considered. Next, scheduling is combined with transmission power control to mitigate the effects of interference, and it is shown that while power control helps in reducing the schedule length under a single frequency, scheduling transmissions using multiple frequencies is more efficient. Lower bounds on the schedule length when interference is completely eliminated are given, and algorithms that achieve these bounds are proposed. The data collection rate then no longer remains limited by interference but by the topology of the routing tree.
Keywords: Wireless sensor network, data collection, energy, aggregation, scheduling, transmission.
INTRODUCTION
Wireless sensor networks have recently come into prominence because they hold the potential to revolutionize many
segments of our economy and life, from environmental monitoring and conservation, to manufacturing and business asset
management, to automation in the transportation and health care industries. The design, implementation, and operation of a sensor
network requires the confluence of many disciplines, including signal processing, networking and protocols, embedded systems,
information management and distributed algorithms. Such networks are often deployed in resource-constrained environments, for
instance with battery operated nodes running un-tethered.
These constraints dictate that sensor network problems are best approached in a holistic manner, by jointly considering the physical, networking, and application layers and making major design tradeoffs across the layers. Advances in wireless networking,
micro-fabrication and integration (for examples, sensors and actuators manufactured using micro-electromechanical system
technology, or MEMS), and embedded microprocessors have enabled a new generation of massive-scale sensor networks suitable for
a range of commercial and military applications.
The technology promises to revolutionize the way we live, work, and interact with the physical environment. In a typical
sensor network, each sensor node operates
un-tethered and has a microprocessor and a small amount of memory for signal processing and task scheduling. Each node is equipped with one or more sensing devices such as acoustic microphone arrays, video or still cameras, infrared (IR), seismic, or magnetic sensors. Each sensor node communicates wirelessly with a few other local nodes within its radio communication range.
Fig 1.1 Sensor Network
Sensor networks extend the existing Internet deep into the physical environment. The resulting new network is orders of
magnitude more expansive and dynamic than the current TCP/IP networks and is creating entirely new types of traffic that are quite
different from what one finds on the Internet now. Information collected by and transmitted on a sensor network describes conditions
of physical environments for example, temperature, humidity, or vibration and requires advanced query interfaces and search engines
to effectively support user-level functions.
Sensor networks may inter-network with an IP core network via a number of gateways. A gateway routes user queries or
commands to appropriate nodes in a sensor network. It also routes sensor data, at times aggregated and summarized, to users who have
requested it or are expected to utilize the information. A data repository or storage service may be present at the gateway, in addition
to data logging at each sensor. The repository may serve as an intermediary between users and sensors, providing a persistent data
storage. It is well known that communicating one bit over the wireless medium at short range consumes far more energy than processing that bit.
Wireless sensor networks are a trend of the past few years, and they involve deploying a large number of small nodes. The
nodes then sense environmental changes and report them to other nodes over flexible network architecture. Sensor nodes are great for
deployment in hostile environments or over large geographical areas. The sensor nodes leverage the strength of collaborative effort to provide higher-quality sensing in time and space than traditional stationary sensors, which are deployed in one of the following two ways:
- Sensors can be positioned far from the actual phenomenon, i.e., something known by sense perception. In this approach, large sensors that use complex techniques to distinguish the targets from environmental noise are required.
- Several sensors that perform only sensing can be deployed. The positions of the sensors and the communication topology are carefully engineered. They transmit time series of the sensed phenomenon to central nodes, where computations are performed and the data are fused.
1.2 Wireless Sensor Network vs. Ad hoc Network
A mobile ad hoc network (MANET), sometimes called a mobile mesh network, is a self-configuring network of mobile devices connected by wireless links. Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. The differences between wireless sensor networks and ad hoc networks are outlined below:
- The number of sensor nodes in a sensor network can be several orders of magnitude higher than the number of nodes in an ad hoc network.
- Sensor nodes are densely deployed.
- Sensor nodes are prone to failures.
- The topology of a sensor network changes very frequently.
- Sensor nodes mainly use a broadcast communication paradigm, whereas most ad hoc networks are based on point-to-point communication.
- Sensor nodes are limited in power, computational capacity, and memory.
- Sensor nodes may not have global identification (ID) because of the large overhead and the large number of sensors.
- Sensor networks are deployed with a specific sensing application in mind, whereas ad hoc networks are mostly constructed for communication purposes.
1.3 Need For The System: Approximate Data Collection
Approximate data collection is a wise choice for long-term data collection in WSNs with constrained bandwidth. In many practical application scenarios with densely deployed sensor nodes, the gathered sensor data usually have inherent spatial-temporal correlations. For example, Fig. 1 shows the temperature readings of five nearby sensor nodes deployed in a garden over more than 10 hours at night. The temperature readings recorded by the five nodes keep decreasing in the first 4 hours and then become stable in the next 6 hours, which exhibits apparent spatial and temporal correlations among the readings.
By exploiting such correlations, the sensor data can be collected in a compressive manner within prespecified, application-dependent error bounds. The data traffic can be reduced at the expense of data accuracy. The granularity provided by such approximate data collection is more than sufficient, especially considering the low measuring accuracy of the sensors on the sensor nodes. The study of approximate data collection is thus motivated by the need for long-term operation of large-scale WSNs, e.g., the GreenOrbs project.
Fig 1.2 Data Aggregation
Fig 1.3 Cluster Formations

There are several factors to be considered in the design of an approach for approximate data collection. First, the data collection approach should be scalable. In many real applications, sensor networks consist of hundreds or even thousands of sensor nodes. For example, GreenOrbs has deployed 330 nodes and expects to deploy 1,000 sensor nodes in a network.
In practice, in large WSNs, the information exchange between the sink and the related sensor nodes may consume considerable bandwidth, and acquiring the complete sensor data set of a WSN is too costly to be practical. Second, in approximate data collection, the spatial-temporal correlation model used for data suppression should be lightweight and efficient, so as to meet the constraints on the sensor nodes' memory and computation capacity. For densely deployed WSNs, many models can be used to describe the temporal and/or spatial correlation of sensor data, but it is often nontrivial to build a lightweight correlation model that suppresses spatial-temporal redundancy simultaneously. Most of the existing models are too expensive, i.e., they consume a large amount of computing or storage capacity, to run on existing sensor nodes. Some are too simple to contain enough information and ignore the trend of sensor readings, or consider only temporal or only spatial correlation. The approach in this thesis shows that simplicity and efficiency can be achieved by exploiting implicit sensor node cooperation and carefully distributing data processing tasks to sensor nodes. Third, the data collection scheme should be self-adaptive to environmental changes. Note that physical environmental changes are usually complex and hard to model comprehensively with a simple estimation model. For long-term data collection, the approximate data collection scheme should be capable of automatically adjusting its parameters according to environmental changes so as to guarantee its correctness.

Fig 1.4 Data Collection
In this thesis, by leveraging the inherent spatial-temporal correlation in sensor data, an efficient approach is proposed for approximate data collection in WSNs that simultaneously achieves low communication cost and guaranteed data quality (namely, bounded data errors). This approach, Approximate Data Collection (ADC), is designed to satisfy the above criteria. ADC achieves low communication cost by exploiting the fact that physical environments generally exhibit a predictable stable state and strong temporal and spatial correlations, which can be used to infer the readings of sensors. Both the scalability and the simplicity of ADC are achieved by exploiting implicit cooperation and distributing data processing among the sensor nodes. ADC can discover local data correlations and suppress the spatial redundancy of sensor data in a distributed fashion. The distributed discovery of spatial data correlations and suppression of spatial redundancy are achieved by dividing a WSN into several clusters.

Fig 1.5 Cluster Members
The sink can estimate the sensor readings according to the model parameters updated by the cluster heads. This distributed data-processing scheme allows ADC to be applied easily to WSNs of different scales: as the sensor network grows, ADC only needs to increase the number of clusters.
1.4 Scope Of Research Work
Furthermore, by using the clustering-based distributed data-processing scheme, sensor data can be processed locally in ADC. First, each sensor node is responsible for processing the sensor readings it generates itself. Second, the spatial redundancy of the sensor data is suppressed by cluster heads that are close to the data source. There is no explicit control-data exchange between the sensor nodes and their cluster heads. The data-processing cost is distributed over all sensor nodes, and the processing burden of each cluster head can easily be controlled by adjusting the cluster size.
II. Problem Formulation
The problem considered is minimizing the schedule length for raw-data convergecast on a single channel. Convergecast, namely the collection of data from a set of sensors toward a common sink over a tree-based routing topology, is a fundamental operation in wireless sensor networks (WSNs). In many applications, it is crucial to provide a guarantee on the delivery time as well as to increase the rate of such data collection. For instance, in safety- and mission-critical applications where sensor nodes are deployed to detect oil/gas leaks or structural damage, the actuators and controllers need to receive data from all the sensors within a specific deadline, failing which unpredictable and catastrophic events might follow. This falls under the category of one-shot data collection. On the other hand, applications such as permafrost monitoring require periodic and fast data delivery over long periods of time, which falls under the category of continuous data collection.
For periodic traffic, it is well known that contention-free medium access control (MAC) protocols such as TDMA (Time Division Multiple Access) are a better fit for fast data collection, since they can eliminate collisions and retransmissions and provide a
guarantee on the completion time, as opposed to contention-based protocols. However, the problem of constructing conflict-free (interference-free) TDMA schedules has been proved to be NP-complete even under the simple graph-based interference model. In this project, a TDMA framework is considered and polynomial-time heuristics are designed to minimize the schedule length for both types of convergecast.
Lower bounds on the achievable schedule lengths are also derived, and the performance of the heuristics is compared with these bounds. The problem of joint scheduling and transmission power control for constant and uniform traffic demands is addressed by the aggregated convergecast and one-shot raw-data convergecast algorithms.
III. Objectives Of The Research
The main objective of this research work is to collect information from a wireless sensor network organized as a tree. To address this, a number of different techniques are evaluated using realistic simulation models under the many-to-one communication paradigm known as convergecast. Time scheduling on a single frequency channel, with the aim of minimizing the number of time slots required (the schedule length) to complete a convergecast, is considered. Next, scheduling is combined with transmission power control to mitigate the effects of interference, and it is shown that while power control helps in reducing the schedule length under a single frequency, scheduling transmissions using multiple frequencies is more efficient.
The impact of transmission power control and multiple frequency channels on the schedule length is investigated, and constant-factor and logarithmic approximation algorithms on geometric networks (disk graphs) are proposed. For raw-data convergecast, a distributed time-slot assignment scheme is proposed to minimize the TDMA schedule length on a single channel. The project also compares the efficiency of different channel assignment methods and interference models, and proposes schemes for constructing specific routing-tree topologies that enhance the data collection rate for both aggregated and raw-data convergecast.
IV. Related work
Adaptive Approximate Data Collection
Since sensor readings change slowly according to the change of physical phenomena, the adaptive data approximation algorithm should adapt to changes in the sensor readings in a timely manner. The proposed data approximation algorithm consists of two parts: a data approximation learning algorithm and a data approximation monitoring algorithm (for the cluster heads and the sink node).
The data approximation learning algorithm runs on every cluster head and is responsible for finding an A-loss approximation of the true sensor data of each cluster. The data approximation monitoring algorithm consists of two parts. One part runs on every cluster
head continuously; it monitors changes in the parameters of the local estimation and decides whether to send an update message to the sink node. The other part, which runs on the sink node, is responsible for updating the A-loss approximation according to the update messages from each cluster head.
(i).Routing Tree
The sink node is treated as the root node. All other nodes neighboring the sink behave as intermediate nodes, and the nodes responsible for collecting the data behave as leaf nodes. The intermediate nodes aggregate the data received from the leaf nodes and send it to the sink node.
(ii). The Data Approximation Learning Algorithm
The data approximation learning algorithm guarantees that the predictor set SS stored in the sink node is an A-loss approximation of the true sensor data at all times. Each cluster head starts the data approximation monitoring algorithm after the data approximation learning algorithm. The monitoring algorithm updates all local estimation data according to the received local estimation update messages and checks the estimation error of each O-similarity set every T seconds. Since each sensor node requires WS·T seconds to check the correctness of its local estimation model, the estimation error check is delayed by WS·T seconds. If the radius of any O-similar set exceeds A, the cluster head adjusts its local O-similarity sets and sends the changes to the sink node. The sink node updates SS according to the update messages from the cluster heads.
The Data Approximation Learning Algorithm
1: Generate correlation graph Gi(V, E, t)
2: i := 0;
3: while |V| > 0 do
4:   v := FindLargestOutDegree(V);
5:   w[i].representation_node := v;
6:   w[i].similarity_set := AllNeighbor(v);
7:   V -= {v};
8:   V -= w[i].similarity_set;
9:   i++;
10: end while
11: return w;
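For illustration, the following is a minimal Python sketch of the learning algorithm above: it repeatedly picks the node with the largest out-degree in the correlation graph as a representation node and groups its neighbors into one similarity set. The dict-of-sets graph representation and the function name are assumptions for this sketch, not part of the original algorithm.

# Greedy construction of similarity sets from a correlation graph,
# given as a dict mapping each node to the set of correlated neighbors.
def build_similarity_sets(graph):
    remaining = set(graph)
    sets = []
    while remaining:
        # FindLargestOutDegree(V): node with most still-unassigned neighbors
        v = max(remaining, key=lambda n: len(graph[n] & remaining))
        members = graph[v] & remaining        # AllNeighbor(v), restricted to V
        sets.append({"representation_node": v, "similarity_set": members})
        remaining -= {v} | members            # V -= {v}; V -= similarity set
    return sets

# Example: two groups of pairwise-correlated nodes
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
print(build_similarity_sets(g))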
(iii). Monitoring Algorithm for the Cluster Heads
The details of the data approximation monitoring algorithm for cluster heads are shown in the algorithm below. The algorithm first updates all local estimations according to the local estimation update messages M received in the last T seconds (line 1). Lines 2-12 search each O-similar set, find all sensor nodes that are no longer O-similar to their representation nodes, and add them to the node list CC. All empty O-similar sets are removed (lines 9-10).
Each sensor node in CC tries to find an O-similar set to join by invoking the procedure Join() (line 14). If there is no such set, a new O-similar set is created for this node by invoking the procedure CreateNewSet() (line 16). Line 20 sends the update messages to the sink node.
The data approximation monitoring algorithm only requires two kinds of update messages: the O-similar set creating message and the O-similar set updating message. The former creates a new O-similar set at the sink node, while the latter is used to update the predictor of an O-similar set or to add new sensor nodes into it. Note that explicitly sending a message for removing a sensor node from an O-similar set is not necessary, because no sensor node belongs to two or more O-similar sets simultaneously: adding a node into an O-similar set means removing it from another one.
Monitoring Algorithm for the Cluster Heads
1: UpdateMessagePrc(M);
2: for all W ∈ G do
3:   for all s ∈ W do
4:     if D(s, W, t) > A then
5:       CC = CC ∪ {s};
6:       W -= {s};
7:     end if
8:   end for
9:   if W = ∅ then
10:    G -= W;
11:  end if
12: end for
13: for all s ∈ CC do
14:   flag = Join(s);
15:   if flag == 0 then
16:     W = CreateNewSet(s);
17:     G = G ∪ {W};
18:   end if
19: end for
20: SendUpdatemsg();
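A rough Python sketch of this monitoring loop is given below. The deviation function D(s, W, t) and the threshold A come from the text; the dictionary-based set representation and helper names are illustrative assumptions.

# Cluster-head monitoring step: evict drifted nodes, then re-place them.
def monitor_cluster_head(sets, D, A, t):
    changed = []                                 # node list CC
    for W in list(sets):
        for s in list(W["members"]):
            if D(s, W, t) > A:                   # s no longer similar to its set
                changed.append(s)
                W["members"].remove(s)
        if not W["members"]:                     # remove empty similarity sets
            sets.remove(W)
    updates = []
    for s in changed:
        # Join(): first set whose predictor still covers s within A
        target = next((W for W in sets if D(s, W, t) <= A), None)
        if target is None:                       # CreateNewSet(): s starts its own set
            target = {"representation_node": s, "members": set()}
            sets.append(target)
        target["members"].add(s)
        updates.append((s, target["representation_node"]))
    return updates                               # SendUpdatemsg() payload for the sink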
(iv). Monitoring Algorithm For The Sink Node
The details of the data approximation monitoring algorithm for the sink node are shown in the algorithm below. After receiving an update message M, the sink node first checks its message type. If it is an O-similar set creating message, the sink first removes all the nodes contained in M from the currently existing O-similar sets (line 2), then creates a new O-similar set and adds all the nodes contained in M into it (line 3). If M is an O-similar set updating message, the sink node first removes all the nodes contained in M from the currently existing O-similar sets (line 7), then updates the predictor of the specified O-similar set or adds all the nodes contained in M into the specified O-similar set (line 8). Finally, all empty sets are removed (lines 10-14).

Monitoring Algorithm for the Sink Node
1: if msgtype is O-similar set creating message then
2:   Remove(M);
3:   W = CreateNewSet(M);
4:   G = G ∪ {W};
5: end if
6: if msgtype is O-similar set updating message then
7:   Remove(M);
8:   SetUpdate(M);
9: end if
10: for all W ∈ G do
11:   if W = ∅ then
12:     G -= W;
13:   end if
14: end for
(v).Transmission Plan
For each query Qi with a routing tree Ti, during each period, first each leaf node in Ti adds its source data to its transmission plan. Then, every internal node in Ti (referred to as a relay node for query Qi) generates only one unit of data by aggregating all received data with its own data (if any), although it may receive multiple data units from its children.
(vi).Packet Scheduling
Packet scheduling is performed at each node that contains data units in its transmission plan. The nodes are divided into two complementary groups: leaf nodes and intermediate nodes. The scheme ensures that all leaf nodes transmit in even time slots only, and all intermediate nodes transmit in odd time slots only.
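The even/odd rule can be illustrated with a toy Python helper (the function name and interface are assumptions): restricting leaves to even slots and intermediates to odd ones guarantees that a child and its parent never transmit in the same slot.

def next_slot(node_is_leaf, t):
    """Return the first slot >= t in which this node may transmit."""
    want_even = node_is_leaf                    # leaves: even; intermediates: odd
    return t if (t % 2 == 0) == want_even else t + 1

assert next_slot(node_is_leaf=True, t=4) == 4   # leaf: even slot
assert next_slot(node_is_leaf=True, t=5) == 6
assert next_slot(node_is_leaf=False, t=4) == 5  # intermediate: odd slot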
(vii).Aggregation Degree Setting
The aggregation degree (the number of packets that can be aggregated; the data can be summed, or the maximum, minimum, or average taken) is set at each node.
(viii).Aggregation and Transmission Based On Degree
Data is aggregated based on the degree, and transmission occurs according to the degree.
Experimental Results
The following Table 2.1 gives the experimental results for the proposed system's performance-rate analysis. The table contains the number of clusters, the per-cluster aggregated data, the total number of aggregated data, and the average aggregated data.

S.No  Number of Clusters      Cluster A  Cluster B  Cluster C  Cluster D  Cluster E  Cluster F
1     2 Clusters              56         89         65         67         67         67
2     3 Clusters              67         89         78         89         89         89
3     4 Clusters              78         68         89         56         56         89
4     5 Clusters              89         56         67         67         68         67
5     6 Clusters              56         67         56         56         67         56
6     7 Clusters              67         72         67         67         67         58
7     8 Clusters              78         89         89         89         89         67
8     9 Clusters              89         67         67         66         76         76
No. of Aggregated Data        580        597        578        557        579        569
Avg %                         72.5       74.625     72.25      69.625     72.375     71.125

Table 2.1 Cluster Size: Proposed System and Performance Rate
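Reading the summary rows of Table 2.1: for each cluster column, "No. of Aggregated Data" appears to be the sum of the eight run values and "Avg %" that sum divided by 8. For Cluster A, for example, 56 + 67 + 78 + 89 + 56 + 67 + 78 + 89 = 580, and 580/8 = 72.5%, matching the tabulated values.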

The following Table 2.2 summarizes the overall experimental results for the proposed system: for each aggregated cluster, the number of aggregated data and the average aggregated data are shown.

Aggregated Cluster   No. of Aggregated Data   Avg % Aggregated
Cluster A            580                      72.5
Cluster B            597                      74.62
Cluster C            578                      72.25
Cluster D            557                      69.62
Cluster E            579                      72.37
Cluster F            569                      71.12

Table 2.2 Overall Experimental Results - Proposed System

The following Fig 2.1 shows the experimental results for the proposed data transfers under the hybrid method.

Fig 2.1 Proposed System - Aggregated Data

The following Fig 2.2 shows the experimental results for the proposed system's aggregation scheme.

Fig 2.2 Aggregation Scheme - Proposed System

CONCLUSION
This research work implements approximate data collection in wireless sensor networks and explores an application-level data collection process. The collected sensor data exhibit a
stable state and strong temporal and spatial correlations between sensor readings. Our work detects data similarities among the sensor nodes by comparing their local estimation models rather than their original data. The simulation results show that this approach can greatly reduce the number of messages in wireless communications.

This research work also considers fast convergecast for WSN node communication, using a TDMA protocol to minimize the schedule length, together with tree-based data collection and scheduling. Each node's level in the routing tree determines its parent and child relationships: parents forward the data received from their children toward the sink, so the time complexity of data collection among the sensor nodes is kept to a minimum. The system addressed the fundamental limitations due to interference and half-duplex transceivers on the nodes and explored techniques to overcome them. It is found that while transmission power control helps in reducing the schedule length, multiple channels are more effective.
REFERENCES
[1] G. Tolle, J. Polastre, R. Szewczyk, D. Culler, N. Turner, K. Tu, S. Burgess, T. Dawson, P. Buonadonna, D. Gay, and W. Hong, "A Macroscope in the Redwoods," Proc. Third Int'l Conf. Embedded Networked Sensor Systems (SenSys '05), 2005.
[2] M. Li, Y. Liu, and L. Chen, "Non-Threshold Based Event Detection for 3D Environment Monitoring in Sensor Networks," IEEE Trans. Knowledge and Data Eng., vol. 20, no. 12, pp. 1699-1711, Dec. 2008.
[3] Z. Yang and Y. Liu, "Quality of Trilateration: Confidence-Based Iterative Localization," IEEE Trans. Parallel and Distributed Systems, vol. 21, no. 5, pp. 631-640, May 2010.
[4] G. Werner-Allen, K. Lorincz, J. Johnson, J. Lees, and M. Welsh, "Fidelity and Yield in a Volcano Monitoring Sensor Network," Proc. Seventh Symp. Operating Systems Design and Implementation (OSDI '06), 2006.
[5] R. Cardell-Oliver, "ROPE: A Reactive, Opportunistic Protocol for Environment Monitoring Sensor Networks," Proc. Second IEEE Workshop on Embedded Networked Sensors (EmNetS-II), May 2005.
[6] L. Mo, Y. He, Y. Liu, J. Zhao, S. Tang, X. Li, and G. Dai, "Canopy Closure Estimates with GreenOrbs: Sustainable Sensing in the Forest," Proc. Seventh ACM Conf. Embedded Networked Sensor Systems (SenSys '09), 2009.
[7] A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao, "Habitat Monitoring: Application Driver for Wireless Communications Technology," Proc. Workshop on Data Communications in Latin America and the Caribbean, Apr. 2001.
[8] R. E. Grumbine, "What Is Ecosystem Management?" Conservation Biology, vol. 8, pp. 27-38, 1994.
[9] F. L. Bunnell and D. J. Vales, "Comparison of Methods for Estimating Forest Overstory Cover: Differences among Techniques," Can. J. Forest Res., vol. 20, 1990.
[10] D. Chu, A. Deshpande, J.M. Hellerstein, and W. Hong, "Approximate Data Collection in Sensor Networks Using Probabilistic Models," Proc. 22nd Int'l Conf. Data Eng. (ICDE '06), 2006.
[11] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong, "Model-Driven Data Acquisition in Sensor Networks," Proc. Int'l Conf. Very Large Data Bases (VLDB '04), 2004.
[12] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong, "The Design of an Acquisitional Query Processor for Sensor Networks," Proc. ACM SIGMOD, 2003.
[13] C. Wan, S.B. Eisenman, and A.T. Campbell, "CODA: Congestion Detection and Avoidance in Sensor Networks," Proc. First Int'l Conf. Embedded Networked Sensor Systems (SenSys '03), 2003.
[14] A. Silberstein, R. Braynard, and J. Yang, "Constraint Chaining: On Energy-Efficient Continuous Monitoring in Sensor Networks," Proc. ACM SIGMOD Int'l Conf. Management of Data (SIGMOD '06), 2006.
[15] I. Solis and K. Obraczka, "Efficient Continuous Mapping in Sensor Networks Using Isolines," Proc. 2005 MobiQuitous, July 2005.
[16] X. Meng, L. Li, T. Nandagopal, and S. Lu, "Event Contour: An Efficient and Robust Mechanism for Tasks in Sensor Networks," Technical Report, UCLA, 2004.
[17] C. Olston and J. Widom, "Best-Effort Cache Synchronization with Source Cooperation," Proc. ACM SIGMOD Int'l Conf. Management of Data (SIGMOD '02), 2002.
A WEIGHT BASED SYNCHRONIZATION DETECTION FOR WORMHOLE ATTACK USING PERIODIC
UPDATES FRAMEWORK
I C. Sudha, M.C.A., II D. V. Rajkumar, M.C.A., M.Phil.
I Research Scholar, Bharathiar University, Coimbatore
II Assistant Professor, CS
I, II Dept. of Computer Science, Maharaja Co-Education College of Arts and Science, Perundurai, Erode 638052
I Email id: sudharchinnasamy@gmail.com, Contact No: 8939667746
II Email id: dvrajkumar@gmail.com, Contact No: 7871514680

ABSTRACT
One type of major attack on neighbor discovery is the wormhole attack, in which malicious node(s) relay packets between two legitimate nodes to fool them into believing that they are direct neighbors. It may seem a merit that this kind of attack can enlarge communication ranges; however, since it enables unauthorized physical access, selective dropping of packets, and even denial of service, the wormhole attack is intrinsically a very serious problem, especially in the case of emergent information transmission. This thesis proposes a wormhole attack resistant secure neighbor discovery (SND) scheme for directional wireless networks. Specifically, the proposed SND scheme consists of three phases: the network controller (NC) broadcasting phase, the network nodes response/authentication phase, and the NC time analysis phase. In the broadcasting and response/authentication phases, local time information and antenna direction information are exchanged with signature-based authentication techniques between the NC and the legitimate network nodes, which can prevent most wormhole attacks. To solve the transmission collision problem in the response/authentication phase, a novel random delay multiple access (RDMA) protocol is introduced that divides the RA phase into M periods, within which the unsuccessfully transmitting nodes randomly select a time slot to transmit. The optimal parameter setting of the RDMA protocol and the optional strategies of the NC are included. In addition, the thesis proposes the Substance and Existence Multicast Protocol (SEMP) together with the Optimism Max Out (OPMA) and Randomized Optimism Max Out (ROPMA) algorithms, which nodes use to send updates to their neighbors.
Keywords: Network Controller, Secure Neighbor Discovery, Wireless Network, Random access, Parameter, Authentication
1. INTRODUCTION
1.1. Ad Hoc Wireless Network
A mobile ad hoc network (MANET) is a self-configuring network formed over wireless links by a collection of mobile nodes, without any fixed infrastructure or centralized management. The mobile nodes communicate with each other on a hop-by-hop basis and forward packets for each other. Due to their dynamic, infrastructure-less nature and the lack of centralized monitoring, ad hoc networks are vulnerable to various attacks. The performance and reliability of the network are
compromised by attacks on ad hoc network routing protocols. In a wormhole attack, an intruder creates a tunnel from one end-point of the network to the other during data transmission, leading distant network nodes to believe that they are immediate neighbors communicating through the wormhole link.
In a wormhole, an attacker creates a tunnel between two points in the network and creates a direct connection between them, as if they were directly connected. An example is shown in Figure 1.1. Here R and P are the two end-points of the wormhole tunnel, R is the source node, and S is the destination node. Node R assumes that there is a direct connection to node P, so node R starts transmitting over the tunnel created by the attacker. This tunnel can be created in a number of ways, for example with a long-range wireless transmission or an Ethernet cable.

Figure 1.1: Wormhole attack
A wormhole attacker records packets at one end-point of the network and tunnels them to the other end-point. This attack compromises the security of the network: for example, when a wormhole attack is mounted against AODV, all packets are transmitted through the tunnel and no other route is discovered. If the tunnel were created honestly and reliably, it would not be harmful to the network and would provide a useful service by connecting the network more efficiently.
A potential solution to avoid wormhole attacks is to integrate prevention methods into an intrusion detection system, but it is difficult to isolate the attacker using only a software-based approach because the packets sent by the wormhole are similar to those sent by legitimate nodes. Therefore, all nodes should monitor the behavior of their neighbor nodes. Each node sends REQ messages to the destination using its neighbor-node list; if the source does not get back a REP message from the destination within a stipulated time, it assumes the presence of a wormhole attack and adds that route to its wormhole list. When an on-demand routing protocol (AODV) is used in dynamic wireless ad hoc networks, a new route is discovered in response to every route
break. Route discovery incurs high overhead, which can be reduced if multiple paths are maintained and a new route discovery is required only when all paths break.
2. PROBLEM FORMULATION

2.1. Main Objective
1. To reduce the power consumption in switching between the active and sleep mode of the nodes.
2. To schedule the transmission time to the available neighbor nodes.
3. To detect the malicious weight information provided by the nodes during the packet transmission.
2.2. Specific Objectives
1. To extend the generic algorithm and implement the Ability Based Synchronization algorithm to find the winner slot to store
the packet data.
2. To extend the Ability Based Synchronization algorithm and implement the Optimism max out algorithm to avoid the
inflation attack which is made by sending false maximum weight among the nodes.
3. To extend the Optimism max out algorithm and implement the Randomized Optimism max out algorithm to synchronize all
the neighbor nodes by using all the slots.
2.3. Network Model

Fig. 2.3: Network model under consideration
60 GHz directional networks are based on a centralized network structure, i.e., at least one network controller (NC) is deployed, although concurrent point-to-point transmissions are supported between different pairs of devices. Thus, we only consider the infrastructure mode, where one NC exists for access control and resource management of the network.

2.4 Attack Model
This section focuses on an active attack named the wormhole attack, in which malicious node(s) relay packets between two legitimate nodes to fool them into believing that they are direct neighbors. In particular, there are two types of wormhole attack in the network. In the first type, there is a single malicious node, e.g., W1, between the NC and the distant nodes.
During the neighbor discovery procedure, the malicious node relays the packets from the NC to the distant wireless node and vice versa, to make them believe they are direct neighbors and let the NC offer service to the distant node. In the second type, two or more malicious nodes, e.g., W2 and W3, collude to relay packets between the NC and a distant legitimate wireless node so that the two believe they are direct neighbors. Only the first type of wormhole attack is considered here, since the proposed SND scheme is also effective against the second type.
2.5 Proposed Wormhole Attack Resistant Scheme
This section first introduces the main idea of the proposed scheme, followed by a detailed description of its three phases, namely the NC broadcast (BC) phase, the response/authentication (RA) phase, and the NC time analysis (TA) phase.
Though the region that each attacker can attack could be a circular area, sectors other than the three plotted sectors can easily be protected from the wormhole attack by using directional authentication, as described in the following. The objective of the proposed SND scheme is to detect whether there are malicious nodes in the NC's communication range R.

Fig. 2.5: Flow chart of the proposed scheme
For the scan of each sector, the NC broadcasts its hello message in the specific direction. This period is called the NC BC phase. The legitimate nodes in this sector scan their neighbor sectors in a counter-clockwise manner starting from a random sector, staying in each sector for tn seconds.
Thus, to guarantee that all the nodes in the sector that the NC is scanning can hear the hello message, the NC BC phase should last for at least L·tn seconds. After the NC broadcasts its hello message in a specific sector and all the nodes in this sector hear the hello message, the node RA phase launches. In this phase, either the node(s) in this sector hear the transmission collision and report a wormhole attack, or they authenticate with the NC and report their local time information, which can be used by the NC for further detection of wormhole attacks in the NC TA phase.
In the time domain, the process of the proposed wormhole attack resistant SND scheme is shown in Fig. 2.6: it starts with the NC BC phase, followed by the RA phase and the NC TA phase. In the NC BC phase, the hello message is transmitted in each time slot of length tn/2 to guarantee that the nodes in this sector can hear the hello message when they enter the sector at a random time and stay there for a duration tn. As shown in Fig. 2.6, the NC TA phase can be pipelined with the RA phase with a delay of td. Note that for the NC BC phase, the length of the hello message is larger than tn/4 for security reasons.
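As a purely illustrative example of these timing relations (the values of L and tn are not given in the text): with L = 8 sectors and tn = 10 ms, the hello message would be repeated in slots of tn/2 = 5 ms, and the BC phase for one sector would have to last at least L·tn = 80 ms.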

Fig. 2.6: Time-domain observation of the proposed scheme
2.5.1 ABS: Ability Based Synchronization Algorithm
An algorithm is described first that uses the size of synchronization clusters as a catalyst for synchronization; it is called ABS (weight-based synchronization). As mentioned previously, at the end of each active interval a node uses the slotArray structure to decide its next transmission time. The slotArray structure has s entries, one for each slot of the next (sleep) interval. The node has to choose one of these slots, called the winner slot, and synchronize with it. That is, the node has to advertise the time of its next transmission (its TX value in the SEMP update packet) such that the update packet will be placed into that winner slot by its neighbors.
In ABS, initState resets the local structures, processPackets updates the local structures for each received packet, and setTX determines the winner slot to be the one containing the packet from the largest neighboring cluster of synchronization.

Algorithm 1. ABS: Weight-Based Synchronization
1. Object implementation ABS extends GENERIC;
2. maxW : int; # max weight over active interval
3. weight : int; # weight advertised in SEMP packets
4. Operation initState()
5.   for (i := 0; i < s; i++) do
6.     slotArray[i] := new pkt[]; od
7. end
8. Operation setTX()
   # compute the maxW value
9.   maxW := 0;
10.  for (i := 0; i < s; i++) do
11.    for (j := 0; j < slotArray[i].size(); j++) do
12.      if (slotArray[i][j].weight > maxW) then
13.        winnerSlot := i;
14.        maxW := slotArray[i][j].weight; fi
15.  od od
   # determine new TX and weight values
16.  if (winnerSlot != nextSendSEMP % ta) then
17.    TX := winnerSlot;
18.    nextSendSEMP := tcurr + TX;
19.    weight := maxW + 1;
20.  else
21.    weight := maxW;
22.  fi
23. end
24. Operation processPackets(tcurr : int)
25.   pktList := inq.getAllPackets(slotLen);
26.   for (i := 0; i < pktList.size(); i++) do
27.     index := ((tcurr + pktList[i].TX) mod ta) / ts;
28.     slotArray[index].add(pktList[i]);
29.   od
30. end
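A compact Python rendering of the setTX step above is given below: scan slotArray for the packet advertising the largest cluster weight, make that slot the winner, and increase the advertised weight only when the node actually moves to a new slot. The packet objects with a .weight attribute are an assumption of this sketch.

def abs_set_tx(slot_array, current_slot):
    max_w, winner = 0, current_slot
    for i, packets in enumerate(slot_array):
        for pkt in packets:
            if pkt.weight > max_w:        # largest neighboring cluster wins
                max_w, winner = pkt.weight, i
    if winner != current_slot:
        return winner, max_w + 1          # join the cluster: weight grows by one
    return winner, max_w                  # already synchronized with it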
This operation needs to be performed explicitly, since the size of the cluster may have increased since the last packet received from it: the cluster may have incorporated other nodes. Let n be the number of nodes in a connected network.
2.5.2 OPM: Optimism max out algorithm
The Optimism Max Out Algorithm is proposed to address the inflation attack. Instead of relying on subjective information (the weight value contained in SEMP updates), OPMOA allows nodes to build a local approximation of this metric using only objective information derived from observation: the times of update receptions. OPMOA works by counting the number of packets that are stored in each slot of the current active interval.
Algorithm 2. Optimism Max Out Algorithm: setTX finds the slot storing the maximum number of packets and synchronizes with it.

1. Object implementation OPMOA extends ABS;
2. maxC : int; # max nr. of packets per slot
3. Operation setTX()
   # compute the maxC value
4.   maxC := 0;
5.   for (i := 0; i < s; i++) do
6.     if (slotArray[i].size() > maxC) then
7.       maxC := slotArray[i].size();
8.       winnerSlot := i; fi
9.   od
   # update the TX value
10.  if (winnerSlot != nextSendSEMP % ta) then
11.    TX := winnerSlot;
12.    nextSendSEMP := tcurr + TX;
13.  fi
14. end
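For comparison with the ABS sketch, the OPMOA winner-slot rule under the same assumed slotArray representation is simply the slot holding the most packets:

def opmoa_set_tx(slot_array, current_slot):
    if not slot_array:
        return current_slot
    winner = max(range(len(slot_array)), key=lambda i: len(slot_array[i]))
    # stay put if no packets were received at all this interval
    return winner if slot_array[winner] else current_slot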
2.5.3 ROPM: Randomized Optimism Max Out Algorithm
For the same networks that OPMOA is unable to synchronize completely, the situation changes when imperfect channel conditions are considered. Specifically, for a network of 100 nodes with a 15 percent packet loss rate, OPMOA synchronizes the entire network in 21,000 s. While in a network with perfect channel conditions the clusters created by OPMOA are
stable, packet loss can make nodes move from one cluster of synchronization to another, thus breaking this stability. If enough nodes switch, clusters may engulf other clusters in their vicinity, eventually creating a single cluster of synchronization.
However, relying only on packet loss is insufficient. One of our requirements is that a network synchronizes in a timely manner. To achieve this, OPMOA is extended with randomization: nodes choose to synchronize with their neighbors in a weighted probabilistic fashion. The resulting algorithm is ROPM, the Randomized Optimism Max Out algorithm, which extends ABS; the initState and processPackets methods are inherited from ABS.
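A sketch of this randomized choice is given below. Weighting the slot probability by its packet count is an assumption of the sketch: the text only states that the choice is weighted probabilistic.

import random

def ropm_set_tx(slot_array, current_slot):
    # Choose a slot with probability proportional to how many packets it holds,
    # which lets clusters occasionally "jump" toward and merge with neighbors.
    counts = [len(packets) for packets in slot_array]
    if sum(counts) == 0:
        return current_slot
    return random.choices(range(len(slot_array)), weights=counts, k=1)[0]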
3. EXPERIMENTAL RESULTS
The following Table 3.1 gives the average NC update data and the corresponding receiver update data under the RDMA protocol, for varying numbers of nodes.
Table 3.1 Average NC Update Data in the RDMA Protocol
S.No  Number of Nodes                                      NC Update Data (%)  Receiver Update Data in RDMA Protocol (%)
1     N8, N4 [2]                                           20                  40
2     N7, N6, N9, N10 [4]                                  45                  65
3     N5, N12, N14, N21, N23, N8 [6]                       58                  78
4     N9, N13, N15, N18, N22, N2, N8, N14 [8]              65                  85
5     N9, N13, N15, N18, N22, N2, N8, N14, N21, N23 [10]   75                  95

The following Fig 3.1 plots the average NC update data against the receiver update data under the RDMA protocol, for varying numbers of nodes.


Fig 3.1 Average of NC update data in RDMA Protocol


The following Table 3.2 gives the average NC update data and the corresponding receiver update data under the SEMP protocol, for varying numbers of nodes.

Table 3.2 Average NC Update Data in the SEMP Protocol
S.No  Number of Nodes                                      NC Update Data (%)  Receiver Update Data in SEMP Protocol (%)
1     N8, N4 [2]                                           20                  46
2     N7, N6, N9, N10 [4]                                  45                  72
3     N5, N12, N14, N21, N23, N8 [6]                       58                  83
4     N9, N13, N15, N18, N22, N2, N8, N14 [8]              65                  89
5     N9, N13, N15, N18, N22, N2, N8, N14, N21, N23 [10]   75                  97


The following Fig 3.2 plots the average NC update data against the receiver update data under the SEMP protocol, for varying numbers of nodes.


Fig 3.2 Average of NC updates data in SEMP Protocol
The following Table 3.3 summarizes the NC update data and the corresponding receiver update data under both the RDMA and SEMP protocols.
Table 3.3 NC update data in RDMA and SEMP Protocol

S.No  NC Update Data (%)  RDMA Protocol (%)  SEMP Protocol (%)
1     20                  40                 46
2     45                  65                 72
3     58                  78                 83
4     65                  85                 89
5     75                  95                 97


The following Fig 3.3 plots the NC update data against the receiver update data for both the RDMA and SEMP protocols.

Fig 3.3 NC update data in RDMA and SEMP Protocol
The following Table 3.4 gives the performance analysis for the RDMA and SEMP protocols: the NC update data together with the arrival times of data sent to the destination under each protocol.
Table 3.4 Performance Analysis for the RDMA and SEMP Protocols
S.No  NC Update Data (ns)  RDMA Send-to-Destination Arrival Time (ms)  SEMP Send-to-Destination Arrival Time (ms)
1     10                   163                                         161
2     20                   165                                         163
3     30                   173                                         170
4     40                   176                                         174
5     50                   181                                         177
6     60                   184                                         182
7     70                   185                                         183
8     80                   191                                         188
9     90                   194                                         190
10    100                  197                                         195
The following Fig 3.4 plots the performance analysis for the RDMA and SEMP protocols: the arrival times at the destination for data sent under each protocol.


Fig 3.4 Performance Analysis for the RDMA and SEMP Protocols

4. CONCLUSION
The proposed system presents several algorithms for synchronization mechanisms. However, the use of random values in the winner-slot calculation does not give complete accuracy, so the proposed algorithms need to be extended with a new algorithm for highly efficient communication between nodes.
If the application is tested with real mobile nodes, this can assist the practical implementation of the algorithms. In addition, if the experimental application is designed to be web based, it can be accessed platform-independently and its usage will be greater. The new system is designed such that these enhancements can be integrated with the current modules with little integration work.
The problem studied is synchronizing the periodic transmissions of nodes in ad hoc networks, in order to enable battery-lifetime extensions without missing neighbors' updates. Several lightweight and scalable solutions are proposed, and the generic algorithm is extended to use transmission stability as a metric for synchronization. The implementation and simulations show that the protocols are computationally inexpensive, provide significant battery savings, scale well, and efficiently defend against attacks.



5. SCOPE FOR FUTURE ENHANCEMENTS
Several algorithms are proposed for synchronization mechanisms. However, the use of random values in the winner-slot calculation does not give complete accuracy, so the proposed algorithms need to be extended with a new algorithm for highly efficient communication between nodes.
- If the application is tested with real mobile nodes, this can assist the practical implementation of the algorithms.
- If the experimental application is designed to be web based, it can be accessed platform-independently and its usage will be greater.
The new system becomes more useful once the above enhancements are made. It is designed such that these enhancements can be integrated with the current modules with little integration work.
6. REFERENCES
[1] X. An, R. Prasad, and I. Niemegeers, "Neighbor Discovery in 60 GHz Wireless Personal Area Networks," Proc. IEEE Int'l Symp. World of Wireless, Mobile and Multimedia Networks, IEEE, 2010, pp. 1-8.
[2] S. Vasudevan, J. Kurose, and D. Towsley, "On Neighbor Discovery in Wireless Networks with Directional Antennas," Proc. IEEE INFOCOM 2005, vol. 4, 2005, pp. 2502-2512.
[3] L. Hu and D. Evans, "Using Directional Antennas to Prevent Wormhole Attacks," Proc. Network and Distributed System Security Symposium, San Diego, 2004.
[4] R. C. Daniels and R. W. Heath, Jr., "60 GHz Wireless Communications: Emerging Requirements and Design Recommendations," IEEE Veh. Technol. Mag., vol. 2, no. 3, pp. 41-50, Sept. 2007.
[5] Z. M. Chen and Y. P. Zhang, "Inter-Chip Wireless Communication Channel: Measurement, Characterization, and Modeling," IEEE Trans. Antennas Propagat., vol. 55, no. 3, pp. 978-986, Mar. 2007.
[6] A. Burrell and P. Papantoni-Kazakos, "Random Access Algorithms in Packet Networks: A Review of Three Research Decades," International Journal of Communications, Network and System Sciences, vol. 5, no. 10, pp. 691-707, 2012.
[7] L. Georgiadis, L. Merakos, and P. Papantoni-Kazakos, "A Method for the Delay Analysis of Random Multiple-Access Algorithms Whose Delay Process Is Regenerative," IEEE Journal on Selected Areas in Communications, vol. 5, no. 6, pp. 1051-1062, 1987.
[8] R. Mudumbai, S. Singh, and U. Madhow, "Medium Access Control for 60 GHz Outdoor Mesh Networks with Highly Directional Links," in Proceedings of IEEE
[9] G. Acs, L. Buttyan, and I. Vajda, "Provably Secure On-Demand Source Routing in Mobile Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 5, no. 11, pp. 1533-1546, Nov. 2006.
[10] J. Deng, R. Han, and S. Mishra, "INSENS: Intrusion-Tolerant Routing for Wireless Networks," Computer Comm., vol. 29, no. 2, pp. 216-230, 2006.
[11] S. Doshi, S. Bhandare, and T.X. Brown, "An On-Demand Minimum Energy Routing Protocol for a Wireless Ad Hoc Network," ACM SIGMOBILE Mobile Computing and Comm. Rev., vol. 6, no. 3, pp. 50-66, 2002.
[12] L.M. Feeney, "An Energy Consumption Model for Performance Analysis of Routing Protocols for Mobile Ad Hoc Networks," Mobile Networks and Applications, vol. 6, no. 3, pp. 239-249, 2001.
[13] D.B. Johnson, D.A. Maltz, and J. Broch, "DSR: The Dynamic Source Routing Protocol for Multihop Wireless Ad Hoc Networks," Ad Hoc Networking, Addison-Wesley, 2001.
[14] Y.-C. Hu, A. Perrig, and D. Johnson, "Ariadne: A Secure On-Demand Routing Protocol for Ad Hoc Networks," Proc. ACM Conf. Mobile Computing and Networking (MobiCom), 2002.












WEB FORUMS CRAWLER FOR ANALYSIS OF USER SENTIMENTS
1 B. Nithya, M.Sc., 2 K. Devika, M.Sc., MCA., M.Phil.
1 Research Scholar, Bharathiar University, Coimbatore
2 Assistant Professor, CS
1,2 Dept. of Computer Science, Maharaja Co-Education College of Arts and Science, Perundurai, Erode 638052
1 Email id: bmsnithya2@gmail.com, Contact No: 9965112440
2 Email id: devika_tarun@yahoo.co.in, Contact No: 9894831174

ABSTRACT
The advancement in computing and communication technologies enables people to get together and share information in
innovative ways. Social networking sites empower people of different ages and backgrounds with new forms of collaboration,
communication, and collective intelligence. This project presents Forum Crawler Under Supervision (FoCUS), a supervised web-scale
forum crawler. The goal of FoCUS is to crawl relevant forum content from the web with minimal overhead. Forum threads contain
information content that is the target of forum crawlers. Although forums have different layouts or styles and are powered by different
forum software packages, they always have similar implicit navigation paths connected by specific URL types to lead users from entry
pages to thread pages. Based on this observation, the web forum crawling problem is reduced to a URL-type recognition problem, and
URLs are classified as Index Page, Thread Page, and Page-Flipping Page. In addition, this project studies how networks in social media can
help predict some human behaviors and individual preferences.
Keywords: content based retrieval, multimedia databases, search problems.
1. INTRODUCTION
1.1. Data Mining
Data mining, or knowledge discovery, is the computer-assisted process of digging through and analyzing enormous sets of
data and then extracting the meaning of the data. Data mining tools predict behaviors and future trends, allowing businesses to make
proactive, knowledge-driven decisions. Data mining tools can answer business questions that traditionally were too time consuming to
resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their
expectations.
Data mining derives its name from the similarities between searching for valuable information in a large database and mining
a mountain for a vein of valuable ore. Both processes require either sifting through an immense amount of material, or intelligently
probing it to find where the value resides.
Although data mining is still in its infancy, companies in a wide range of industries - including retail, finance, health care,
manufacturing, transportation, and aerospace - are already using data mining tools and techniques to take advantage of historical data.
By using pattern recognition technologies and statistical and mathematical techniques to sift through warehoused information, data
mining helps analysts recognize significant facts, relationships, trends, patterns, exceptions and anomalies that might otherwise go
unnoticed.
For businesses, data mining is used to discover patterns and relationships in the data in order to help make better business decisions.
Data mining can help spot sales trends, develop smarter marketing campaigns, and accurately predict customer loyalty.



1.2. WEB CRAWLER
A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing.
A Web crawler may also be called a Web spider, an ant, an automatic indexer, or (in the FOAF software context) a Web scutter. Web
search engines and some other sites use Web crawling or spidering software to update their web content or indexes of other sites' web
content. Web crawlers can copy all the pages they visit for later processing by a search engine that indexes the downloaded pages so
that users can search them much more quickly. Crawlers can validate hyperlinks and HTML code. They can also be used for web
scraping.
WebCrawler was originally a separate search engine with its own database, and displayed advertising results in separate areas of
the page. More recently it has been repositioned as a metasearch engine, providing a composite of separately identified sponsored and
non-sponsored search results from most of the popular search engines.
A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all
the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier
are recursively visited according to a set of policies. The large volume implies that the crawler can only download a limited number of
the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that the pages might have
already been updated or even deleted.
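To make the frontier mechanics concrete, here is a minimal Python sketch (not from the project itself; fetch_links is a placeholder for any function that downloads a page and returns the hyperlinks found on it) of a breadth-first crawl driven by a seed list and a crawl frontier:

from collections import deque
from urllib.parse import urljoin

def crawl(seeds, fetch_links, max_pages=100):
    # Frontier of URLs waiting to be visited, seeded with the start URLs.
    frontier = deque(seeds)
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        # fetch_links(url) must download the page and return its hyperlinks.
        for link in fetch_links(url):
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in visited:
                frontier.append(absolute)
    return visited

The max_pages cap reflects the point made above: a crawler can only download a limited number of pages in a given time, so it must prioritize its downloads.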
The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers
to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small
selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified
through HTTP GET parameters in the URL.
If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-
provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site.
This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor
scripted changes in order to retrieve unique content.
"Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not
only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained." A crawler must carefully
choose at each step which pages to visit next.
1.3. COLLECTIVE BEHAVIOR
Collective behavior refers to the behaviors of individuals in a social networking environment, but it is not simply the
aggregation of individual behaviors. In a connected environment, individuals' behaviors tend to be interdependent, influenced by the
behavior of friends. This naturally leads to behavior correlation between connected users. Take marketing as an example: if our
friends buy something, there is a better-than-average chance that we will buy it, too.

This behavior correlation can also be explained by homophily. Homophily is a term coined in the 1950s to explain our
tendency to link with one another in ways that confirm, rather than test, our core beliefs. Essentially, we are more likely to connect to
others who share certain similarities with us. This phenomenon has been observed not only in the many processes of a physical world,
but also in online systems. Homophily results in behavior correlations between connected friends.
In other words, friends in a social network tend to behave similarly. The recent boom of social media enables us to study
collective behavior on a large scale. Here, behaviors include a broad range of actions: joining a group, connecting to a person, clicking
on an ad, becoming
interested in certain topics, dating people of a certain type, etc. In this work, we attempt to leverage the behavior correlation presented
in a social network in order to predict collective behavior in social media. Given a network with the behavioral information of some
actors, how can we infer the behavioral outcome of the remaining actors within the same network?
It can also be considered as a special case of semi-supervised learning or relational learning where objects are connected
within a network. Some of these methods, if applied directly to social media, yield only limited success. This is because connections
in social media are rather noisy and heterogeneous. In the next section, we will discuss the connection heterogeneity in social media,
review the concept of social dimension, and anatomize the scalability limitations of the earlier model proposed which provides a
compelling motivation for this work.

2. PROBLEM FORMULATION
2.1. PROBLEM FORMULATION
To harvest knowledge from forums, their content must be downloaded first. However, forum crawling is not a
trivial problem. Generic crawlers, which adopt a breadth-first traversal strategy, are usually ineffective and inefficient for
forum crawling. This is mainly due to two non-crawler-friendly characteristics of forums.

1) Duplicate links and uninformative pages and
2) page-flipping links.

In addition to the above two challenges, there is also a problem of entry URL discovery. The entry URL of a
forum points to its homepage, which is the lowest common ancestor page of all its threads. The system reduces the forum
crawling problem to a URL type recognition problem and implements a crawler, FoCUS, to demonstrate its applicability. It
shows how to automatically learn regular expression patterns (ITF regexes) that recognize the index URL, thread URL,
and page-flipping URL using the page classifiers built from as few as five annotated forums.


Predicting collective behavior in social media starts from understanding how individuals behave in a social
networking environment. In particular, given information about some individuals, how can one infer the behavior of
unobserved individuals in the same network? A social-dimension-based approach has been shown effective in addressing
the heterogeneity of connections presented in social media.

However, the networks in social media are normally of colossal size, involving hundreds of thousands of actors.
The scale of these networks entails scalable learning of models for collective behavior prediction. To address the
scalability issue, an edge-centric clustering scheme is required to extract sparse social dimensions.
Hence this thesis is proposed. With sparse social dimensions, the proposed approach can efficiently handle networks of
millions of actors while demonstrating prediction performance comparable to other, non-scalable methods.

While fuzzy c-means is a popular soft-clustering method, its effectiveness is largely limited to spherical clusters.
By applying kernel tricks, the kernel fuzzy c-means algorithm attempts to address this problem by mapping data with
nonlinear relationships to appropriate feature spaces. Kernel combination, or selection, is crucial for effective kernel
clustering.

Unfortunately, for most applications it is not easy to find the right combination. At present, there is a risk in
clustering images with many noise pixels: since the image is not clustered well, the existing system is somewhat less
efficient.

The problem is aggravated for many real-world clustering applications, in which there are multiple potentially useful cues.
For such applications, to apply kernel-based clustering, it is often necessary to aggregate features from different sources
into a single aggregated feature.
2.2. OBJECTIVES OF THE RESEARCH

The development in computing and communication technologies enables people to get together and share
information in innovative ways. Social networking sites (a recent phenomenon) empower people of different ages and
backgrounds with new forms of collaboration, communication, and collective intelligence. This thesis presents Forum
Crawler under Supervision (FoCUS), a supervised web-scale forum crawler.


The goal of FoCUS is to crawl relevant forum content from the web with minimal overhead. Forum threads
contain information content that is the target of forum crawlers. Although forums have different layouts or styles and are
powered by different forum software packages, they always have similar implicit navigation paths connected by specific
URL types to lead users from entry pages to thread pages.

Based on this observation, the web forum crawling problem is reduced to a URL-type recognition problem and
classifies them as Index Page, Thread Page and Page-Flipping page. In addition, this thesis studies how networks in social
media can help predict some human behaviors and individual preferences. In particular, given the behavior of some
individuals in a network, how can one infer the behavior of other individuals in the same social network? This study can help
better understand behavioral patterns of users in social media for applications like social advertising and recommendation.

This study of collective behavior is to understand how individuals behave in a social networking environment.
Oceans of data generated by social media like Facebook, Twitter, and YouTube present opportunities and challenges to
study collective behavior on a large scale. This thesis aims to learn to predict collective behavior in social media. A
social-dimension-based approach has been shown effective in addressing the heterogeneity of connections presented in
social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands of
actors. The scale of these networks entails scalable learning of models for collective behavior prediction.

To address the scalability issue, the thesis proposes an edge-centric clustering scheme to extract sparse social
dimensions. With sparse social dimensions, the proposed approach can efficiently handle networks of millions of actors
while demonstrating a comparable prediction performance to other non-scalable methods.

In addition, the thesis includes a new concept called sentiment analysis. Since many automated prediction
methods exist for extracting patterns from sample cases, these patterns can be used to classify new cases. The proposed
system contains the method to transform these cases into a standard model of features and classes.
4. METHODOLOGY
4.1 TERMINOLOGY

To facilitate presentation in the following sections, some terms used in this thesis are first defined.
4.1.1 PAGE TYPE
Forum pages are classified into the following page types.
Entry Page:
The homepage of a forum, which contains a list of boards and is also the lowest common ancestor of all threads.
Index Page:
A page of a board in a forum, which usually contains a table-like structure; each row in it contains information of
a board or a thread.
Thread Page:
A page of a thread in a forum that contains a list of posts with user generated content belonging to the same
discussion.
Other Page:
A page that is not an entry page, index page, or thread page.

4.1.2 URL TYPE
There are four types of URL.
Index URL:
A URL that is on an entry page or index page and points to an index page. Its anchor text shows the title of its
destination board.
Thread URL:
A URL that is on an index page and points to a thread page. Its anchor text is the title of its destination thread.
Page-flipping URL:
A URL that leads users to another page of the same board or the same thread. Correctly dealing with page-flipping
URLs enables a crawler to download all threads in a large board or all posts in a long thread.
Other URL:

A URL that is not an index URL, thread URL, or page-flipping URL.

4.1.3 EIT Path:
An entry-index-thread path is a navigation path from an entry page through a sequence of index pages (via index
URLs and index page-flipping URLs) to thread pages (via thread URLs and thread page-flipping URLs).

4.1.4 ITF Regex:
An index-thread-page-flipping regex is a regular expression that can be used to recognize index, thread, or page-
flipping URLs. ITF regex is what FoCUS aims to learn and applies directly in online crawling. The learned ITF regexes
are site specific, and there are four ITF regexes in a site: one for recognizing index URLs, one for thread URLs, one for
index page-flipping URLs, and one for thread page-flipping URLs. A perfect crawler starts from a forum entry URL and
only follows URLs that match ITF regexes to crawl all forum threads. The paths that it traverses are EIT paths.
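As an illustration, the four site-specific ITF regexes for a hypothetical phpBB-style forum might look like the following Python sketch. The URL patterns here are invented for illustration only; FoCUS learns the real, site-specific patterns from its URL training sets.

import re

ITF_REGEXES = {
    "index":                re.compile(r"viewforum\.php\?f=\d+$"),
    "thread":               re.compile(r"viewtopic\.php\?t=\d+$"),
    "index_page_flipping":  re.compile(r"viewforum\.php\?f=\d+&start=\d+$"),
    "thread_page_flipping": re.compile(r"viewtopic\.php\?t=\d+&start=\d+$"),
}

def classify_url(url):
    # Return the ITF type of a URL, or "other" if no regex matches.
    for url_type, pattern in ITF_REGEXES.items():
        if pattern.search(url):
            return url_type
    return "other"

print(classify_url("http://forum.example.com/viewtopic.php?t=42"))  # thread
print(classify_url("http://forum.example.com/faq.php"))             # other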

4.2. ARCHITECTURE OF FOCUS

The overall architecture of FoCUS is as follows. It consists of two major parts: the learning part and the online
crawling part. The learning part first learns ITF regexes of a given forum from automatically constructed URL training
examples. The online crawling part then applies the learned ITF regexes to crawl all threads efficiently. Given any page of a
forum, FoCUS first finds its entry URL using the Entry URL Discovery module.

Then, it uses the Index/Thread URL Detection module to detect index URLs and thread URLs on the entry page;
the detected index URLs and thread URLs are saved to the URL training sets. Next, the destination pages of the detected
index URLs are fed into this module again to detect more index and thread URLs until no more index URLs are detected.
After that, the Page-Flipping URL Detection module tries to find page-flipping URLs from both index pages and
thread pages and saves them to the training sets. Finally, the ITF Regexes Learning module learns a set of ITF regexes
from the URL training sets.

Once the learning is finished, FoCUS performs online crawling as follows: starting from the entry URL, FoCUS
follows all URLs that match any learned ITF regex, and continues to crawl until no new page can be retrieved or another
stopping condition is satisfied.
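A minimal sketch of this online crawling loop, reusing the hypothetical classify_url helper from the earlier regex example (fetch_links is again a placeholder for page download and link extraction):

from collections import deque

def focus_online_crawl(entry_url, fetch_links, max_pages=10000):
    # Follow only URLs whose type is recognized by a learned ITF regex
    # (classify_url from the sketch above); collect thread-page URLs.
    frontier, visited, threads = deque([entry_url]), set(), []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):
            kind = classify_url(link)
            if kind == "thread":
                threads.append(link)
            if kind != "other" and link not in visited:
                frontier.append(link)
    return threads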

4.2.1. ITF REGEXES LEARNING
To learn ITF regexes, FoCUS adopts a two-step supervised training procedure. The first step is training sets
construction. The second step is regexes learning.

i. Constructing URL Training Sets
The goal of URL training set construction is to automatically create sets of highly precise index URL, thread
URL, and page-flipping URL strings for ITF regex learning. A similar procedure is used to construct the index URL and
thread URL training sets, since these two types have very similar properties except for the types of their destination pages;
this part is presented first. Page-flipping URLs have their own specific properties that differ from those of index URLs and thread
URLs; this part is presented later.

ii. Index URL and thread URL training sets
Recall that an index URL is a URL that is on an entry or index page; its destination page is another index page;
and its anchor text is the board title of its destination page. A thread URL is a URL that is on an index page; its destination
page is a thread page; and its anchor text is the thread title of its destination page. Note that the only way to distinguish
index URLs from thread URLs is the type of their destination pages. Therefore, a method is needed to decide the page type
of a destination page.
The index pages and thread pages each have their own typical layouts. Usually, an index page has many narrow
records, relatively long anchor text, and short plain text; while a thread page has a few large records (user posts). Each
post has a very long text block and relatively short anchor text.

An index page or a thread page always has a timestamp field in each record, but the timestamp order in the two
types of pages are reversed: the timestamps are typically in descending order in an index page while they are in ascending
order in a thread page. In addition, each record in an index page or a thread page usually has a link pointing to a user
profile page.
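The timestamp-order cue lends itself to a simple check. A minimal sketch, assuming the timestamps have already been extracted from the page records as comparable values:

def looks_like_index_page(timestamps):
    # Index pages typically list records with timestamps in descending
    # order; thread pages list posts in ascending order.
    return all(a >= b for a, b in zip(timestamps, timestamps[1:]))

def looks_like_thread_page(timestamps):
    return all(a <= b for a, b in zip(timestamps, timestamps[1:]))

print(looks_like_index_page([20140905, 20140903, 20140901]))  # True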






4.2.2. PAGE-FLIPPING URL TRAINING SET

Page-flipping URLs point to index pages or thread pages but they are very different from index URLs or thread
URLs. The proposed connectivity metric is used to distinguish page-flipping URLs from other loop-back URLs.
However, the metric only works well on the grouped page-flipping URLs, i.e., more than one page-flipping URL in one
page.

But in many forums there is only one page-flipping URL per page, which we call a single page-flipping URL.
Such URLs cannot be detected using the connectivity metric. To address this shortcoming, some special
properties of page-flipping URLs were observed and an algorithm was proposed to detect page-flipping URLs based on these properties.

In particular, the grouped page-flipping URLs have the following properties:

1. Their anchor text is either a sequence of digits such as 1, 2, 3, or special text such as "last".

2. They appear at the same location on the DOM tree of their source page and the DOM trees of their destination
pages.

3. Their destination pages have a layout similar to that of their source pages; tree similarity is used to determine whether the
layouts of two pages are similar. Single page-flipping URLs do not have property 1, but they have another special property:

4. The single page-flipping URLs appearing in their source pages and their destination pages have the same anchor
text but different URL strings.
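Property 1 can be checked directly on the anchor text. A minimal sketch (the paper names only "last"; the extra special words are an assumption about typical forum navigation):

import re

SPECIAL_WORDS = {"last", "next", "prev"}  # "last" from the paper; others assumed

def is_grouped_page_flipping_anchor(anchor_text):
    # Property 1: a digit sequence such as '1', '2', '3', or special text.
    text = anchor_text.strip().lower()
    return bool(re.fullmatch(r"\d+", text)) or text in SPECIAL_WORDS

print(is_grouped_page_flipping_anchor("3"))     # True
print(is_grouped_page_flipping_anchor("last"))  # True
print(is_grouped_page_flipping_anchor("FAQ"))   # False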


4.3 SPARSE SOCIAL DIMENSIONS

In this section, a toy example is first shown to illustrate the intuition of communities in an edge view, and then
potential solutions to extract sparse social dimensions are presented.

4.3.1 COMMUNITIES IN AN EDGE-CENTRIC VIEW
4.3.2 EDGE PARTITION VIA LINE GRAPH PARTITION
4.3.3 EDGE PARTITION VIA CLUSTERING EDGE INSTANCES

4.3.1 COMMUNITIES IN AN EDGE-CENTRIC VIEW

Though SocioDim with soft clustering for social dimension extraction demonstrated promising results, its
scalability is limited. A network may be sparse (i.e., the density of connectivity is very low), whereas the extracted social
dimensions are not sparse. Consider the toy network with two communities in Figure 1. Its social dimensions
obtained by modularity maximization are shown in Table 2; clearly, none of the entries is zero.

Figure 1: Toy example

Figure 2: Edge cluster

When a network expands to millions of actors, a reasonably large number of social dimensions needs to be
extracted. The corresponding memory requirement hinders both the extraction of social dimensions and the subsequent
discriminative learning. Hence, it is imperative to develop some other approach so that the extracted social dimensions are
sparse.
5. SYSTEM DESIGN
5.1 Module Design
The thesis contains the following modules:
1. Index URL and Thread URL Training Sets
2. Page-Flipping URL Training Set
3. Entry URL Discovery
4. Create Graph
5. Convert to Line Graph
6. Algorithm of Scalable K-Means Variant
7. Algorithm for Learning of Collective Behavior
8. Sentiment Analysis
   1) Forum Topic Download
   2) Parse Forum Topic Text and URLs
   3) Forum Sub Topic Download
   4) Parse Forum Sub Topic Text and URLs

1. Index URL and Thread URL Training Sets
The homepage of a forum contains a list of boards and is also the lowest common ancestor of all threads. An index page is a page of
a board in a forum, which usually contains a table-like structure; each row in it contains information of a board or a thread. Recall that
an index URL is a URL that is on an entry or index page; its destination page is another index page; its anchor text is the board title of
its destination page. A thread URL is a URL that is on an index page; its destination page is a thread page; its anchor text is the thread
title of its destination page. The only way to distinguish index URLs from thread URLs is the type of their destination pages.
Therefore, a method is needed to decide the page type of a destination page.
2. Page-Flipping URL Training Set
Page-flipping URLs point to index pages or thread pages, but they are very different from index URLs or thread URLs. The
proposed metric is used to distinguish page-flipping URLs from other loop-back URLs. However, the metric only works well on
grouped page-flipping URLs, i.e., when there is more than one page-flipping URL in one page.
3. Entry URL Discovery

An entry URL needs to be specified to start the crawling process. To the best of our knowledge, all previous methods
assumed that a forum entry URL is given. In practice, especially in web-scale crawling, manual forum entry URL annotation is not
practical. Forum entry URL discovery is not a trivial task, since entry URLs vary from forum to forum.
4. Create Graph
In this module, nodes are created flexibly. The name of each node is generated automatically and must be unique. A link is
created by selecting a starting and an ending node; a node is linked with a direction, and a link name cannot be repeated.
The constructed graph is stored in a database, from which a previously constructed graph can be retrieved at any time.
5. Convert to Line Graph
In this module, a line graph is created from the previous module's graph data. The edge details are gathered and constructed as
nodes, and edge-nodes that share an endpoint in the original graph are connected as edges.
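A minimal sketch of this conversion, with edges represented as tuples of endpoint names (the node names are invented for illustration):

import itertools
from collections import defaultdict

def line_graph(edges):
    # Each original edge becomes a node of the line graph; two edge-nodes
    # are joined when the original edges share an endpoint.
    incident = defaultdict(list)
    for edge in edges:
        for endpoint in edge:
            incident[endpoint].append(edge)
    lg_edges = set()
    for shared in incident.values():
        for e1, e2 in itertools.combinations(shared, 2):
            lg_edges.add((e1, e2))
    return lg_edges

print(line_graph([("a", "b"), ("b", "c"), ("c", "a")]))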
6. Algorithm of Scalable K-Means Variant
In this module, the data instances are given as input along with the number of clusters, and the clusters are retrieved as output. First,
a mapping from features to instances is constructed. Then the cluster centroids are initialized, instances are assigned by maximum
similarity, and the loop terminates when the change in the objective value falls below the epsilon threshold.
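A plain k-means sketch of this loop with the epsilon stopping rule; the paper's scalable variant additionally exploits the feature-to-instance map for sparse data, which is omitted here:

import numpy as np

def kmeans(X, k, eps=1e-4, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    prev_obj = np.inf
    for _ in range(max_iter):
        # squared distance of every instance to every centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        obj = dists[np.arange(len(X)), labels].sum()
        if prev_obj - obj < eps:   # change in objective fell below epsilon
            break
        prev_obj = obj
        for j in range(k):         # recompute each cluster centroid
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

labels, _ = kmeans(np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]]), 2)
print(labels)   # two clusters of two points each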

7. Algorithm for Learning of Collective Behavior
In this module, the input is the network data, the labels of some nodes, and the number of social dimensions; the output is the labels of
the unlabeled nodes.
8. Sentiment Analysis
1) Forum Topic Download
In this module, the source web page is keyed in (default: http://www.forums.digitalpoint.com) and the content is
downloaded. The HTML content is displayed in a rich text box control.
2) Parse Forum Topic Text and URLs
In this module, the downloaded source page web content is parsed and checked for forum links. The links are
extracted and displayed in a list box control, and the link texts are extracted and displayed in another list box control.
3) Forum Sub Topic Download
In this module, all the forum link pages in the source web page are downloaded. The HTML content is displayed in a rich
text box control during each page download.

4) Parse Forum Sub Topic Text and URLs
In this module, the downloaded forum pages' web content is parsed and checked for sub-forum links. The links are extracted
and displayed in a list box control, and the link texts are extracted and displayed in another list box control.
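The project implements these download-and-parse steps in a GUI; as a library-free illustration, the following Python sketch downloads a page and extracts (link, anchor text) pairs using only the standard library (the seed URL is the paper's default source page):

from html.parser import HTMLParser
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    # Collect (href, anchor text) pairs from an HTML page.
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

html = urlopen("http://www.forums.digitalpoint.com").read().decode("utf-8", "ignore")
parser = LinkExtractor()
parser.feed(html)
print(parser.links[:10])   # first ten (URL, link text) pairs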

6. RESULT AND DISCUSSION
ANALYZING AVERAGE POST PER FORUM AND AVERAGE SENTIMENT VALUE

Forum Id | Forum Title            | Threads Count | Post Count | Avg. Post per Forum | Avg. Sentiment Value per Forum
---------|------------------------|---------------|------------|---------------------|-------------------------------
1        | Google                 | 4             | 1340       | 335                 | 0
34       | Google+                | 51            | 1158       | 22                  | 1
37       | Digital Point Ads      | 50            | 708        | 14                  | 1
38       | Google AdWords         | 53            | 684        | 12                  | 0
39       | Yahoo Search Marketing | 50            | 1240       | 24                  | 1
44       | Google                 | 50            | 2094       | 41                  | 0
46       | Azoogle                | 51            | 1516       | 29                  | 0
49       | ClickBank              | 50            | 1352       | 27                  | 0
52       | General Business       | 51            | 1206       | 23                  | 0
54       | Payment Processing     | 52            | 1782       | 34                  | 0
59       | Copywriting            | 51            | 526        | 10                  | 0
62       | Sites                  | 53            | 504        | 9                   | 1
63       | Domains                | 51            | 78         | 1                   | 1
66       | eBooks                 | 51            | 484        | 9                   | 1
70       | Content Creation       | 50            | 206        | 4                   | 1
71       | Design                 | 50            | 498        | 9                   | 1
72       | Programming            | 51            | 202        | 3                   | 1
77       | Template Sponsorship   | 47            | 94         | 2                   | 1
82       | Adult                  | 51            | 30         | 0                   | 1
83       | Design & Development   | 6             | 0          | 0                   | 1
84       | HTML & Website Design  | 52            | 254        | 4                   | 1
85       | CSS                    | 50            | 110        | 2                   | 1
86       | Graphics & Multimedia  | 54            | 79         | 1                   | 0

Table No: 5.3 Analyzing average post per forum and average sentiment value


CHART NO: 5.3 Chart representation for analyzing average post per forum and average sentiment value
[Chart: average posts per forum (0-400 scale) and average sentiment value per forum, plotted for the 23 forums listed in Table 5.3.]


The proposed approach groups the forums into various clusters using emotional polarity computation and
integrated sentiment analysis based on K-means clustering. Positive and negative replies are also clustered. Using
scalable learning, the relationships among the topics are identified and represented as a graph. Data are collected from
forums.digitalpoint.com, covering a range of 75 different topic forums. Computation indicates that, within the same
time window, forecasting achieves highly consistent results with K-means clustering.
The forum topics are also represented using graphs. The graph is used to represent the forum titles, thread
count, post count, average posts per forum, average sentiment value per forum, and the similarity or relationship between
the topics.
CONCLUSION AND FUTURE WORK
6.1 CONCLUSION

In this thesis, algorithms are developed to automatically analyze the emotional polarity of a text, based on which a value for
each piece of text is obtained. The absolute value of the text represents its influential power, and the sign of the text denotes its
emotional polarity.
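A minimal sketch of such a signed polarity value using word lists (the lists are illustrative only; the thesis's actual polarity computation is not specified here):

POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def polarity_value(text):
    # Signed score: the sign gives the emotional polarity, the absolute
    # value a crude proxy for influential power.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(polarity_value("great product, love it"))  # 1 (positive)
print(polarity_value("bad support, terrible"))   # -1 (negative)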

K-means clustering is applied to develop an integrated approach for online forum cluster analysis. The clustering
algorithm is applied to group the forums into various clusters, with the center of each cluster representing a hotspot forum within the
current time span.

In addition to clustering the forums based on data from the current time window, a forecast is also conducted for the next
time window. Empirical studies present strong proof of the existence of correlations between post text sentiment and hotspot
distribution. Education institutions, as information seekers, can benefit from the hotspot-predicting approaches in several ways. These
should follow the same rules as the academic objectives and be measurable, quantifiable, and time specific. However, in practice,
parents' and students' behavior is always hard to explore and capture.
Using the hotspot-predicting approaches can help education institutions understand their specific customers' timely
concerns regarding goods and services information. Results generated from the approach can also be combined with competitor analysis
to yield comprehensive decision-support information.



6.2. SCOPE FOR FUTURE ENHANCEMENTS
In the future, the inferred information can be utilized and the framework extended for efficient and effective network
monitoring and application design. The new system becomes more useful if the following enhancements are made:

- The application can be made web-service oriented so that it can be further developed on any platform.
- If developed as a web site, the application can be used from anywhere.
- At present, the number of posts per forum, the average sentiment value per forum, and the positive and negative
percentages of posts per forum are taken as feature spaces for K-means clustering. In future, neutral replies and
multiple-language replies can also be taken as dimensions for clustering.
- In addition, forums are currently used for hotspot detection. Live text streams, such as chat messages, can also be
tracked and classified.

The new system is designed such that these enhancements can be integrated with the current modules easily, with little
integration work.
















Determination of Distortion Developed During TIG Welding of Low Carbon
Steel Plate
Atma Raj M.R, Joy Varghese V.M.
Mechanical Engineering Department
SCT College of Engineering
Thiruvananthapuram, Kerala, India
atmarajmr@gmail.com


Abstract: TIG welding is widely used in modern manufacturing industries. In almost all kinds of metals, TIG welding produces high
quality welds. Determination of the distortions developed during welding is one of the major goals of welding simulation. Predictions of
distortion are necessary to maintain the design accuracy of critically welded components at the design stage itself, rather than taking
corrective measures after welding. The purpose of the present work is to predict the distortion developed during TIG welding of a low
carbon steel plate. In this study, a 3-D FE model is developed to analyze the distortion during TIG welding of a steel plate. In the numerical
analysis, thermal and structural analyses were carried out sequentially; the thermal loads are the main input of the structural analysis. In
the analysis, the distortion in different planes of the plate was calculated and compared to obtain the plane of maximum distortion. An
experiment was conducted to measure the distortion or deformation in a welded plate.

Keywords: TIG, distortion or deformation, welding modeling, CMM, FEM, discretization, welding heat source.

INTRODUCTION
Tungsten inert gas (TIG) welding is widely applied in manufacturing processes for different types of materials such as aluminum,
mild steel, and various stainless steel alloy grades. The optimization of TIG welding process parameters plays an important
role in the final product quality in terms of weld distortion, joint efficiency, and mechanical properties. As the welding process
involves heating and cooling in a non-uniform manner, distortions are unavoidable. The weld contributes to the
development of several kinds of distortion, such as longitudinal, transverse, or angular distortion [1]. Distortion in welding is due
to the non-uniform heating and cooling produced during welding. Controlling distortion is very important, as it severely affects the
dimensional tolerance limits. Correcting distortion is costly and in many cases not possible, so it is necessary to establish a procedure
that minimizes distortion and to set rational standards for acceptable distortion limits. Arc welding involves intense local
heating of the weld region and conduction of this heat into the surrounding material. The resulting expansion is constrained by the
cooler material surrounding the weld, which leads to plastic deformation of the hotter material. Reducing and controlling distortion requires
fundamental knowledge of residual stress and the other factors that cause distortion. During welding and subsequent cooling, thermally
induced shrinkage strains build up in the weld metal and the base metal regions. The stresses resulting from these shrinkage strains
combine and react to produce bending, buckling, etc.
LITERATURE REVIEW
The welding heat source was assumed to be a point or line source in the early stages of welding modeling. During the
initial stages of welding heat transfer modeling, conduction-based models were developed; later, convection models were developed,
which are found to be more accurate, especially in and around the weld pool. According to D. Klobcar [2], Rosenthal first developed a
relation for both line and point moving sources. In 1969, Pavelic introduced the Gaussian form of distribution, which is used by
many researchers because of its simplicity and accuracy, although this model is not suitable
for modeling an inclined welding torch. Goldak et al. [3] in 1984 introduced the double ellipsoidal distribution, which is the most suitable
distribution for a stationary welding source and can account for an inclined torch position; this model, however, also fails for moving torches.
As an extension of this work, in 2003 Sapabathy et al. introduced a double ellipsoidal model with a differential distribution at the front
and back portions of the arc, which is suitable even for vibrating heat sources and can be used for modeling any type of welding
technique, including the wave technique. A new method for calculating the thermal cycles in the heat affected zone during gas metal arc
(GMA) welding was developed by M.A. Wahab et al. [4], and the thermal cycles were predicted in order to estimate the depth of weld

penetration, the geometry of the weld pool, and the cooling rates. They concluded that, to obtain an optimal weld pool geometry for
tungsten inert gas (TIG) welding, the selection of process parameters such as the front height, front width, back height, and back width of
the weld pool plays an important role. The finite element distortion analysis of two dissimilar stainless steel welded plates was carried out by
J.J. del Coz Diaz et al. [5], who studied the effect of TIG welding in duplex stainless steels. In order to predict the welding
deformation in fillet welds, Dean Deng et al. [6] developed a 3D thermal elastic-plastic finite element computational procedure,
validated the numerical results with experimental measurements, and concluded that numerical models can be effectively used for
the prediction of welding distortion.
1. EXPERIMENTAL DETERMINATION FOR PREDICTING DISTORTION
The experiment was conducted to find the distortion of the welded plate after TIG welding. A numerical analysis was
carried out and the results were compared with the experimental results.
Experimental procedure
In the present work, MS specimens of dimensions 150 mm x 50 mm x 6 mm were considered. The base plate material used was
commercial mild steel. Each specimen was filed using a flat file and all the surfaces were ground with a surface grinding machine of
240 grit. Flexible abrasive paper (silicon carbide) was used to remove all impurities and to obtain the required surface finish. The
coordinates of the drilled holes were measured using a coordinate measuring machine (CMM). Two of the side surfaces (at the weld start
point) were set as the reference planes, and the intersection point of the two reference planes was set as the reference point. With
reference to this reference point, the centres of the holes were determined by measuring the cylindrical surfaces of the holes; hence
all the coordinates of the four holes were determined.

Fig.1 Schematic representation of specimen for distortion measurement in CMM
Four holes of 2 mm diameter were drilled at the positions shown in Fig. 1. The measurements and results were saved on a spreadsheet.
Then welding was carried out on the plates by applying a TIG welding torch to produce a bead on the plate, with the torch travelling at a
constant speed of 2 mm/s. Single-pass, autogenous, bead-on-plate TIG welds were made along the centre line of the test specimens.
A torch with a standard 2% thoriated tungsten electrode rod of 3.2 mm diameter was used. The electrode tip was a blunt point
with a 45° angle. Argon gas of 99.99% purity was used as the shielding gas. The tip angle of the electrode was ground, and the
electrode gap was measured for each new weld prior to welding to ensure that the welding was performed under the same operating
conditions. After welding, the test specimens were cleaned and the coordinates of the welded specimen were measured using the CMM with
respect to the same reference as before welding. The results were recorded in a spreadsheet document. Measurements taken before and
after welding were compared, and the distortions at the specified points were determined by the difference between the readings taken before
and after welding.
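As a minimal illustration of this before/after comparison (the coordinate values below are invented; only the computation mirrors the procedure of differencing the CMM readings):

import math

# Invented before/after hole-centre coordinates (mm) for two of the holes.
before = {1: (20.000, 10.000, 0.000), 2: (130.000, 10.000, 0.000)}
after  = {1: (20.016, 10.004, -0.010), 2: (130.020, 10.006, -0.008)}

for hole, p0 in before.items():
    d = math.dist(p0, after[hole])   # total displacement of the hole centre
    print(f"hole {hole}: distortion = {d:.3f} mm")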







Fig. 2. Accurate-Spectra coordinate measuring machine
Fig. 1 shows the schematic diagram of the distortion measurement process and Fig. 2 shows the Accurate-Spectra coordinate
measuring machine used for measuring the distortion in the specimen. The TIG welding process was performed on
the test specimen using a TIG welding machine (WARPPTIME, WSM-160). Table 1 lists the welding conditions used in
this study.

Table 1: Welding parameters for TIG welding experiments.

Specification          | Value
-----------------------|---------
Diameter of electrode  | 0.8 mm
Tip angle of electrode | 60°
Electrode gap          | 3 mm
Shielding gas          | Argon
Gas flow rate          | 25 L/min
Welding current        | 150 A


2. FINITE ELEMENT ANALYSIS
The finite element analysis is carried out to analyze the thermal cycles and the nature of the residual stress for TIG welding of a low carbon
steel plate. The dimensional changes during welding are negligible, and the mechanical work done is insignificant compared with the thermal
energy from the welding arc. The thermo-mechanical behavior of the weldment during welding is therefore simulated using an uncoupled
formulation: the thermal problem is solved independently from the mechanical problem to obtain the thermal cycles.
A. Thermal Analysis

The analysis is done for a plate of 150 mm length, 50 mm width, and 6 mm thickness (Fig. 3). Because of symmetry, one half of the model is
selected for the analysis.

Fig. 3. 3D finite element model.
Fig. 3 shows the 3D finite element model used for the thermal analysis. The model is discretized into a finite number of elements as
shown. The element type used for the thermal analysis is the 20-node thermal brick element. The thermo-physical
properties [7] and mechanical properties [8] of the low carbon steel are obtained from the available literature.

The governing equation for transient welding heat transfer is given by

\[ \rho c \, \frac{\partial T(x,y,z,t)}{\partial t} = -\nabla \cdot q(x,y,z,t) + Q(x,y,z,t) \tag{1} \]

where \(\rho\) is the density of the material, \(c\) is the specific heat capacity, \(T\) is the current temperature, \(q\) is the heat flux vector, \(Q\) is the
internal heat generation rate, \(x\), \(y\), and \(z\) are the coordinates in the reference system, \(t\) is the time, and \(\nabla\) is the spatial gradient operator.
In this study, the heat from the moving welding arc is applied as a volumetric heat source with the double ellipsoidal distribution
proposed by Goldak et al. [3], expressed by the following equation:

\[ q(x,y,z,t) = \frac{6\sqrt{3}\, f\, \eta U I}{a b c\, \pi\sqrt{\pi}} \exp\!\left( -\frac{3x^2}{a^2} - \frac{3y^2}{b^2} - \frac{3\,(z + v(\tau - t))^2}{c^2} \right) \tag{2} \]

where x, y, and z are the local coordinates of the double ellipsoid model, f is the fraction of heat deposited in the weld region, U
and I are the applied voltage and current, v is the torch travel speed, and \(\tau\) is the time lag of the source. The arc efficiency \(\eta\) is assumed
to be 70% for the TIG welding process. The parameters a, b, and c are related to the characteristics of the welding heat source and can be
adjusted to create the desired melted zone according to the welding conditions. A function is generated using ANSYS APDL code to apply
the heat generation to the plates.
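As a minimal numeric sketch of Eq. (2) (parameter values follow Table 2 in SI units; taking f = 1 for a single ellipsoid and a zero time lag are assumptions made here for illustration):

import math

def goldak_heat(x, y, z, t, U=12.0, I=150.0, eta=0.7, f=1.0,
                a=4e-3, b=3e-3, c=5e-3, v=2e-3, tau=0.0):
    # Volumetric heat input (W/m^3) of the double ellipsoidal source.
    coeff = (6 * math.sqrt(3) * f * eta * U * I
             / (a * b * c * math.pi * math.sqrt(math.pi)))
    return coeff * math.exp(-3 * x**2 / a**2
                            - 3 * y**2 / b**2
                            - 3 * (z + v * (tau - t))**2 / c**2)

print(f"{goldak_heat(0.0, 0.0, 0.0, 0.0):.3e} W/m^3")  # peak value at the source centre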

To consider the heat losses, both thermal radiation and convective heat transfer on the weld surface are assumed. Radiation losses
dominate at the higher temperatures near and in the weld zone, and convection losses dominate at the lower temperatures away from the weld
zone. To accommodate both effects, a combined temperature-dependent heat transfer coefficient is applied on the boundaries:

\[ h = 24.1 \times 10^{-4}\, \varepsilon\, T^{1.61} \tag{3} \]

where \(\varepsilon\) is the emissivity of the body surface, taken as 0.8, and T is the temperature of the material surface. This thermal
boundary condition is employed for all free boundaries of the plates. The thermal effects due to solidification of the weld pool are
modeled by taking the solidus temperature as 1415 °C, the liquidus temperature as 1477 °C, and the latent heat of fusion as
285000 kJ/kg.
B. Mechanical Analysis
The same discretized model from the thermal analysis is used for the mechanical analysis, with the element type changed to a 20-node
structural brick element with displacement degrees of freedom. The temperature history of each node from the preceding thermal analysis
is input as a nodal body load, in conjunction with temperature-dependent mechanical properties and structural boundary conditions. During
the welding process, solid-state phase transformation does not occur in the base metal or the weld metal, and the total strain can be
decomposed into three components:

\[ \varepsilon^{total} = \varepsilon^{e} + \varepsilon^{p} + \varepsilon^{th} \tag{4} \]

where \(\varepsilon^{total}\) is the total strain produced, \(\varepsilon^{e}\) is the elastic strain, \(\varepsilon^{p}\) is the plastic strain, and \(\varepsilon^{th}\) is the thermal strain.

3. RESULTS AND DISCUSSION

In order to study the effect of distortion on the plates, the FE analysis is carried out using the parameters given in Table 2.

Table 2: Weld parameters

U (V) | I (A) | a (mm) | b (mm) | c (mm) | Welding speed (mm/s) | Arc efficiency η
------|-------|--------|--------|--------|----------------------|------------------
12    | 150   | 4      | 3      | 5      | 2                    | 0.7

Fig. 4 shows the various weld pool parameters in the double ellipsoidal distribution proposed by Goldak et al. [3].

Fig. 4. Weld pool parameters [2]


4. Prediction of the maximum distorted plane during welding of plates

Finite element analysis is carried out to study the maximum distorted dimension of the specimen. For the analysis, the deformations
were evaluated parallel and perpendicular to the weld line at several distances near and away from the weld line, and the
corresponding X, Y, and Z distortions were obtained to determine the maximum distorted plane and direction during
welding of the plates. For this, the distortion along the X, Y, and Z directions was plotted at different locations over the plate. Fig. 6 shows the X
direction distortion along a line (AB) at the top, middle, and bottom surfaces of the plate; Fig. 7 shows the Y direction distortion along the
same line; and Fig. 8 shows the Z direction distortion along the same line.


Fig. 5. X, Y, Z distortion along a line parallel to the weld at a distance of 10 mm

Fig. 6. X direction distortion along lines parallel to the weld at the top, middle and bottom surfaces

Fig. 7. Y direction distortion along lines parallel to the weld at the top, middle and bottom surfaces


Fig. 8. Z direction distortion along lines parallel to the weld at the top, middle and bottom surfaces.
On comparing the distortion along the X, Y, and Z directions, it can be observed that the maximum distortion has occurred on the bottom
surface of the plate, so the bottom surface is considered in the subsequent analysis. Similarly, certain lines perpendicular to the weld
line have also been analyzed to find the maximum distorted surface of the plate. The graphs below give the X, Y, and Z
direction distortions for a line at the top, middle, and bottom surfaces of the plate.


Fig. 9. X, Y, Z distortion along a line perpendicular to the weld at a certain distance from the weld

Fig. 10. X direction distortion along lines perpendicular to the weld line at a distance of 37.5 mm from the edge



Fig. 11. Y direction distortion along lines perpendicular to the weld line at a distance of 37.5 mm from the edge

5. Comparison of distortion in different directions on the bottom surface (parallel to the weld line)

The specimen shown below has dimensions 150 x 50 x 6 mm; the shaded region represents the welded area. A line 10 mm away from the
weld at the bottom surface is considered for the analysis.

Fig. 13. Distortion in different directions along a line at the bottom surface
On comparing the distortion along the X, Y, and Z directions, it is clear that the bottom surface shows the maximum distortion. Hence,
another analysis has been carried out to determine which of the X, Y, and Z directions shows the maximum distortion on the
bottom surface. For this, a line at a certain distance away from and parallel to the weld was considered.


Fig. 14. Distortion in the X, Y and Z directions along a line at the bottom surface of the plate (parallel to the weld line)

From the graph it can be concluded that, at the bottom surface, the X direction shows the maximum distortion while the other two directions
show comparatively small values. To verify this result, a line perpendicular to the weld has also been analyzed.
6. Comparison of distortion in different directions on the bottom surface (perpendicular to the weld line)

Fig. 15. Distortion in different directions along a line perpendicular to the weld
For the analysis, a line perpendicular to the weld at a distance of 37.5 mm was considered. Finite element analysis has been carried
out to study the maximum distorted dimension of the plate on the bottom surface. Fig. 15 shows the distortion in the X, Y, and Z directions
along a line perpendicular to the weld at the bottom surface of the plate. On comparing the distortion along the X, Y, and Z directions,
the maximum distortion has occurred along the weld direction, i.e., the X direction, while the other directions show comparatively less
deformation. So the analysis in the X direction is considered in the subsequent study.

Fig. 16. Distortion in the X, Y and Z directions along a line at the bottom surface of the plate (perpendicular to the weld line)

From Fig. 16 it can also be concluded that the X direction has the maximum distortion compared with the Y and Z directions.


7. Comparison of X distortion along lines parallel to the weld direction on the bottom surface

Fig. 17. X direction distortion along different lines parallel to the weld
An analysis has been carried out to determine the deformation pattern along the weld line and at distances away from it:
a1 represents the weld line, b1 a line at a distance of 10 mm from the weld line, and c1 a line at a distance of 25 mm from the weld line on the
bottom surface of the plate. The analysis shows that the weld line exhibits the maximum distortion and that the distortion
decreases with distance from the weld line.
8. Variation in distortion parallel to the weld line at a certain distance away from the weld line

Fig. 18. Distortion along different longitudinal lines parallel to the weld
In order to find whether there is any critical variation in the distortion pattern between the weld line and a line 10 mm away from it,
four nearby points were taken from the weld line and the deformation plots were obtained parallel to the weld line at each of
these distances. The analysis makes it clear that there is no shift in the distortion pattern, which varies uniformly between the centre
line and the line 10 mm away from the weld line.


Fig. 19. X direction distortion along different longitudinal lines parallel to the weld on the bottom surface






9. Comparison of X distortion along lines perpendicular to the weld on the bottom surface


Fig.20. X direction distortion along different lines perpendicular to the weld

Fig.21. Distortion along lines perpendicular to the weld in the X direction on the bottom surface
An analysis has been carried out to determine the deformation pattern along certain lines perpendicular to the weld. The analysis makes
clear that higher heat input makes the plate more prone to distortion. At the starting point of the weld the distortion is positive, and
the region near the end of the weld shows the maximum distortion owing to the higher accumulated heat input.
10. Comparison of Experimental and Numerical Results for the 6 mm Plate

Hole no   Numerical Result   Experimental Result
1         0.016              0.03
2         0.022              0.02
3         0.021              0.03
4         0.023              0.01

Table.4: Numerical and experimental results


Conclusion

The effect of distortion on welding of low carbon steel plates has been studied. The primary results and conclusions can be
summarized as follows:
1. During TIG welding of steel plates, the surface opposite to the weld shows the maximum distortion.

2. Among the three directions X, Y and Z, the X direction (along the weld) shows the maximum distortion while the others show
comparatively less.

3. On comparing the distortion pattern in and around the weld pool, the maximum distortion occurred near the weld, and the distortion
decreases as the distance from the weld increases.

4. The numerical results are validated against the experimental measurements.










Multi-Resolution Analysis Based MRI Image Quality Analysis using DT-CWT Based Preprocessing Techniques

M. Vinay Kumar 1, Sri P. MadhuKiran 2

1 PG Scholar, Department of E.C.E, Sree Chaitanya College of Engineering, Karimnagar, Telangana, India
2 Assistant Professor, Department of E.C.E, Sree Chaitanya College of Engineering, Karimnagar, Telangana, India
E-mail: vinaykumar.mundada@gmail.com, Contact No: +91-9492204128

Abstract: The main objective of this paper is to improve image quality through de-noising and resolution enhancement techniques.
The principal quality parameters of a medical image are noise and resolution. This paper applies the average, median and Wiener filters
for image denoising and the Discrete Wavelet Transform (DWT) and Dual Tree Complex Wavelet Transform (DT-CWT) techniques for
resolution enhancement, and presents a comparison of the results of DWT and DT-CWT. The performance of these techniques is
evaluated using the Peak Signal to Noise Ratio (PSNR).
Keywords: Image preprocessing, salt and pepper noise, Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT),
Dual Tree Complex Wavelet Transform (DT-CWT), Peak Signal to Noise Ratio (PSNR).

INTRODUCTION

Magnetic resonance imaging (MRI) is a medical imaging technique used to produce high-quality images of the soft tissues of the human
body. Under a strong externally applied magnetic field of 1.5 Tesla, hydrogen nuclei precess at about 64 MHz, and at this range of
frequencies considerable noise is added to the MRI image [1]. MRI scans can be used to study the brain, spinal cord, bones, joints,
breasts, the heart and blood vessels, as well as other internal organs, and can also be used to find blood clots. An MRI scan
is an extremely accurate method of disease detection throughout the body. Neurosurgeons use MRI scans not only to define brain anatomy
but also to evaluate the integrity of the spinal cord after an injury; an MRI scan can likewise evaluate the structure of the
heart. Even a small amount of noise can change the image classification: a noisy MRI image can cause misclassification of Gray
Matter (GM) and White Matter (WM). Gray matter is made up of neural cell bodies, and white matter is a component of the central
nervous system [2]. The noisy image is therefore preprocessed using denoising and resolution enhancement. Denoising performs poorly
along edges, and this drawback is addressed by resolution enhancement techniques. To remove the noise efficiently, the averaging,
median and Wiener filters are applied in the denoising stage; these filters reduce the quadratic complexity of processing a given
image. The median filter produces a better denoised image under added salt and pepper noise, while the Wiener filter removes additive
noise and inverts blurring simultaneously. After denoising, the noise is removed but the edges are degraded, because some neighborhood
pixels at the edges are lost. This edge information is recovered using resolution enhancement techniques; in an MRI image, doctors
concentrate mainly on edges such as tissues and tumors. In this work the Discrete Wavelet Transform (DWT) and Dual Tree Complex
Wavelet Transform (DT-CWT) are used for resolution enhancement. In the proposed technique, the DWT and Stationary Wavelet Transform
(SWT) high-frequency sub-bands are merged to obtain estimated high-frequency bands, namely LH, HL and HH. After applying the inverse
wavelet transform, a resultant image is obtained. That resultant image is then processed by the DT-CWT technique, giving a more
qualitatively enhanced image that is more helpful for proper treatment.

DENOISING MECHANISM

MRI images are degraded by noise, so the image is preprocessed using a denoising mechanism to extract useful information. This paper
applies the mean, median and Wiener filters for image denoising, and the Discrete Wavelet Transform (DWT) and Dual Tree Complex
Wavelet Transform (DT-CWT) for resolution enhancement. The block diagram of the proposed work is shown in Fig.1.



Salt and pepper noise

Medical images corrupted by salt-and-pepper noise contain ON or OFF pixels, modeled as taking only two possible values: white (salt)
and black (pepper). For an 8 bit/pixel image, the typical intensity value for pepper noise is close to 0 and for salt noise is close
to 255. The noise density quantifies the proportion of salt and pepper noise in a picture: a total noise density of ND in an M x N
image means that ND x M x N pixels contain noise. The total noise density is

ND = ND1 + ND2 (1)

where ND1 and ND2 are the salt and pepper noise densities respectively.
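
As an illustrative aside (not part of the original experiments), the following minimal Python sketch corrupts an 8-bit image with salt and pepper noise at the densities ND1 and ND2 of Eq. (1); the function name and default densities are assumptions chosen for the example.

```python
import numpy as np

def add_salt_pepper(img, nd1=0.02, nd2=0.02, seed=0):
    # Corrupt an 8-bit image with salt (density nd1) and pepper (density nd2)
    # noise; the total noise density is ND = nd1 + nd2, as in Eq. (1).
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = rng.random(img.shape)
    out[r < nd1] = 255                       # salt: near-maximum intensity
    out[(r >= nd1) & (r < nd1 + nd2)] = 0    # pepper: near-zero intensity
    return out
```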



Fig.1. Overview of proposed work


Average filter

Filtering is a part of image enhancement .the Mean filter or average filter is a simplest linear filter used as smoothing image
applications. Average filter is used to reduce the noise and intensity variation from one pixel to another. The average filter works by
moving through the image pixel by pixel, replacing each value with the average value of neighboring pixels.




Fig.2. Functionality behind the averaging filter


h[i,j] = (1/M) * sum of f[k,l] over all (k,l) in the neighborhood N,

where M is the total number of pixels in the neighborhood N. For example, a 3x3 neighborhood about [i,j] yields M = 9, with k running
from i-1 to i+1 and l from j-1 to j+1.
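
A direct (if inefficient) Python sketch of this averaging operation follows; the helper name and the edge-replication padding at the border are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mean_filter_3x3(img):
    # Implements h[i,j] = (1/9) * sum of f[k,l] over the 3x3 neighborhood
    # of [i,j], replicating edge pixels so border windows stay full size.
    f = np.pad(img.astype(float), 1, mode='edge')
    h = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            h[i, j] = f[i:i + 3, j:j + 3].mean()
    return h
```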


Median Filter
Median filtering is a nonlinear operation used to remove noise from images. It is widely used because it is effective at noise
reduction while preserving edges. Medical images typically contain salt and pepper noise. Here the median filter is applied using a
simple 3x3 window moved over the image [3, 4]; the arrangement of neighbors is called the window. The median is calculated by first
sorting all the pixel values from the window into numerical order and then replacing the center pixel with the median value. Note
that when the window has an odd number of entries the median is uniquely defined, whereas for an even number of entries there is more
than one possible median. Fig.3 shows the working principle of the median filter.






Fig.3. Working principle of median filter
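
In practice the 3x3 window method can be expressed very compactly; the sketch below uses SciPy's median_filter, which is an assumption about tooling rather than the authors' own implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

# Placeholder noisy 8-bit image; replace with a real MRI slice.
noisy = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
# 3x3 window: each pixel is replaced by the median of its sorted neighbors.
denoised = median_filter(noisy, size=3)
```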

Wiener filter

The Wiener filter is a class of linear optimum filter that involves linear estimation of the desired image. It removes the additive
noise in the corrupted image and reverses the blurring at the same time. The filter is optimal in terms of the mean square error,
i.e. it trades off image recovery against noise suppression, and it is effective at image de-blurring and noise suppression [4].
The Wiener filter has the following characteristics:

a) the image and noise have known spectral characteristics;
b) the filter must be causal;
c) the performance criterion is the minimum mean square error (MMSE).

In this work the Wiener filter performs image denoising, and the quality is analyzed using the Peak Signal to Noise Ratio (PSNR).
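
For illustration, SciPy ships an adaptive Wiener filter that estimates the local mean and variance itself; the 3x3 window below is an assumed choice, and the random array stands in for a real noisy MRI slice.

```python
import numpy as np
from scipy.signal import wiener

img = np.random.default_rng(1).random((128, 128))  # placeholder noisy image
# Adaptive Wiener filtering over a local 3x3 window (MMSE-style smoothing).
restored = wiener(img, mysize=3)
```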

RESOLUTION ENHANCEMENT

Resolution is an important factor in medical image processing, and images are preprocessed in order to obtain enhanced resolution.
Interpolation is one of the most widely used techniques for image resolution enhancement [6]. The image is first preprocessed by
denoising, which reduces noise but loses quality at the image edges, so a resolution enhancement technique is used to preserve the
edge and contour information of the filtered image.

Resolution is a measure of the quality of the denoised image. To enhance the resolution of an image, an improved Discrete
Wavelet Transform (DWT) and the Dual Tree Complex Wavelet Transform (DT-CWT) are used in this work. The performance of
resolution enhancement is measured using the Peak Signal to Noise Ratio (PSNR).

Discrete Wavelet Transform

Wavelets play an important role in many image processing applications: they decompose an image into different
frequency components at different resolution scales (i.e. multi-resolution analysis). Any wavelet-based image processing approach has
the following steps: compute the 2D-DWT of the image, alter the transform coefficients (i.e. the sub-bands), and compute the inverse
transform [7, 9, 10]. In this technique, interpolation-based DWT is used to preserve the high-frequency components. The DWT decomposes
the input image into four sub-bands: low-low (LL), low-high (LH), high-low (HL) and high-high (HH). After decomposition of the
input image, interpolation is applied to these four sub-bands; the interpolation increases the number of pixels in the
image.



Fig.4 Block diagram of Discrete Wavelet transform


The Inverse Discrete Wavelet Transform (IDWT) is the process by which the components are assembled back into the original image
without loss of information; this is called reconstruction. It reconstructs the image from the approximation and detail coefficients
extracted during decomposition. The performance of the denoised and enhanced image is evaluated by calculating the PSNR value.
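
A minimal sketch of this decompose-and-reconstruct cycle, assuming the PyWavelets package and a Haar wavelet (both illustrative choices), is given below; note that PyWavelets labels the detail sub-bands as horizontal, vertical and diagonal coefficients, corresponding to LH, HL and HH here.

```python
import numpy as np
import pywt

img = np.random.default_rng(2).random((256, 256))  # placeholder MRI slice
# One-level 2-D DWT: approximation (LL) plus three detail sub-bands.
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
# IDWT rebuilds the image from approximation and detail coefficients.
rec = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
assert np.allclose(rec, img)  # lossless when coefficients are unaltered
```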

PROPOSED METHOD

International Journal of Engineering Research and General Science Volume 2, Issue 5, August-September, 2014
ISSN 2091-2730

772 www.ijergs.org

In the proposed method, the Dual Tree Complex Wavelet Transform (DT-CWT) is finally used for image enhancement. In this method the
interpolated DWT and SWT high-frequency sub-bands (LH, HL and HH) are merged to estimate new sub-bands (LH, HL, HH), and after
applying the inverse wavelet transform a resultant image is obtained [6]. Here the DWT and SWT high-frequency sub-bands are of the
same size.

The discrete wavelet transform (DWT) lacks translation invariance. The stationary wavelet transform (SWT) is a wavelet
transform algorithm designed to overcome this deficiency of the DWT: translation invariance is achieved by removing the down-samplers
and up-samplers of the DWT.


Fig.5 Block diagram of proposed work

In the proposed technique, the Dual Tree Complex Wavelet Transform (DT-CWT) is then applied to the resultant image. The DT-CWT divides
the image into a real part and an imaginary part, and in the proposed technique these real and imaginary parts are merged [10].
After applying the inverse DT-CWT, a quality-enhanced image is obtained. The enhanced MRI image quality is analyzed using the peak
signal to noise ratio (PSNR).
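
The sketch below illustrates the DWT/SWT fusion step described above, assuming PyWavelets and SciPy; interpolating the half-size DWT sub-bands with bilinear zoom and merging by simple averaging are assumptions made for the example (the paper does not specify them), and the final DT-CWT stage is omitted.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def fuse_dwt_swt(img, wavelet='haar'):
    # One-level DWT gives half-size sub-bands; one-level SWT gives
    # input-size sub-bands of the same image.
    LL, (LH, HL, HH) = pywt.dwt2(img, wavelet)
    _, (sLH, sHL, sHH) = pywt.swt2(img, wavelet, level=1)[0]
    # Interpolate DWT details up to the input size, then merge with the
    # SWT details to estimate new high-frequency sub-bands (LH, HL, HH).
    merged = tuple((zoom(d, 2, order=1) + s) / 2.0
                   for d, s in zip((LH, HL, HH), (sLH, sHL, sHH)))
    # Inverse DWT on input-size coefficients yields a 2x-resolution image.
    return pywt.idwt2((zoom(LL, 2, order=1), merged), wavelet)

enhanced = fuse_dwt_swt(np.random.default_rng(3).random((128, 128)))
```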

QUALITY ANALYSIS

The quality of the preprocessed images is analyzed using the Peak Signal to Noise Ratio (PSNR), defined as the ratio between the
maximum possible power of an image and the power of the corrupting noise; it is a measure of the peak error. The PSNR between two
images is measured in decibels and is computed using the following equation:

PSNR = 10 * log10(R^2 / MSE)



Here the Mean Square Error (MSE) is the cumulative squared error between the denoised and the original image, and R is the maximum
fluctuation in the input image. The MSE is computed using the following equation:

MSE = (1 / (M * N)) * sum over m,n of [I1(m,n) - I2(m,n)]^2

where I1(m,n) denotes the original image, I2(m,n) denotes the denoised image, and M and N are the numbers of rows and columns in the
input images. Logically, a higher PSNR indicates a better quality of the reconstructed image.
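
These two equations translate directly into a few lines of Python; the helper below is a hypothetical convenience (R defaults to 255 for 8-bit images).

```python
import numpy as np

def psnr(original, processed, r=255.0):
    # MSE: mean squared pixel-wise difference between the two images.
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    # PSNR in dB; higher values indicate better reconstruction quality.
    return 10.0 * np.log10(r ** 2 / mse)
```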

EXPERIMENTAL RESULTS

The noisy image is taken as the input image, and denoising is performed using the average, median and Wiener filters. Fig.6 and Fig.7
show the input image and the denoised images, and Table I shows the performance of the denoised images.

Original Image Noisy Image

Fig.6 Original and Noisy image

Averaging Filter    Median Filter    Wiener Filter

Fig.7. Denoised images: a) Averaging Filter b) Median Filter c) Wiener Filter




TABLE I. PSNR VALUES OF THE DENOISED IMAGES

Denoising Mechanism (Filter)   PSNR (dB)
a) Averaging Filter            50.5393
b) Median Filter               60.3634
c) Wiener Filter               88.0153


Original image Enhanced image using DWT


Fig.8 Original and Enhanced (DWT) image


Original image    Enhanced image using DT-CWT

Fig.9. Original and Enhanced (DT-CWT) image


TABLE II. COMPARISON OF ENHANCED IMAGE QUALITY METRICS USING DWT AND DT-CWT TECHNIQUES

Resolution Enhancement Method   PSNR (dB)   RMSE    MAE     QI
DWT                             57.3492     0.121   0.076   0.385
Proposed method (DT-CWT)        61.1933     0.049   0.037   0.676




CONCLUSION

The MRI brain image is preprocessed by denoising and resolution enhancement. In denoising, the noise is reduced best by Wiener
filtering, and the resolution of the image is enhanced by the Dual Tree Complex Wavelet Transform (DT-CWT). The PSNR value of the
Dual Tree Complex Wavelet Transform is better than that of the Discrete Wavelet Transform (DWT). The analysis shows that these
techniques are essential for improving the qualitative performance of an image.
REFERENCES

[1] Alexandra Constantin, Ruzena Bajcsy, Berkeley Sarah, "Unsupervised Segmentation of Brain Tissue in Multivariate MRI", IEEE
International Conference on Biomedical Imaging: From Nano to Macro, pp. 89-92, April 2010.

[2] Subhranil Koley, Aurpan Majumder, "Brain MRI Segmentation for Tumor Detection using Cohesion based Self Merging Algorithm",
International Conference on Communication Software and Networks (ICCSN), pp. 781-785, May 2011.

[3] Suresh Kumar, Papendra Kumar, Manoj Gupta, Ashok Kumar Nagawat, "Performance Comparison of Median and Wiener Filter in Image
De-noising", International Journal of Computer Applications (0975-8887), vol. 4, November 2010.

[4] Mona Mahmoudi, Guillermo Sapiro, "Fast Image and Video Denoising via Non-Local Means of Similar Neighborhoods", IEEE Signal
Processing Letters, vol. 12, pp. 839-842, 2005.

[5] D. Keren, S. Peleg, R. Brada, "Image Sequence Enhancement Using Sub-pixel Displacements", Computer Vision and Pattern
Recognition, pp. 742-746, 1988.

[6] H. Demirel, G. Anbarjafari, "Satellite Image Resolution Enhancement Using Complex Wavelet Transform", IEEE Geoscience and
Remote Sensing Letters, vol. 7, pp. 123-126, 2010.

[7] H. Demirel, G. Anbarjafari, "Discrete Wavelet Transform-Based Satellite Image Resolution Enhancement", IEEE Transactions on
Geoscience and Remote Sensing, vol. 49, pp. 1997-2004, 2011.

[8] Liu Fang, Yang Biao, KaiGang Li, "Image Fusion Using Adaptive Dual-Tree Discrete Wavelet Packets Based on the Noise
Distribution Estimation", International Conference on Audio, Language and Image Processing (ICALIP), pp. 475-479, 2012.

[9] Daniel G. Costa, Luiz Affonso Guedes, "A Discrete Wavelet Transform (DWT)-Based Energy-Efficient Selective Retransmission
Mechanism for Wireless Image Sensor Networks", Journal of Sensor and Actuator Networks, vol. 1, pp. 3-35, 2012.

[10] Zhang Zhong, "Investigations on Image Fusion", PhD Thesis, University of Lehigh, USA, May 1999.

[11] Mark J. Shensa, "The Discrete Wavelet Transform: Wedding the A Trous and Mallat Algorithms", IEEE Transactions on Signal
Processing, vol. 40, pp. 2464-2482, 1992.





