International Journal of Engineering Research & Technology (IJERT)
Index
Sr. no   Paper Title                                                      Paper ID        Page no.
1        Optimization of Heat Transfer Rate in Wax Tank for Wax           IJERTV1IS1003   1-6
         Injection Molding Machine
2        A Survey on Maintaining Privacy in Data Mining                   IJERTV1IS1004   7-10
3        A Review on Web Mining                                           IJERTV1IS1005   11-15
4        A Survey Paper on Hyperlink-Induced Topic Search (HITS)          IJERTV1IS1006   16-23
         Algorithms for Web Mining
5        An Efficient CT Image Reconstruction with Parallel Modeling      IJERTV1IS1009   24-29
         for Superior Quantitative Measures
6        -                                                                IJERTV1IS1010   30-33
7        -                                                                IJERTV1IS2001   32-38
8        -                                                                IJERTV1IS2002   39-49
Temperature contour legend: minimum temperature 329 K, maximum temperature 333 K; minimum temperature 328 K.
Fig. 1 shows the wax tank model in SolidWorks. In the figure, red indicates the band heater, which is placed on the inner tank. Glass wool is placed around the heater as an insulator, and finally an outer cover surrounds the glass wool. The outer cover is neglected in the ANSYS analysis. The wax domain is not shown in the figure below, but it is considered in the ANSYS analysis so that the wax properties can be assigned.
NODE      ELEMENT
15815     69849
59140     318434
8190      4092
15392     7200
13351     48106
111888    447681   (total)
Fig 6: Analysis 2
Fig 7: Analysis 3
Fig 8: Analysis 4
Fig 9: Analysis 5
ABOUT AUTHOR
Analysis   Diameter (mm)   Speed (rpm)   Heater position   Temperature difference (K)
1          380             15            A                 6.1
2          380             20            B                 4.8
3          380             25            C                 5.2
4          415             15            B                 7.1
5          415             20            C                 5.3
6          415             25            A                 2.3
7          350             15            C                 6.4
8          350             20            A                 4.9
9          350             25            B                 4.1
IV. CONCLUSION
The temperature of the wax in the wax tank is an important factor for the quality of the wax pattern; for the best quality, the temperature should be uniform at 60 °C throughout the wax tank. From Table 5 we can say that Analysis 6 gives the best result for maintaining a uniform temperature, because its temperature difference is only 2.3 K. The best model is therefore the one with 415 mm diameter, 25 rpm speed, and heater position A.
REFERENCES
[1] D.E. Dimla, M. Camilotto, F. Miani, "Design and optimisation of conformal cooling channels in injection moulding tools." School of Design, Engineering and Computing, Bournemouth University, 12 Christchurch Road, Bournemouth, Dorset BH1 3NA, UK; DIEGM, Università degli Studi di Udine, via delle Scienze 208, 33100 Udine, Italy.
[2] A. Bendada, F. Erchiqui, A. Kipping, "Understanding heat transfer mechanisms during the cooling phase of blow molding using infrared thermography." National Research Council of Canada, Industrial Materials Institute, 75 De Mortagne, Boucherville, Que., Canada J4B 6Y4; University of Quebec in Abitibi-Temiscamingue, 445 Université Blvd., Rouyn-Noranda, Que., Canada J9X 5E4; University of Siegen, Paul-Bonatz Strasse 9-11, Siegen 57068, Germany. Received 15 June 2004; accepted 25 November 2004.
[3] Babur Ozcelik, Alper Ozbay, Erhan Demirbas, "Influence of injection parameters and mold materials on mechanical properties of ABS in plastic injection molding." Department of Mechanical Engineering, Gebze Institute of Technology, 41400 Gebze-Kocaeli, Turkey; Department of Chemistry, Gebze Institute of Technology, 41400 Gebze, Turkey.
[4] Wen-Chin Chen, Gong-Loung Fu, Pei-Hao Tai, Wei-Jaw Deng, "Process parameter optimization for MIMO plastic injection molding."
[5] Adi Sholapurwalla, Sam Scott, "Effects of Radiation Heat Transfer on Part Quality Prediction." ESI Group, Bloomfield Hills, Michigan.
S.A. Khot, N.K. Sane, B.S. Gawali, "Experimental Investigation of Phase Change Phenomena of Paraffin Wax inside a Capsule." Department of Mechanical Engineering, Latthe Polytechnic, Sangli; Department of Mechanical Engineering, Walchand College of Engineering, Sangli (Maharashtra), India.
[6] Rajesh Bhaskaran, Lance Collins, "Introduction to CFD Basics."
[7] Ranjit K. Roy, "Design of Experiments Using the Taguchi Approach."
I. INTRODUCTION
A. Advantages
One key advantage of the randomization method is that it is relatively simple and does not require knowledge of the distribution of the other records in the data. The noise is independent of the data, so the entire dataset is not needed to perturb a record. Randomization can therefore be applied at data-collection time, and does not require a trusted server containing all the original records in order to perform the anonymization. The randomization approach has also been extended to other applications such as OLAP [6], and it is much faster than SMC (secure multi-party computation).
B. Disadvantages
Randomization treats all records equally, irrespective of their local density. Outlier records are therefore more susceptible to adversarial attacks than records in denser regions of the data.
C. Multiplicative Randomization
In this type of randomization, records are multiplied by random vectors, transforming the data so that inter-record distances are approximately preserved. This makes it applicable to privacy-preserving clustering and classification. Possible attacks are known input-output attacks and known-sample attacks: in a known input-output attack, the adversary knows some linearly independent collection of records and their perturbed versions; in a known-sample attack, the adversary has some independent samples from the original distribution.
D. Randomization for Association Rule Mining
This type of randomization is performed by deleting items from, and adding items to, transactions, using a select-a-size operator. Assume a transaction size m and a probability distribution p[0], p[1], ..., p[m] over {0, 1, ..., m}. Given a transaction t of size m, a randomized transaction t' is generated as follows: select j at random from {0, ..., m} using the above distribution; select j items from t (uniformly, without replacement) and place them in t'; then, for each item a not in t, place a in t' with probability p, where p is the randomization level [7].
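The steps above can be sketched in Python. This is an illustrative sketch: the function name and the explicit item-universe argument are assumptions, not part of the cited scheme [7].

```python
import random

def randomize_transaction(t, p_dist, p_item, items):
    """Select-a-size randomization (sketch).

    t      : set of items in the original transaction (|t| = m)
    p_dist : list p[0..m], a probability distribution over {0, ..., m}
    p_item : probability of inserting each item not in t
             (the randomization level 'p' in the text)
    items  : the universe of all items
    """
    m = len(t)
    # Select j at random from {0, ..., m} using the given distribution.
    j = random.choices(range(m + 1), weights=p_dist)[0]
    # Select j items from t uniformly without replacement; place them in t'.
    t_prime = set(random.sample(sorted(t), j))
    # Each item not in t enters t' independently with probability p_item.
    for a in items - t:
        if random.random() < p_item:
            t_prime.add(a)
    return t_prime
```

With a degenerate distribution that always keeps all m items and a zero insertion probability, the transaction passes through unchanged, which is a convenient sanity check.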
both row and column margins, and takes into account the global structure of the dataset. A motivating example of why it is important to maintain both column and row margins is given in the next section.
A. Applications
Swap randomization has been considered in various
applications. An overview is presented in a survey paper by
Cobb and Chen [2003]. A very useful discussion on using
Markov chain models in statistical inference is Besag [2004],
where the case of 01 data is used as an example. The problem
of creating 01 datasets with given row and column margins is
of theoretical interest in itself; see, among others Bezakova
et al. [2006] and Dyer [2003]. Closely related is the problem
of generating contingency tables with fixed margins, which
has been studied in statistics (such as Chen et al. [2005]). In
general, a large body of research is devoted to randomization
methods [Good 2000].
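A single margin-preserving swap, the basic move behind these Markov chain approaches on 0-1 data, can be sketched as follows. This is an illustrative sketch, not any cited paper's implementation; the function and argument names are assumptions.

```python
import random

def swap_step(matrix, rng=random):
    """One attempted swap on a 0-1 matrix, preserving row and column margins.

    Picks two rows r1, r2 and two columns c1, c2 at random; if the induced
    2x2 submatrix is a checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]), it is
    flipped. Row and column sums are unchanged by construction.
    """
    n, m = len(matrix), len(matrix[0])
    r1, r2 = rng.sample(range(n), 2)
    c1, c2 = rng.sample(range(m), 2)
    a, b = matrix[r1][c1], matrix[r1][c2]
    c, d = matrix[r2][c1], matrix[r2][c2]
    if (a == d == 1 and b == c == 0) or (a == d == 0 and b == c == 1):
        matrix[r1][c1], matrix[r1][c2] = b, a
        matrix[r2][c1], matrix[r2][c2] = d, c
    return matrix
```

Running many such steps performs a random walk over all 0-1 matrices with the same margins, which is exactly the null model swap randomization samples from.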
V. RANDOMIZATION TO PROTECT PRIVACY
The basic scheme returns x + r instead of x, where r is a random value drawn from a distribution (uniform or Gaussian). The reconstruction algorithm knows the parameters of r's distribution, but not the individual values of r.
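A minimal sketch of this additive scheme, with illustrative names. A Gaussian is used here, though the text also allows a uniform distribution.

```python
import random

def perturb(values, sigma=1.0, rng=random):
    """Additive randomization: return x + r for each x, where r is drawn
    from a zero-mean Gaussian with standard deviation sigma. A reconstruction
    algorithm is assumed to know only sigma (the parameters of r's
    distribution), never the individual noise values.
    """
    return [x + rng.gauss(0.0, sigma) for x in values]
```

Because the noise has zero mean, aggregate statistics such as the sample mean remain approximately recoverable from the perturbed values even though individual records are masked.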
B. Classification Example
Decision-Tree Classification:

Partition(Data S)
begin
    if (most points in S belong to same class)
        return;
    for each attribute A
        evaluate splits on attribute A;
    use best split to partition S into S1 and S2;
    Partition(S1);
    Partition(S2);
end
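A minimal executable sketch of the Partition pseudocode above, assuming numeric attributes and a Gini-impurity split criterion. The helper names and the purity threshold are assumptions for illustration, not part of the original pseudocode.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def partition(points, labels, min_purity=0.9):
    """Recursive partitioning, mirroring Partition(S) above.

    points : list of numeric feature tuples; labels: parallel class list.
    Returns a leaf label, or a tree node (attribute, threshold, left, right).
    """
    # "if most points in S belong to same class: return"
    majority = max(set(labels), key=labels.count)
    if labels.count(majority) / len(labels) >= min_purity:
        return majority
    best = None
    # "for each attribute A: evaluate splits on attribute A"
    for a in range(len(points[0])):
        for threshold in {p[a] for p in points}:
            left = [i for i, p in enumerate(points) if p[a] <= threshold]
            right = [i for i in range(len(points)) if i not in set(left)]
            if not left or not right:
                continue
            # Weighted impurity of the two sides; lower is better.
            score = (len(left) * gini([labels[i] for i in left])
                     + len(right) * gini([labels[i] for i in right]))
            if best is None or score < best[0]:
                best = (score, a, threshold, left, right)
    if best is None:
        return majority
    _, a, threshold, left, right = best
    # "use best split to partition S into S1 and S2; recurse on each"
    return (a, threshold,
            partition([points[i] for i in left],
                      [labels[i] for i in left], min_purity),
            partition([points[i] for i in right],
                      [labels[i] for i in right], min_purity))
```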
C. Training Using Randomized Data
Here we need to modify two key operations: determining the split point and partitioning the data. The primary question is when and how the distribution should be reconstructed. One choice is whether to reconstruct using the whole data (globally) or separately for each class; another is whether to reconstruct once at the root node or at every node.
I. INTRODUCTION
it. This process is done iteratively until the ranks of all pages are determined. The rank of a page p can be written in terms of the incoming-link and outgoing-link weights W_in(n, p) and W_out(n, p), respectively:

    W_in(n, p)  = I_p / (sum of I_q over pages q referenced by n)    (1)
    W_out(n, p) = O_p / (sum of O_q over pages q referenced by n)    (2)

where O_n is the number of outgoing links of page n, O_p is the number of outgoing links of page p, and I denotes the corresponding in-link counts. The weighted PageRank is then given by the formula in (3):

    WPR(p) = (1 - d) + d * sum over n in B(p) of WPR(n) * W_in(n, p) * W_out(n, p)    (3)
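The weighted PageRank iteration can be sketched as follows. This is an illustrative sketch: the damping factor d = 0.85, the iteration count, and the dictionary-based graph representation are assumptions, not values from the text.

```python
def weighted_pagerank(links, d=0.85, iters=50):
    """Weighted PageRank sketch.

    links: dict mapping each page to the list of pages it links to.
    Each backlink n of p contributes WPR(n) * W_in(n, p) * W_out(n, p),
    where W_in and W_out normalize p's in/out degree over all pages
    referenced by n, and WPR(p) = (1 - d) + d * (that sum).
    """
    pages = set(links) | {q for outs in links.values() for q in outs}
    in_deg = {p: 0 for p in pages}
    for outs in links.values():
        for q in outs:
            in_deg[q] += 1
    out_deg = {p: len(links.get(p, [])) for p in pages}
    backlinks = {p: [n for n, outs in links.items() if p in outs]
                 for p in pages}
    wpr = {p: 1.0 for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            s = 0.0
            for n in backlinks[p]:
                refs = links[n]  # pages referenced by n
                w_in = in_deg[p] / max(sum(in_deg[q] for q in refs), 1)
                w_out = out_deg[p] / max(sum(out_deg[q] for q in refs), 1)
                s += wpr[n] * w_in * w_out
            new[p] = (1 - d) + d * s
        wpr = new
    return wpr
```

On a small cycle graph, the page with the most in-links accumulates the highest weighted rank, as the formulas intend.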
[5] Srivastava, J., Cooley, R., Deshpande, M., and Tan, P.-N. (2000). Web usage mining: Discovery and applications of usage patterns from web data, SIGKDD Explorations, 1(2), 12-23.
[6] Meo R., Lanzi P., Matera M., Esposito R. (2004). Integrating Web Conceptual Modeling and Web Usage Mining. In Proc. of the WebKDD 2004 workshop on Web Mining and Web Usage Analysis, part of the ACM KDD: Knowledge Discovery and Data Mining Conference, Seattle, WA.
[7] Desikan P. and Srivastava J. (2004). Mining Temporally Evolving Graphs. In Proc. of the WebKDD 2004 workshop on Web Mining and Web Usage Analysis, B. Mobasher, B. Liu, B. Masand, O. Nasraoui, Eds., part of the ACM KDD: Knowledge Discovery and Data Mining Conference, Seattle, WA.
[8] Berendt B., Mobasher B., Spiliopoulou M., and Wiltshire J. (2001). Measuring the accuracy of sessionizers for web usage analysis. In Workshop on Web Mining at the First SIAM International Conference on Data Mining, 7-14.
[9] Srivastava, J., Cooley, R., Deshpande, M., and Tan, P.-N. (2000). Web usage mining: Discovery and applications of usage patterns from web data, SIGKDD Explorations, 1(2), 12-23.
[10] J. Hou and Y. Zhang, Effectively Finding Relevant Web Pages from Linkage Information, IEEE Transactions on Knowledge and Data Engineering, Vol. 15, No. 4, 2003.
[11] R. Kosala and H. Blockeel, Web Mining Research: A Survey, SIGKDD Explorations, Newsletter of the ACM Special Interest Group on Knowledge Discovery and Data Mining, Vol. 2, No. 1, pp. 1-15, 2000.
V. CONCLUSION
In this article we have outlined three different modes of web mining, namely web content mining, web structure mining, and web usage mining. Needless to say, these three approaches are not independent, and any efficient mining of the web requires a judicious combination of information from all three sources. We have presented in this paper the significance of introducing web mining techniques. The development and application of web mining techniques in the context of Web content, usage, and structure data will lead to tangible improvements in many Web applications, from search engines and Web agents to Web analytics and personalization. Future efforts, investigating architectures and algorithms that can
I. INTRODUCTION
As the volume of information on the internet increases day by day, website owners face a challenge in providing proper and relevant information to internet users. Retrieving the required web page efficiently and effectively is becoming a challenge. Whenever users search for relevant pages, they prefer those pages to be at hand, and the sheer bulk of information makes it very difficult to find, extract, filter, or evaluate the relevant parts. This raises the need for techniques that can address these challenges. Web mining can draw on other areas such as databases (DB), information retrieval (IR), natural language processing (NLP), and machine learning, which can be used to discover and analyze useful information from the WWW. Some challenges are:
1) The web is huge. 2) Web pages are semi-structured. 3) Web information tends to be diverse in meaning. 4) The degree of quality of the information extracted. 5) The conclusion of knowledge from the information extracted.
and client applications can quite easily capture data about Web
usage.
Iterative Algorithm
Each page p is assigned two non-negative weights, an authority weight a(p) and a hub weight h(p). At each iteration the weights are updated: a(p) is set to the sum of h(q) over all pages q that link to p, and h(p) is set to the sum of a(q) over all pages q that p links to; the weights are then normalized, and the process repeats until convergence.
Based on this survey of the HITS algorithm, the overall graph with authority and hubness is represented as follows. The HITS algorithm discovered that they share similar roles in terms of their email communication pattern in the data set; our algorithm discovers this structure as well. The estimated rankings are so close to the actual ones that it is difficult to distinguish them.
From this we conclude that, to the extent that the fixed-degree-sequence random graph approximates the web, ranking web pages by their authority scores is the same as ranking them by their in-degrees. Analogous results hold for hub ranking. This indicates that the duality relationship embedded in the mutual reinforcement between hubs and authorities is manifested in their in-degrees and out-degrees.
(2) Uniqueness. If d1 is larger than d2, then the principal eigenvector of L^T L is unique and is quite different from the second principal eigenvector.
(3) Convergence. The convergence of HITS can be rather fast: (1) the starting vector x(0) = (1, ..., 1)^T has a large overlap with the principal eigenvector u1 but little overlap with the other principal eigenvectors uk, k = 2, ..., m, because uk contains negative nodal values; (2) in the iterations to compute u1, the convergence rate depends on the ratio d2/d1 = (1/2)^2 = 1/4, using the fact that in-degrees follow a power-law distribution [10], d_i proportional to 1/i^2. Thus the iteration converges rapidly; typically 5-10 iterations are sufficient.
(4) Web communities. The HITS algorithm has been used to identify multiple web communities using different eigenvectors [22, 16]. The principal eigenvector defines a dominant web community. Each other principal eigenvector uk defines two communities, one with non-negative values {i | uk(i) > 0} and the other with negative values {i | uk(i) < 0}. From the pattern of eigenvectors in our solutions, the positive regions of different eigenvectors overlap substantially; thus the communities of positive regions nest within each other, as do the communities of negative regions. Therefore, we believe this method of identifying multiple communities is less effective, a difficulty that has also been noticed in practical applications. A number of web community discovery algorithms are being developed, e.g., trawling to find bipartite cores, network maximum flow, and graph clustering. One advantage of these methods is that weak communities (topics) can be separated from dominant communities and thus identified. Without explicit community discovery, web pages on weak topics are typically ranked low by HITS (and by in-degree ranking) and are often missed.
pages with high PageRank receive a high rank themselves. If there are no links to a web page, there is no support for that page.
VII. CONCLUSION
Web mining is a powerful technique used to extract information from the past behavior of users. We used selective expansion of the root set and a different way of calculating hub and authority values; as a result we had a very small base set and were able to distill results for only one topic even when a query was ambiguous. Various algorithms are used in web mining to rank the relevant pages. The main focus of web structure mining is on link information, while web usage mining focuses on understanding user behavior, as depicted in web access logs, while interacting with a website. PageRank, Weighted PageRank, and HITS treat all links equally when distributing the rank score. A problem with the PageRank and Weighted PageRank algorithms is that relevant terms may not appear on the pages of authoritative websites, and many prominent pages are not self-descriptive. In the HITS algorithm all links are treated equally, so we considered two problems: some links may be more meaningful than other links. Further, we also observed that selective expansion of the root set is rich in quality, as many pages from the expanded root set topped the hub and authority lists. For future work, there are still many issues to be explored with the HITS algorithm. Since HITS algorithms are not good enough to be applied in mining informative structures, the phenomenon in which authorities converge into densely linked irrelevant pages, called the topic drift problem, remains; this problem is notorious in the area of information retrieval. To address it, we propose some other types of link-analysis-based modification.
REFERENCES
[1] Rekha Jain, Dr. G.N. Purohit, "Page Ranking Algorithms for Web Mining," International Journal of Computer Applications, Vol. 13, Jan 2011.
[2] Cooley, R., Mobasher, B., Srivastava, J., "Web Mining: Information and pattern discovery on the World Wide Web," in Proceedings of the 9th IEEE International Conference on Tools with Artificial Intelligence (ICTAI '97), Newport Beach, CA, 1997.
[3] Pooja Sharma, Pawan Bhadana, "Weighted Page Content Rank for Ordering Web Search Results," International Journal of Engineering Science and Technology, Vol. 2, 2010.
[4] R. Kosala, H. Blockeel, "Web mining research: A survey," ACM SIGKDD Explorations, 2(1):1-15, 2000.
[8] J. Hou and Y. Zhang, "Effectively Finding Relevant Web Pages from Linkage Information," IEEE Transactions on Knowledge and Data Engineering, Vol. 15, No. 4, 2003.
[9] P. Ravi Kumar and Ashutosh Kumar Singh, "Web Structure Mining: Exploring Hyperlinks and Algorithms for Information Retrieval," American Journal of Applied Sciences, 7(6):840-845, 2010.
[10] M.G. da Gomes Jr. and Z. Gong, "Web Structure Mining: An Introduction," Proceedings of the IEEE International Conference on Information Acquisition, 2005.
[11] R. Kosala and H. Blockeel, "Web Mining Research: A Survey," SIGKDD Explorations, Newsletter of the ACM Special Interest Group on Knowledge Discovery and Data Mining, Vol. 2, No. 1, pp. 1-15, 2000.
[12] "HITS Algorithm - Hubs and Authorities on the Internet," Available: http://www.math.cornell.edu/~mec/Winter2009/RalucaRemus/Lecture4/lecture4.html
[13] "HITS," Available: http://en.wikipedia.org/wiki/PageRank
[14] R. Weiss, B. Velez, M. Sheldon, C. Nemprempre, P. Szilagyi, D.K. Gifford, "HyPursuit: A Hierarchical Network Search Engine that Exploits Content-Link Hypertext Clustering," Proceedings of the Seventh ACM Conference on Hypertext, 1996.
[15] M.R. Henzinger, "Hyperlink analysis for the web," IEEE Internet Computing, 5:45-50, 2001.
[16] M. Kessler, "Bibliographic coupling between scientific papers," American Documentation, 14:10-25, 1963.
[17] J.M. Kleinberg, "Authoritative sources in a hyperlinked environment," J. ACM, 48:604-632, 1999.
[18] R. Lempel and S. Moran, "SALSA: The stochastic approach for link-structure analysis and the TKC effect," ACM Trans. Information Systems, 19:131-160, 2001.
[19] S. Chakrabarti, B. Dom, D. Gibson, J. Kleinberg, P. Raghavan, S. Rajagopalan, "Automatic Resource Compilation by Analyzing Hyperlink Structure and Associated Text," Proc. 7th International World Wide Web Conference, 1998.
[20] D. Gibson, J. Kleinberg, and P. Raghavan, "Inferring Web Communities from link topology," in Proc. 9th ACM Conference on Hypertext and Hypermedia (HyperText '98), pages 225-234, Pittsburgh, PA, June 1998.
ABSTRACT
Image segmentation algorithms based on regions of interest (ROI) typically rely on the homogeneity of image intensities. The CT scanner used here is a dedicated research scanner developed with imaging applications in view. A key feature of the work is the use of an empirical system kernel to achieve resolution recovery with a novel region-based method. The method identifies local intensity clusters with a local clustering criterion function defined with respect to a neighborhood center. Reconstruction quality is analyzed quantitatively in terms of bias field correction for intensity inhomogeneity correction, and the method is validated on synthetic images of various imaging modalities. A significant improvement in reconstruction quality can be realized through faster and more accurate visual-quality and quantitative measures, where reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). With the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared to FBP, both in terms of visual quality and quantitative measures. Simulated results show improved tumor bias and variance characteristics with the proposed algorithm.
Keywords: Intensity inhomogeneities, empirical system kernel, bias-variance, iterative algorithms
1. INTRODUCTION
Image segmentation is often an essential step before further image processing of three-dimensional medical images can be done. An object can be segmented based on shape and/or intensity characteristics, and the task can be simplified with initialized parameters that guide accurate segmentation. Semi-automated and interactive methods [1] have been relatively successful, but require varying degrees of human input. Segmentation is often an important step in US B-mode image analysis. We consider the problem of correcting for attenuation-related intensity inhomogeneities, i.e., those that cause a slowly changing (low-frequency) intensity contrast and are not due to speckle. B-mode imaging artifacts include speckle noise, attenuation (absorption and scattering), etc. The statistical analysis and reduction of speckle noise has been studied extensively in the literature [1]-[7]. Other artifacts, particularly those caused by non-uniform beam attenuation within the body that are not accounted for by time gain compensation (TGC), also decrease the image signal-to-noise ratio (SNR). Existing level set methods for image segmentation can be categorized into two major classes: region-based models [4], [10] and edge-based models [3], [7], [8], [12]. Region-based models aim to identify each region of interest by using a certain region descriptor to guide the motion of the active contour. However, it is very difficult to define a region descriptor for images with
K. Lokeswara Reddy2
M.Tech (E.S) student
Department of ECE,
A.I.T.S Rajampet, A.P, India
e -mail: lokes9@gmail.com
2. Methods:
BACKGROUND:
In this section, we review the method proposed by Zhang for estimating the field distortion while simultaneously segmenting an MR image, and provide implementation details on how it has been adapted to work with US images. This method essentially estimates the low (spatial) frequency multiplicative degradation field while at the same time identifying regions of similar intensity inhomogeneity using an MRF-MAP framework. As we will explain in Section III, although it was developed for another imaging modality, under simplified assumptions we can justify using the same approach on displayed US images.
where the log-transformed intensity distortion field appears as an additive term. Segmentation can be considered as a problem of statistical classification, which is to assign every pixel a class label from a label set; a labeling assigns each pixel its corresponding class label. Given the class label, it is assumed that the intensity value at the pixel follows a Gaussian distribution (this assumption will be justified in Section III), with parameters being the mean and the variance of the class, respectively.
3. Proposed Algorithms:
In the first iteration, the search window has not been adaptively altered to match the edge proximity for the image slice, so there is a possibility that a false edge will be included in the Bayesian criterion. To prevent this, an inverse weighted distance transform M, a square matrix, is multiplied with F_i,k. Denoting M_pq as an element of M and any two points on B_i as b_p and b_q, M_pq is defined in Eq. (1).
The local intensities form distinct clusters with distinct cluster centers (because the constants are distinct and the variance of the Gaussian noise is assumed to be relatively small). This local intensity clustering property is used to formulate the proposed method for image segmentation and bias field estimation as follows.
The intensities in a neighborhood can therefore be classified into clusters with centers m_1, ..., m_N, which allows us to apply standard K-means clustering to these local intensities. Specifically, for the intensities I(x) in the neighborhood of a point y, the K-means algorithm is an iterative process that minimizes the clustering criterion [19], which can be written in a continuous form as

    F_y = sum over i of ( integral over the i-th region in the neighborhood of |I(x) - m_i|^2 dx )    (6)

where m_i is the cluster center of the i-th cluster. Since u_i is the membership function of the i-th region, this can be rewritten as

    F_y = sum over i of ( integral of |I(x) - m_i|^2 u_i(x) dx )    (7)
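A discrete analogue of this K-means step can be sketched on scalar intensities. This is an illustrative sketch, not the paper's level-set implementation; the evenly spread initialization is an assumption.

```python
def kmeans_1d(intensities, k, iters=30):
    """Plain K-means on scalar intensities: alternately assign each
    intensity to its nearest cluster center (the membership step) and
    recompute each center as the mean of its members, minimizing the
    discrete analogue of the clustering criterion above.
    """
    lo, hi = min(intensities), max(intensities)
    # Spread initial cluster centers evenly over the intensity range.
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in intensities:
            # Membership: each intensity joins its nearest center.
            nearest = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[nearest].append(x)
        # Update: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```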
The criterion function can be rewritten accordingly, where the noise term is additive zero-mean Gaussian noise. Therefore, the intensities in the set
A three-phase level set formulation of our method follows, and for the four-phase case the membership functions can be defined analogously. For notational simplicity, we denote the level set functions by a vector-valued function, so that the membership functions can be written in terms of it. The energy in (10) can then be converted to a multiphase level set formulation, with the corresponding term given by (16).
Author Profile:
7. REFERENCES
5.Conclusion:
This work presents a variational level set framework
for segmentation and bias correction of images with intensity
inhomogeneities. Based on a generally accepted model of
images with intensity inhomogeneities and a derived local
intensity clustering property, the work defines an energy of
the level set functions that represent a partition of the image
domain and a bias field that accounts for the intensity
inhomogeneity. Segmentation and bias field estimation are
therefore jointly performed by minimizing the proposed
energy functional. The slowly varying property of the bias
field derived from the proposed energy is naturally ensured by
the data term in our variational framework, without the need
to impose an explicit smoothing term on the bias field. The
proposed method is much more robust to initialization than
the piecewise smooth model. Experimental results have
demonstrated superior performance of our method in terms of
accuracy, efficiency, and robustness.
6. ACKNOWLEDGMENTS:
The work was supported by my guide S. Asif Hussain of Annamacharya Institute of Technology & Sciences, Rajampet, India, under research grants of R.P.S., A.I.C.T.E., New Delhi.
MICROCONTROLLER BASED LIFT SYSTEM
Jayshree Sahu*, Dr. Amita Mahor**, Dr. S. K. Sahu***
*NIIST Bhopal, M.P. **NIIST Bhopal, M.P. ***Neelam College of Engg. & Technology, Agra
ABSTRACT - This paper presents a microcontroller-based lift system using the AT89C52 microcontroller chip, based on message scheduling. The scheme essentially belongs to database-based systems, in which a change in one operation is visible to other concurrent operations, and in which the user can program each set by entering a number of series of text, date, time, etc. in the database, to be performed on a priority basis.
INDEX TERMS - Introduction, block diagram, circuit diagram, circuit description, algorithm.
INTRODUCTION - Conventional lift systems are based on an elevated control system, which has a number of disadvantages: a large number of cables, risk factors, complexity, lower intelligence, and uneconomical operation. The modern distributed elevator control system is an intelligent, economical system that reduces all of the above disadvantages. Three recent innovations include permanent-magnet motors, machine-room-less designs, rail-mounted gearless machines, and microprocessor controls.
METHODOLOGY
Based on plate monitoring and control. To understand this with an example, consider a lift serving Floor 1 and Floor 2.
BLOCK DIAGRAM
Sensor switches -> microcontroller AT89C52 -> motor driver chip L293D -> motor (with power supply).
WORKING PRINCIPLE
Based on a sensor and switch polling method, in which observation of changes made on the switches is done by sensors through message scheduling.
COMPONENTS USED
- Microcontroller AT89C52: 5.5 V, 16 MHz.
- Motor driver chip L293D: interfaces between the lift and the microcontroller; 16-pin IC, runs on 5 V DC.
- Sensors and switches: reed type, 0.25 W power, 0-16 AT.
- Crystal oscillator: 11.059 MHz, for serial communication.
- DC motor: 1 W. Rectifier: 1N4007.
PCB LAYOUT
CIRCUIT OPERATION
RESULT
ALGORITHM
CONCLUSION
FUTURE SCOPE
REFERENCES
- IEEE Expo 2011 - Internal Elevator and Escalator Expo.
- Microchip PIC C Tutorial.
- Spackling Tutorial.
- ARM Cortex-A Series - high performance for open operating systems.
- Security system based on space vector modulation signal.
- Todd D. Morton, Embedded Microcontrollers.
- Microprocessors and Interfacing (Programming & Hardware) - Douglas V. Hall.
- Vedam Subrahmanyam, Power Electronics.
- Alberto Sangiovanni-Vincentelli, IEEE Microelectron., May 2003, pp. 8-18.
- Chris Herring, IEEE Microelectron., Nov 2000, pp. 45-51.
- http://www.electronics4u.com
- http://www.ttransenergic.co.au
- http://www.atmel.com

BIBLIOGRAPHY
- The 8051 Microcontroller and Embedded Systems, by Muhammad Ali Mazidi, Rolin D. McKinlay, and Danny Causey. Pearson Education, 2nd edition.
- Advanced microcontroller applications, by Janice Gillispie Mazidi. Pearson Education.
Hurieh Khalajzadeh
Intelligent Systems Laboratory
(ISLAB), Faculty of Electrical
& Computer Engineering
K.N. Toosi University of
Technology, Tehran, Iran
h_khalajzadeh@ee.kntu.ac.ir
Mohammad Mansouri
Intelligent Systems Laboratory
(ISLAB), Faculty of Electrical &
Computer Engineering
K.N. Toosi University of
Technology, Tehran, Iran
mohammad.mansouri@ee.kntu.ac.ir
Abstract
The style of people's handwritten signatures is a biometric feature used in person authentication. In this paper, an offline signature verification scheme based on a Convolutional Neural Network (CNN) is proposed. The CNN addresses the problem of feature extraction without prior knowledge of the data, while the classification task is performed by a multilayer perceptron (MLP) network. This method is not only capable of extracting features relevant to a given signature, but is also robust to signature location changes and scale variations when compared with classical methods. The proposed method is evaluated on a dataset of Persian signatures gathered originally from 22 people. The simulation results reveal the efficiency of the suggested algorithm.
1. Introduction
There is an increasing interest in trustworthy identity verification. Biometric authentication is a more trustworthy alternative to password-based security systems, and is gaining popularity as it is relatively hard to forget, steal, or guess. Several biometric features have been studied and proved useful, including biological characteristics such as fingerprint, face, iris, and retina pattern, and behavioral traits such as signature and speech. Compared with conventional methods of identification such as PIN codes, passwords, magnet, or smart
Mohammad Teshnehlab
Intelligent Systems Laboratory
(ISLAB), Faculty of Electrical
& Computer Engineering
K.N. Toosi University of
Technology, Tehran, Iran
teshnehlab@eetd.kntu.ac.ir
Each unit j first computes a weighted sum of its inputs y_i:

    v_j = sum over i of (w_ij * y_i) + b_j        (1)

and then passes it through a scaled hyperbolic tangent activation:

    y_j = A tanh(S v_j)        (2)
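The unit computation in equations (1) and (2) can be sketched as follows. The default values A = 1.7159 and S = 2/3 are the ones commonly used in LeNet-style networks, assumed here rather than taken from the text.

```python
import math

def unit_forward(y_in, w, b, A=1.7159, S=2.0 / 3.0):
    """Forward pass of one unit, per equations (1)-(2):
    the weighted sum v_j = sum_i w_ij * y_i + b_j, followed by the
    scaled hyperbolic tangent activation y_j = A * tanh(S * v_j).
    The output is bounded in magnitude by A.
    """
    v = sum(wi * yi for wi, yi in zip(w, y_in)) + b
    return A * math.tanh(S * v)
```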
3. Proposed Method for Signature Verification
The subsampling layer S4 acts as S2 and reduces the size of the feature maps to 5x5. The last convolutional layer C5 differs from C3 as follows: each of its 120 feature maps is connected to a receptive field on all feature maps of S4, and since the feature maps of S4 are of size 5x5, the size of the feature maps of C5 is 1x1. Thus C5 is the same as a fully connected layer. The fully connected layer F6 contains 84 units connected to the 120 units of C5. All the units of the layers up to F6 have a sigmoid activation function of the type A tanh(S v).
3.2. Classification
4. Data
In this research, 176 original Persian signatures from 22 people are used. For each person, 8 signatures are considered for training, testing, and validation of the algorithm. Some signature images used in this paper are shown in Fig. 3. The size of the images is 640x480.
Fig.: Training, validation, and test error curves (log scale) over 1000 epochs, with the best validation point marked.
6. Conclusions
In this study a general CNN architecture is applied to the task of Persian signature verification. The style of people's handwritten signatures is a biometric feature used in person authentication. CNNs may be expected to achieve significantly better results than standard feed-forward networks for many tasks. The key characteristic of weight sharing is appropriate when input data is scarce; in this paper, despite the fact that the input data are small in quantity and large in dimensionality, good results are obtained. Furthermore, CNNs are invariant to distortions and simple geometric transformations like translation, scaling, rotation, and squeezing. Another characteristic, more important than the others for the task of signature verification, is the ability of CNNs to extract features from input data, which solves the preprocessing problem of the offline signature verification task. The proposed method is not only capable of extracting features relevant to a given signature, but is also robust to signature location changes and scale variations when compared with classical methods. The simulation results reveal the efficiency of the suggested algorithm.
7. References
[1] Jonghyon Yi, Chulhan Lee, and Jaihie Kim, "Online signature verification using temporal shift estimated by the phase of Gabor filter," IEEE Transactions on Signal Processing, vol. 53, no. 2, February 2005, pp. 776-783.
[2] Elaheh Dehghani and Mohsen Ebrahimi Moghaddam, "On-line Signature Verification Using ANFIS," Proceedings of the 6th International Symposium on
Email: ankitadhyani.84@gmail.com, dgupta@amity.edu, ssani@aiit.amity.edu
ABSTRACT:
The objective of this paper is to recognize different textures in an image, particularly a satellite
image where properties of the image are not distinctly identified. Texture classification involves
determining the texture category of an observed image. The present study on Image Processing & Texture Classification was undertaken with a view to developing a comparative study of texture classification methods. The algorithms implemented herein classify the different parts of
the image into distinct classes, each representing one property, which is different from the other
parts of the image. The aim is to produce a classification map of the input image in which each uniformly textured region is identified with its respective texture class. The classification is done on the basis of the texture of the image, which remains the same throughout any region with a consistent property.
The classified areas can be assigned different colours, each representing one texture of the image.
In order to accomplish this, prior knowledge of the classes to be recognized is needed; texture features are extracted, and classical pattern classification techniques are then used to perform the classification.
Examples where texture classification was applied as the appropriate texture processing method
include the classification of regions in satellite images into categories of land use. Here we have
implemented two methods, namely the Cross Diagonal Texture Matrix (CDTM) and the Grey-Level Co-occurrence Matrix (GLCM), both based on properties of the texture spectrum (TS) domain, for the satellite images. In CDTM, the texture unit is split into two separable texture units, namely the Cross texture unit and the Diagonal texture unit, of four elements each; the four elements of each unit occur along the cross and diagonal directions respectively. For each pixel, the CDTM has been evaluated using various combinations of cross and diagonal texture units. The GLCM, on the other hand, is a tabulation of the occurrence of different combinations of pixel brightness values (grey levels) in an image; basically, it expresses the spatial relationship between the grey level of a pixel and the grey levels of its neighboring pixels. The study focuses on the extraction of entropy, energy, inertia, and correlation features over several window sizes, calculated from the GLCM. A maximum likelihood supervised classifier is used for classification. While
applying the algorithms on the images, we characterize our processed image by its texture
spectrum. In this paper we deal with the extraction of a micro texture unit over a 7×7 window to represent the local texture information of a given pixel and its neighborhood. The results show that increasing the window size further made no significant contribution to improving the classification accuracy; a window size of 7×7 pixels is the optimal size for classification. The texture features of the GLCM and the CDTM have been used for comparison in discriminating natural texture images in experiments based on minimum distance. Experimental results reveal that the features of the GLCM are superior to those given by the CDTM method for texture classification.
1. IMAGE PROCESSING
In computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image.
3. OBJECTIVE
The objective is to recognize different textures in an image, particularly a satellite image
wherein the properties of the image are not distinctly identified.
The algorithms implemented herein classify the different parts of the image into distinct
classes, each representing one property that is different from the other parts of the image.
The classification is done on the basis of the texture of the image; the texture remains the same throughout a region that has a consistent property. The classified areas can be assigned
different colours, each representing one texture of the image.
Some applications of image processing:
Computer vision
Face detection
Feature detection
Remote sensing
4. ADVANTAGES OF IMAGE PROCESSING
The Image Processing software will help Security personnel to use processed Images of
the terrain, which are much clearer than the images taken by satellites. These images give
a clear picture of the terrain by distinguishing the land region from the water bodies and
other geographical regions on the earth, such as desert, forest, and hills. Thus classification of satellite images has the following attributes:
The software would help in discriminating the features of an unknown image taken from a satellite.
It helps in extracting features of an image that are not visible to the naked eye.
It helps in locating the terrain at the time of war.
6.2. METHODOLOGY
6.2.1 Texture Spectrum
The basic concept of the texture spectrum method, introduced by He & Wang (1990, 1991a, and 1991b), is that texture can be extracted from a 3×3 neighborhood window, which constitutes the smallest unit, called the texture unit. The 3×3 window comprises nine elements, V = [V1, V2, V3, V4, V0, V5, V6, V7, V8], where V0 is the central pixel value and V1, ..., V8 are the values of the neighboring pixels within the window (Figure 3.5). The corresponding texture unit for this window is then a set containing the eight elements surrounding the central pixel, TU = (E1, E2, E3, E4, E5, E6, E7, E8), where Ei is defined as

$E_i = 0$ if $V_i < V_0$, $E_i = 1$ if $V_i = V_0$, $E_i = 2$ if $V_i > V_0$,

and the element Ei occupies the position of the corresponding pixel Vi. Since each of the eight elements of the texture unit takes one of three values (0, 1, or 2), the texture unit value, TU, can range from 0 to 6560 ($3^8$, i.e., 6561 possible values). The texture units are labeled using the relation

$N_{TU} = \sum_{i=1}^{8} E_i \times 3^{i-1}$,

where NTU is the texture unit value. The occurrence distribution of the texture units is called the texture spectrum (TS). Each texture unit represents the local texture information of a 3×3 pixel neighborhood, and hence the statistics of all the texture units in an image represent the complete texture aspect of the entire satellite image. The texture spectrum has been used in texture characterization and classification, and the computational time depends on the number of texture units identified in the image.[3]
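A minimal sketch of the texture-unit number for a single 3×3 window; the clockwise ordering of the eight neighbours is an assumption, since the labelling relation only fixes that each Ei is weighted by $3^{i-1}$:

```python
# Texture unit number N_TU for one 3x3 window (He & Wang):
# E_i = 0, 1 or 2 as V_i is below, equal to, or above the centre V0,
# and N_TU = sum_i E_i * 3^(i-1), giving a value in 0..6560.
def texture_unit_number(window):
    v0 = window[1][1]
    # Neighbour ordering is an assumption (clockwise from top-left).
    neighbours = [window[0][0], window[0][1], window[0][2], window[1][2],
                  window[2][2], window[2][1], window[2][0], window[1][0]]
    e = [0 if v < v0 else (1 if v == v0 else 2) for v in neighbours]
    return sum(ei * 3 ** i for i, ei in enumerate(e))

w = [[10, 20, 30],
     [20, 20, 20],
     [30, 20, 10]]
ntu = texture_unit_number(w)  # 3936, within 0..6560
```

The texture spectrum is then simply the histogram of these values over every 3×3 window in the image.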
6.2.2 Cross Diagonal Texture Matrix
Al-Janobi (2001) proposed the cross-diagonal texture matrix technique, in which the eight neighboring pixels of a 3×3 window are broken up into two groups of four elements each, at the cross and diagonal positions. These groups are named the cross texture unit (CTU) and the diagonal texture unit (DTU), respectively. Each of the four elements of these units is assigned a value (0, 1, or 2) depending on the grey-level difference between the corresponding pixel and the central pixel of the 3×3 window. These texture units can therefore take values from 0 to 80 ($3^4$, i.e., 81 possible values).[1]
The cross texture unit (CTU) and diagonal texture unit (DTU) numbers can be defined as

$N_{CTU} = \sum_{i=1}^{4} E_{ci} \times 3^{i-1}$ and $N_{DTU} = \sum_{i=1}^{4} E_{di} \times 3^{i-1}$,

where $N_{CTU}$ and $N_{DTU}$ are the cross texture and diagonal texture unit numbers, respectively, and $E_{ci}$ and $E_{di}$ are the $i$th elements of the corresponding texture units.[1]
6.2.3 Modified Texture Filter
In the proposed method, NCTU and NDTU values, ranging from 0 to 80, have been evaluated. For each type of texture unit there are four possible orderings, which give four different values of CTU and DTU. Finally, a cross-diagonal texture matrix (CDTM) value for each pixel position is evaluated from the corresponding possible CTU and DTU values. In the present work, several techniques for estimating CDTM values have been undertaken, which are listed below.
where $N^i_{CTU}$ and $N^j_{DTU}$ denote the ordering ways used to evaluate $N_{CTU}$ and $N_{DTU}$. After obtaining the CDTM values of the 3×3 windows over the entire image, the occurrence frequency of each CDTM value is recorded, and each CDTM value is assigned to the respective pixel location. Based on the range of the CDTM values, we then divide them into different classes and give a specific colour to each class, obtaining the resultant CDTM classified image. The same procedure has been followed with 7×7 windows. The techniques described above have been applied to several satellite images spiked with induced noise of different percentages.
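The cross and diagonal unit numbers for one window can be sketched as follows; the element ordering within each four-element unit is an assumption, standing in for one of the four possible orderings mentioned above:

```python
# Cross unit: the 4 edge neighbours; diagonal unit: the 4 corner
# neighbours. Each element is coded 0/1/2 against the centre pixel,
# so N_CTU and N_DTU fall in 0..80 (3^4 = 81 combinations).
def code(v, v0):
    return 0 if v < v0 else (1 if v == v0 else 2)

def unit_number(values, v0):
    # One of the four possible orderings; the choice is an assumption.
    return sum(code(v, v0) * 3 ** i for i, v in enumerate(values))

def ctu_dtu(window):
    v0 = window[1][1]
    cross = [window[0][1], window[1][0], window[1][2], window[2][1]]
    diagonal = [window[0][0], window[0][2], window[2][0], window[2][2]]
    return unit_number(cross, v0), unit_number(diagonal, v0)

w = [[10, 20, 30],
     [20, 20, 20],
     [30, 20, 10]]
nctu, ndtu = ctu_dtu(w)  # (40, 24)
```

A CDTM entry is then accumulated at position (NCTU, NDTU) of an 81×81 matrix for every window in the image.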
6.3 Flowchart: CDTM
Therefore, a general GLCM texture measure depends upon matrix size and directionality, and known measures such as contrast, entropy, energy, angular second moment (ASM), and correlation are used.[5]
7.1 Introduction
Grey-Level Co-occurrence Matrix texture measurements were proposed by Haralick in the 1970s. Their use improves the classification of satellite images.
This study concerns some of the most commonly used texture measures, which are derived
from the Grey Level Co-occurrence Matrix (GLCM). This involves:
Defining a Grey Level Co-occurrence Matrix (GLCM)
Creating a GLCM
Using it to calculate texture
Understanding how calculations are used to build up a texture image
Textures in images quantify:
Grey level differences (contrast)
Defined size of area where change occurs (window)
Directionality and its slope
Definition: The GLCM is a tabulation of how often different combinations of pixel brightness values (grey levels) occur in an image.
Properties of the GLCM:
1. It is square.
2. It has the same number of rows and columns as the quantization level of the image.
3. It is symmetrical around the diagonal.
The GLCM is used for a series of "second order" texture calculations. Second order measures consider the relationship between groups of two (usually neighboring) pixels in the original image.
7.2 Steps in creating a symmetrical normalized GLCM:
1. Quantize the image data to the required number of grey levels.
2. Create the framework matrix, with one row and one column for each grey level.
3. Fill the matrix by counting each reference/neighbour pixel combination for the chosen offset.
4. Add the matrix to its transpose to make it symmetrical.
5. Normalize the matrix by dividing each entry by the total number of pixel pairs, so that the entries become probabilities.
Edge of image problems: each cell in a window must sit over an occupied image cell. This means that the centre pixel of the window cannot be an edge pixel of the image. If a window has dimension N × N, a strip (N-1)/2 pixels wide around the image will remain unoccupied. The usual way of handling this is to fill in these edge pixels with the nearest texture calculation.
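The construction of a symmetrical, normalized GLCM can be sketched as follows; the East-West neighbour offset (0, 1) is an assumed example, and out-of-image neighbours are simply skipped:

```python
import numpy as np

# Symmetrical, normalized GLCM for one offset; 'levels' is the
# quantization level of the image (number of rows and columns).
def glcm(image, levels, offset=(0, 1)):
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                m[image[r, c], image[rr, cc]] += 1  # count the pair
    m = m + m.T         # add the transpose: symmetrical
    return m / m.sum()  # normalize: entries become probabilities

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
p = glcm(img, levels=3)  # sums to 1 and equals its own transpose
```

Adding the transpose counts each pair in both directions, which is what makes the matrix symmetrical around the diagonal.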
Correlation:

$\mathrm{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, P(i,j)}{\sigma_i\, \sigma_j}$,

where i and j are coordinates of the co-occurrence matrix space, P(i,j) is the element of the co-occurrence matrix at coordinates (i, j), and $\mu_i$, $\mu_j$, $\sigma_i$, $\sigma_j$ are the GLCM means and standard deviations.[5]
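Given a normalized GLCM, the correlation measure can be sketched as below; computing the means and standard deviations from the row and column marginals follows the standard Haralick form, which is assumed here since the original formula image did not survive extraction:

```python
import numpy as np

# GLCM correlation: sum over (i,j) of (i - mu_i)(j - mu_j) P(i,j)
# divided by sigma_i * sigma_j, with mu and sigma taken from the
# marginals of the normalized matrix P.
def glcm_correlation(p):
    i = np.arange(p.shape[0], dtype=float)
    mu_i = (i[:, None] * p).sum()
    mu_j = (i[None, :] * p).sum()
    sigma_i = np.sqrt((((i[:, None] - mu_i) ** 2) * p).sum())
    sigma_j = np.sqrt((((i[None, :] - mu_j) ** 2) * p).sum())
    cov = (((i[:, None] - mu_i) * (i[None, :] - mu_j)) * p).sum()
    return cov / (sigma_i * sigma_j)

# A symmetrical GLCM concentrated on the diagonal: correlation is high.
p = np.array([[0.4, 0.1, 0.0],
              [0.1, 0.2, 0.0],
              [0.0, 0.0, 0.2]])
c = glcm_correlation(p)
```

Mass on the diagonal means neighbouring pixels tend to share a grey level, so the correlation approaches 1.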
7.5 Implementation of GLCM
A stand-alone application program for GLCM texture measurement and texture image creation was implemented in this study. In this program, general graphic images in formats such as JPG, TIFF, and BMP can be used as input data, and the user sets two texture parameters, window size and direction, in the main frame. The grey-value relationships in the target image are transformed into the co-occurrence matrix space for a given window size (3×3, 5×5, 7×7, or 11×11), with the neighboring pixels taken along one of four directions, East-West (0°), North-East (45°), North-South (90°), or North-West (135°), or omni-directionally, and computed in the co-occurrence matrix space. Among them, the texture image is obtained as the resultant GLCM classified image.[5]
7.6 Flowchart: GLCM
8. COMPARISON BETWEEN CDTM AND GLCM
Cross-Diagonal Texture Matrix (CDTM)
CONCLUSION
Most previous studies of second-order texture analysis have been directed toward the improvement of classification accuracy, with supervised or unsupervised classification methods showing high accuracy [7]. The scope of this study is somewhat different from previous works: an application program for texture measures based on CDTM and GLCM is newly implemented. Using this program, CDTM- and GLCM-based texture images at different quantization levels, window sizes, and texture types are created from the high-resolution satellite image of the terrain. In applying feature characterization to texture measures, texture images are helpful for detecting shadow zones, classifying building types, and distinguishing the land, water, forest, and desert regions from one another, aspects which are not fully analyzed in this study.
In this paper we compare two different image texture classification techniques based on feature extraction by first- and higher-order statistical methods applied to our images. The extracted features are used for unsupervised pixel classification with the CDTM and GLCM algorithms to obtain the different classes in the image [4]. From the results obtained with 3×3, 5×5, and 7×7 windows on several satellite images corrupted with different percentages of induced noise, it is found that the 7×7 windows are comparatively more effective in removing the noise than the 3×3 and 5×5 texture windows. Another very important advantage of the proposed technique is the substantial reduction in the computational time involved when using the CDTM method. Moreover:
The algorithms work well for distantly captured images such as satellite images.
The algorithms can successfully recognize distinct regions in an image on the basis of the textures extracted.
When the input data to an algorithm are too large to be processed and are suspected to be notoriously redundant (much data, but not much information), the input data are transformed into a reduced representation set of features.
The system helps in simplifying the amount of resources required to describe a large set of data accurately.
The extracted features are used for unsupervised pixel classification to obtain the different classes in the image before using the algorithm. Two methods have been tested, with very heterogeneous results [8]. The hypotheses taken into account for the textural analysis methods are currently being modified to justify them more accurately, especially concerning the number of classes and the size of the analysis window.
Another five parameters were calculated from the grey-level co-occurrence matrix (GLCM). Linear discriminant analysis was applied to sets of up to five parameters and the performances were then assessed. The most relevant individual parameter was the contrast (con) from the GLCM algorithm.[2]
This paper presents a new texture analysis method incorporating the properties of both the grey-level co-occurrence matrix (GLCM) and cross-diagonal texture matrix (CDTM) methods. The co-occurrence features extracted from the cross-diagonal texture matrix provide complete texture information about an image, and the performance of these features in discriminating the texture aspects of pictorial images has been evaluated. The textural features from the GLCM and CDTM have been used for comparison in discriminating some of the satellite images. Based on the resultant classified images of the terrain, it is observed that the features of the classified image in GLCM are clearer and more vivid than those in CDTM.
Although the GLCM approach is much less computationally intensive than the CDTM, it
nonetheless requires massive amounts of calculation. Most of this computation time is
spent in stepping through the input image and compiling the matrices themselves.
Therefore, if the calculation time for these matrices could be reduced, the GLCM
technique would become more practical.
REFERENCES
[1] Abdulrahman A. Al-Janobi and AmarNishad M. Thottam, Testing and Evaluation of the Cross-Diagonal Texture Matrix Method.
[2] Alvarenga AV, Pereira WC, Infantosi AF, Azevedo CM., 2007, Complexity curve and grey level co-occurrence matrix in the texture evaluation of breast tumor on ultrasound images.
[3] Amit K. Bhattacharya, P. K. Shrivastava and Anil Bhagat, 2001, A Modified Texture Filtering Technique for Satellite Images.
[4] F. Cointault, L. Journaux, M.-F. Destain, and P. Gouton (France), 2008, Wheat Ear Detection by Textural Analysis for Improving the Manual Countings.
[5] Kiwon Lee, So Hee Jeon and Byung-Doo Kwon, Urban Feature Characterization using High-Resolution Satellite Imagery: Texture Analysis Approach.
[6] M. Tuceryan and A. K. Jain, "Texture Analysis," in The Handbook of Pattern Recognition and Computer Vision (2nd Edition), C. H. Chen, L. F. Pau, P. S. P. Wang (eds.), pp. 207-248, World Scientific Publishing Co., 1998.
[7] Supervised and Unsupervised Land Use Classification.
[8] Varsha Turkar and Y. S. Rao, Supervised and Unsupervised Classification of PolSAR Images from SIR-C and ALOS/PALSAR Using PolSARPro.