A
PROJECT REPORT
ON
SIGNATURE VERIFICATION

SUBMITTED TO
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
BHOPAL (M.P.)
In partial fulfillment of the requirement for the award of the
Degree of
BACHELOR OF ENGINEERING
IN
INFORMATION TECHNOLOGY

SESSION
2010-2011

SUBMITTED BY

Gajendra Singh Rajpoot

Vinod Kumar Bharkundiya

UNDER THE GUIDANCE OF

PROF. U.Dutta

DEPARTMENT OF CS/IT
MAHARANA PRATAP COLLEGE OF TECHNOLOGY
GWALIOR – 474006 (M.P.)

MAHARANA PRATAP COLLEGE OF
TECHNOLOGY, GWALIOR – 474006

DEPARTMENT OF INFORMATION TECHNOLOGY

2010-2011
CERTIFICATE

This is to certify that the project entitled “SIGNATURE VERIFICATION”, which is being submitted by Gajendra Singh Rajpoot and Vinod Kumar Bharkundiya in partial fulfillment for the award of the Degree of Bachelor of Engineering in Information Technology of Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal (M.P.), is a record of the students' own work carried out by them under our guidance and supervision.
To the best of our knowledge, the matter presented in this project has not been submitted for the award of any other diploma or degree.

HOD Principal Project Guide


Prof. Arvind Sharma Dr. V.M. Sahai Prof. U. Dutta
MPCT, Gwalior (M.P.)

ACKNOWLEDGEMENT

The feeling of gratitude, when expressed in words, is only half acknowledged, and it is with a deep sense of gratitude and indebtedness that we acknowledge the able guidance we received. Our software is "SIGNATURE VERIFICATION".

We are most fortunate to have had Prof. U. Dutta as our esteemed project guide. There are many reasons for saying this, but the most important one is the patience he exhibited over the year in dealing with the fundamental issues that kept coming up and that had a strong impact on our thinking. On the completion of this work, it is our proud privilege to express our deep sense of gratitude and indebtedness towards him for his guidance, unfailing support, stimulating discussions and constant encouragement, not only during this dissertation work but also during our entire period of association with him. We offer our most humble and profound indebtedness for our project.

We thankfully acknowledge our Head of the Department and the faculty members of M.P.C.T. who have taught us various courses during the academic program.

Gajendra Singh Rajpoot


Vinod Kumar Bharkundiya

Candidate’s Declaration

We, Gajendra Singh Rajpoot and Vinod Kumar Bharkundiya, students of B.E. Information Technology, final year, hereby declare that the project entitled “SIGNATURE VERIFICATION”, which is being submitted to the Department of Information Technology, is our authentic work, carried out for submission as the Major Project during the 8th semester.
We declare that the work has not been submitted in part or in full to any other university or institute for the award of any degree.

Student's Name Roll No.

Gajendra Singh Rajpoot 0903it061018

Vinod Kumar Bharkundiya 0903it071060

Verification

It is verified that the above declaration made by the candidates is true to the best of our knowledge.

Prof. Unmukh Dutta Prof. Arvind Sharma


Director Head of Department
Computer Science & Engineering

INDEX

Introduction

 Preprocessing
• Gray Image Conversion
• Binary Image Conversion
• Noise Reduction
• Width Normalization
• Skeletonization
 Feature Extraction
• Global Features
o Image area.
o Vertical center of signature
o Horizontal center of the signature.
o Maximum vertical projection.
o Maximum horizontal projection.
o Vertical projection peaks.
o Horizontal projection peaks.
o Number of edge points.
o Number of cross points.
• Grid Information Feature
 The Signature Database
 Classification
 The Training Phase
 Implementation in JAVA
 Testing Phase & Result
• Verification scenario
• The recognition scenario

 Conclusion

INTRODUCTION

As available computing power increases and computer algorithms become smarter, tasks that a few years ago seemed completely unfeasible now come back into focus. This partly explains why a considerable amount of research effort has recently been devoted to designing algorithms and techniques for problems like human handwritten signature recognition and verification.

A signature recognition and verification system (SRVS) is a system capable of efficiently addressing two individual but strongly related tasks: (a) identification of the signature owner, and (b) deciding whether the signature is genuine or forged. Depending on the actual needs of the problem at hand, SRVSs are often categorized into two major classes: on-line SRVSs and off-line SRVSs. While for systems belonging to the latter class only digitized signature images are needed, systems in the former class also require information about the way the human hand creates the signature, such as hand speed and pressure measurements acquired from special peripheral units.

Several related systems have been reported: an SRVS based on global and grid features; a technique based on thickened templates that can be utilized as an initial phase of an SRVS in order to reject signatures that are completely unmatched; a signature retrieval and identification system based on geometric and topologic features; an SRVS in which a directional probability density function is used in conjunction with back-propagation-trained neural networks; and a system that used multiple neural networks supplied by three sets of global features in combination with a neural network classifier. However, the experimental results of the last system were based on a small number of samples.

In this report, a novel approach for off-line signature recognition and verification is proposed. The presented system is based on two powerful feature sets in combination with a multiple-stage neural-network-based classifier (Fig. 1).

The novelty of the system lies mainly in the structure of the classifier and the way it is used. The neural network classifier is arranged in two stages.

The ability to easily add or remove the signatures of new or obsolete owners to or from its database must be inherent. The approach toward this goal is to arrange the neural network classifier in a one-class-one-network scheme, that is, an individual network for each signature owner. Each time signatures from a new owner are added to the SRVS database, only a small, fixed-size neural-network-based classifier must be trained.

Moreover, to further overcome training difficulties stemming from the feature set size, the proposed feature set is divided into individual feature groups of different physical meaning.

For each of the resulting feature groups, an individual multilayer perceptron neural network is implemented. These small, fixed-size neural networks for each signature owner constitute the first stage of the classifier. It is the task of the second-stage classifier, a radial basis function (RBF) neural network, to combine the results of the first stage and make the final decision of whether the signature presented to the system belongs to the candidate owner or not.

The experimental results confirm the effectiveness of the proposed structure and show its ability to yield high recognition and verification rates.

2. PREPROCESSING

The preprocessing stage is divided into four different parts: noise reduction, data area cropping, width normalization and signature skeletonization.

2.1 Noise Reduction


Before any further processing takes place, a noise reduction filter is applied to the binary scanned image. The goal is to eliminate single white pixels on a black background and single black pixels on a white background. In order to accomplish this, we apply a 3 × 3 mask to the image with a simple decision rule: if the number of the 8 neighbors of a pixel that have the same color as the central pixel is less than two, we reverse the color of the central pixel.
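As a minimal sketch of this rule (an illustration only, assuming the same pixel convention used by the Preprocessing class later in this report: 0 for black foreground, -1 for white background, stored row-major), the filter could be written as a helper method:

static int[] removeNoise(int[] pixels, int w, int h)
{
    // copy so that decisions are based on the original neighborhood values
    int[] out = pixels.clone();
    for(int y = 1; y < h - 1; y++)
    {
        for(int x = 1; x < w - 1; x++)
        {
            int centre = pixels[y * w + x];
            int same = 0;
            // count the 8 neighbors that share the central pixel's color
            for(int dy = -1; dy <= 1; dy++)
                for(int dx = -1; dx <= 1; dx++)
                    if((dx != 0 || dy != 0) && pixels[(y + dy) * w + x + dx] == centre)
                        same++;
            // fewer than two same-colored neighbors: treat the pixel as noise
            if(same < 2)
                out[y * w + x] = (centre == 0) ? -1 : 0;
        }
    }
    return out;
}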

2.2 Data Area Cropping
The signature area is separated from the background by using the well-known segmentation methods of vertical and horizontal projection. Thus, the white space surrounding the signature is discarded.
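As a sketch of this step (a hypothetical helper, not part of the code listed in Section 7, assuming 0 marks foreground and -1 background), the bounding box of the signature can be found from the projections and the image cropped to it:

static int[] cropToSignature(int[] pixels, int w, int h)
{
    // find the bounding box of black (0) pixels
    int left = w, right = -1, top = h, bottom = -1;
    for(int y = 0; y < h; y++)
        for(int x = 0; x < w; x++)
            if(pixels[y * w + x] == 0)
            {
                if(x < left) left = x;
                if(x > right) right = x;
                if(y < top) top = y;
                if(y > bottom) bottom = y;
            }
    if(right < left) return pixels;            // empty image: nothing to crop
    int cw = right - left + 1, ch = bottom - top + 1;
    int[] cropped = new int[cw * ch];
    for(int y = 0; y < ch; y++)
        for(int x = 0; x < cw; x++)
            cropped[y * cw + x] = pixels[(top + y) * w + left + x];
    return cropped;
}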

2.3 Width Normalization

The image size is adjusted so that the width reaches a default value while the height-to-width ratio remains unchanged.
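A sketch of width normalization using the same AWT scaling call already used in the MProj class; the default width of 200 pixels is an assumed value for illustration, since the report does not state one:

// Hypothetical width normalization helper (it could sit in the Preprocessing class,
// which already imports java.awt.*): scale to an assumed default width while
// keeping the height-to-width ratio unchanged.
static java.awt.Image normalizeWidth(java.awt.Image src)
{
    final int DEFAULT_WIDTH = 200;                  // assumed default width
    int w = src.getWidth(null);
    int h = src.getHeight(null);
    int newHeight = (int) Math.round((double) h * DEFAULT_WIDTH / w);
    return src.getScaledInstance(DEFAULT_WIDTH, newHeight, java.awt.Image.SCALE_SMOOTH);
}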

2.4 Skeletonization
A simplified version of the skeletonization technique is used. The simplified algorithm used here consists of the following three steps:
Step 1:
Mark all the points of the signature that are candidates for removal (black pixels that have at least one white 8-neighbor and at least two black 8-neighbors).

Step 2:

Examine the marked points one by one, following the contour lines of the signature image, and remove those whose removal will not cause a break in the resulting pattern.

Step 3:

If at least one point was deleted, go back to Step 1 and repeat the process once more. Fig. 2 shows an example of this skeletonization technique.
Skeletonization makes the extracted features invariant to image characteristics like the quality of the pen and the paper the signer used, and the digitizing method and quality.

Figure 2. Example of the skeletonization algorithm.

3. Feature Extraction

The choice of a powerful set of features is crucial in optical recognition systems. The features used must be suitable for the application and for the applied classifier. In this system, the features used are categorized as global features and grid information features.

While global features provide information about specific cases concerning the structure of the signature, grid information and texture features are intended to provide overall signature appearance information at two different levels of detail. For the grid information features, the image is segmented into 96 rectangular regions, and only the area (the number of signature points) in each region is utilized in order to form the grid information feature group. For the texture feature group to be formed, a coarser segmentation scheme is adopted: the signature image is segmented into only six rectangular areas, while, for each area, information about the transitions of black and white pixels in the four different directions is used.

3.1. Global features


 Image area.
The number of black (foreground) pixels in the image. In skeletonized
signature images, it represents a measure of the density of the signature traces.

 Vertical center of the signature.


The vertical center Cy is given by
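The formula image is missing from the report; a common definition (an assumption here), using the horizontal projection $P_h(y)$ (the number of black pixels in row $y$), is:

C_y = \frac{\sum_{y=1}^{H} y \, P_h(y)}{\sum_{y=1}^{H} P_h(y)}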

 Horizontal center of the signature.

The horizontal center Cx is given by
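Again the equation image is missing; under the same assumption, with the vertical projection $P_v(x)$ (the number of black pixels in column $x$):

C_x = \frac{\sum_{x=1}^{W} x \, P_v(x)}{\sum_{x=1}^{W} P_v(x)}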

 Maximum vertical projection.

The vertical projection of the skeletonized signature image is calculated.
The highest value of the projection histogram is taken as the maximum vertical
projection.

 Maximum horizontal projection.

As above, the horizontal projection histogram is calculated and the highest


value of it is considered as the maximum horizontal projection.

 Vertical projection peaks.

The number of the local maxima of the vertical projection histogram.

 Horizontal projection peaks.

The number of the local maxima of the horizontal projection histogram.

 Number of edge points.

An edge point is defined as a signature point that has only one 8-neighbor.

 Number of cross points.
A cross point is a signature point that has at least three 8-neighbors.

Figure 3: Examples of corner (C1, C2, C3, C4) and edge (E1,
E2, E3, E4) points.

3.2. Grid Information Features

The skeletonized image is divided into 96 rectangular segments (12 × 8), and for each segment, the area (the sum of foreground pixels) is calculated. The results are normalized so that the lowest value (for the rectangle with the smallest number of black pixels) becomes zero and the highest value (for the rectangle with the highest number of black pixels) becomes one. The resulting 96 values form the grid feature vector. A representation of a signature image and the corresponding grid feature vector is shown in Fig. 4. A black rectangle indicates that the corresponding area of the skeletonized image had the maximum number of black pixels; on the contrary, a white rectangle indicates that it had the smallest number of black pixels.
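A sketch of this computation follows. Note that the ExtractFeature class listed in Section 7 thresholds each cell against the average count instead of applying the min–max normalization described here, so this is an illustrative alternative, not the report's exact code:

// Hypothetical 12x8 grid feature extraction with min-max normalization.
// Assumes thin[] holds 0 (black) or -1 (white), row-major, width w and height h.
static double[] gridFeatures(int[] thin, int w, int h)
{
    int cols = 12, rows = 8;
    int cw = w / cols, ch = h / rows;
    int[] counts = new int[cols * rows];
    for(int r = 0; r < rows; r++)
        for(int c = 0; c < cols; c++)
        {
            int black = 0;
            for(int y = r * ch; y < (r + 1) * ch; y++)
                for(int x = c * cw; x < (c + 1) * cw; x++)
                    if(thin[y * w + x] == 0) black++;
            counts[r * cols + c] = black;
        }
    // normalize so the smallest cell count maps to 0 and the largest to 1
    int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
    for(int v : counts) { if(v < min) min = v; if(v > max) max = v; }
    double[] features = new double[cols * rows];
    for(int i = 0; i < features.length; i++)
        features[i] = (max == min) ? 0.0 : (counts[i] - min) / (double)(max - min);
    return features;
}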

Figure 4: The grid feature vector of a signature.

4. The signature database


For training and testing of the SRVS, many signatures are used.

The signatures have been taken from 25 persons (two signatures from each). For training the system, sets of 50 signatures taken from the master set were used.

The performance of the system has been checked using the collected signatures. In order to make the system robust to intra-personal variations and to extract worst-case classification rates, the signers were asked to use as much variation in their signature sizes and shapes as they would ever use in real circumstances.

5. Classification
Multi-layer perceptron (MLP) neural networks are among the most commonly used classifiers for pattern recognition problems. Despite their advantages, they suffer from some very serious limitations that make their use, for some problems, impossible. The first limitation is the size of the neural network: it is very difficult for very large neural networks to get trained. As the amount of training data increases, this difficulty becomes a serious obstacle for the training process.
The second difficulty is that the geometry and size of the network, the training method used and the training parameters depend substantially on the amount of training data. Also, in order to specify the structure and the size of the neural network, it is necessary to know a priori the number of classes that the neural network will have to deal with. Unfortunately, for a useful SRVS, a priori knowledge about the number of signatures and the number of signature owners is not available. The proposed SRVS confronts these problems by reducing the training computation time and the size of the neural networks used. This is achieved by:
 Reduction of the feature space. The feature set is split into two different groups, i.e., global features and grid features. Due to the different nature and the low correlation of the two feature sets, the combination of the two feature vectors covers the required feature information.

 Reduction of the necessary training samples. This is achieved because each neural network corresponds to only one signature owner. Specifically, during the first stage of classification, multiple but fixed-size neural networks are used (Figs. 1 and 6). In Fig. 1, each one of the neural networks NN1 and NN2 specializes in signatures of only one person. For practical systems, this approach offers another significant advantage: each time we want to add a set of signatures (a new person) to the system's database, we only have to train two new small neural networks (one for each set of features). It is not necessary to retrain a very large neural network, which would of course be a much more difficult task. Due to the use of many neural networks, it is necessary to apply a training algorithm that can train them efficiently, avoiding local minima. Due to its stochastic nature, the training algorithm used presents a remarkable tendency to avoid local minima.


In this work a Back Propagation neural network is used in order to make the final decision. The Back Propagation neural networks used here are feed-forward architectures with a hidden non-linear layer and a linear output layer. The structure of the network used here is shown in Fig. 7. The network has four inputs (fed by the outputs of the first-stage classifiers), a hidden layer with two non-linear neurons and a single linear output neuron.
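Using the BackProp class listed in Section 7, such a decision network could be created and trained as in the sketch below; the example input values and the wiring are assumptions for illustration (the GUI code in Section 7 actually builds its network from the values typed into the NeuralPanel dialog):

// Hypothetical construction of the final-decision network described above:
// 4 inputs (first-stage outputs plus a distance measure), 2 hidden neurons, 1 output.
BackProp decisionNet = new BackProp(4, 2, 1);

// One training step on a single example; a target of 1.0 means "genuine owner".
double[] firstStageOutputs = {0.93, 0.88, 0.91, 0.12};   // assumed example values
double[] target = {1.0};
double error = decisionNet.learnVector(firstStageOutputs, target);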

6. Training Phase

The training of the system includes the following two steps.


We trained the network by randomly choosing signature images from our available database. We passed the grid features into the neural network and, at each iteration, the weights were updated in order to train the network.
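A sketch of this training loop using the BackProp class from Section 7 and the 96-element grid feature vector; the 50 iterations match the default value in the NeuralPanel dialog, while everything else is illustrative:

// Hypothetical training loop for one owner's grid-feature network.
double[] gridFeatures = new double[96];   // in practice filled from ExtractFeature.grid()
double[] target = {1.0};                  // 1.0 = genuine signature of this owner

BackProp gridNet = new BackProp(96, 5, 1);
for(int step = 0; step < 50; step++)
{
    double error = gridNet.learnVector(gridFeatures, target);
    System.out.println("training error at step " + step + ": " + error);
}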

7. Implementation in Java
The following classes have been developed for implementing signature verification, and their code is listed below:

/* signature verification in java using image processing */
** 1. …………………………..Class MProj……… …………………………….
import java.io.*;
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
public class MProj extends Frame implements ActionListener
{
Graphics g1,g2;
NeuralPanel ne;
int w,h,w1,h1,w2,h2,blackpixel1,blackpixel2;
int p[],q[],thinp[],thinq[];
Image img1,img2,img3,img4,img5,img6,img7,img8;

MPanel PanelImg1,PanelImg2;
Button bLoadImage1,bLoadImage2,bPreProcessing1,bPreProcessing2,
bFExtraction1,bFExtraction2,bMatch;
Button neuralNetwork;
int edgepoint1,crosspoint1,blackpixels1;
int verticalhistogram1[],horizontalhistogram1[];
int verticalcentre1,horizontalcentre1;
int vprojectionpeak1,hprojectionpeak1,gridfeatures1[],closedloop1;
int edgepoint2,crosspoint2,blackpixels2;
int verticalhistogram2[],horizontalhistogram2[];
int verticalcentre2,horizontalcentre2;
int vprojectionpeak2,hprojectionpeak2,gridfeatures2[],closedloop2;

double gridvalue1[]=new double[96];


double gridvalue2[]=new double[96];
BackProp bp;

MProj()
{
setSize(800,500);
setLayout(null);
setBackground(Color.gray);
addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e){System.exit(0);}}
);
PanelImg1=new MPanel(0,20);
PanelImg2=new MPanel(450,20);
PanelImg1.setBackground(Color.black);
PanelImg2.setBackground(Color.black);

neuralNetwork=new Button("Neural Network");


bLoadImage1=new Button("Load1");

bLoadImage2=new Button("Load2");
bPreProcessing1=new Button("Preprocessing1");
bPreProcessing2=new Button("Preprocessing2");
bFExtraction1=new Button("Feature Extraction1");
bFExtraction2=new Button("Feature Extraction2");
bMatch=new Button("Match");

add(PanelImg1);
add(PanelImg2);

add(neuralNetwork);
add(bLoadImage1);
add(bLoadImage2);
add(bPreProcessing1);
add(bPreProcessing2);
add(bFExtraction1);
add(bFExtraction2);
add(bMatch);

bLoadImage1.setBounds(100,400,50,30);
bLoadImage2.setBounds(570,400,50,30);
bPreProcessing1.setBounds(350,100,100,30);
bPreProcessing2.setBounds(350,150,100,30);
bFExtraction1.setBounds(350,200,110,30);
bFExtraction2.setBounds(350,250,110,30);
neuralNetwork.setBounds(350,320,100,30);
bMatch.setBounds(380,410,50,30);

bLoadImage1.addActionListener(this);
bLoadImage2.addActionListener(this);
bFExtraction1.addActionListener(this);
bPreProcessing1.addActionListener(this);

bFExtraction2.addActionListener(this);
bPreProcessing2.addActionListener(this);
bMatch.addActionListener(this);
neuralNetwork.addActionListener(this);

bFExtraction1.setEnabled(false);
bPreProcessing1.setEnabled(false);
bFExtraction2.setEnabled(false);
bPreProcessing2.setEnabled(false);
bMatch.setEnabled(false);
neuralNetwork.setEnabled(false);

setVisible(true);
}
public void actionPerformed(ActionEvent e)
{
String dname,fname;
String cmd=e.getActionCommand();

if(cmd.equals("Load1"))
{
try{
g1=PanelImg1.getGraphics();
FileDialog fd= new FileDialog(this,"open",FileDialog.LOAD);
fd.setVisible(true);
dname=fd.getDirectory();
fname=fd.getFile();
System.out.println(dname+fname);

img1=Toolkit.getDefaultToolkit().getImage(dname+fname);
MediaTracker mt = new MediaTracker(PanelImg1);
mt.addImage(img1,0);

try{
mt.waitForID(0);
} catch(Exception e1){}
w=img1.getWidth(null);
h=img1.getHeight(null);
if(w<=0||h<=0)
{
new DD(this,true,"NO FILE SELECTED");
}
w1=(3*w)/4;
h1=(3*h)/4;
p=new int[w1*h1];

img2=img1.getScaledInstance(w1,h1,Image.SCALE_SMOOTH);

PixelGrabber pg=new PixelGrabber(img2,0,0,w1,h1,p,0,w1);


try{
pg.grabPixels();
}catch(Exception e3){}
img3 =createImage(new MemoryImageSource(w1,h1,p,0,w1));
g1.drawImage(img3,20,40,w1,h1,PanelImg1);
PanelImg1.setImage(img3,w1,h1,g1);
bPreProcessing1.setEnabled(true);
}catch(Exception e1){ new DD(this,true,"NO FILE SELECTED");}
}
else if(cmd.equals("Load2"))
{
try{
g2=PanelImg2.getGraphics();
FileDialog fd= new FileDialog(this,"open",FileDialog.LOAD);
fd.setVisible(true);
dname=fd.getDirectory();
fname=fd.getFile();
System.out.println(dname+fname);
img4=Toolkit.getDefaultToolkit().getImage(dname+fname);
MediaTracker mt = new MediaTracker(PanelImg2);
mt.addImage(img4,0);
try{
mt.waitForID(0);
} catch(Exception e1){}
w=img4.getWidth(null);
h=img4.getHeight(null);
if(w<=0||h<=0)
{
new DD(this,true,"NO FILE SELECTED");
}
w2=(3*w)/4;
h2=(3*h)/4;
q=new int[w2*h2];
img5=img4.getScaledInstance(w2,h2, Image.SCALE_SMOOTH); // scale the second image with its own dimensions (w2,h2), not (w1,h1)
PixelGrabber pg=new PixelGrabber(img5,0,0,w2,h2,q,0,w2);
try{
pg.grabPixels();
}catch(Exception e4){}
img6 =createImage(new MemoryImageSource(w2,h2,q,0,w2));
g2.drawImage(img6,20,40,w2,h2,PanelImg2);
PanelImg2.setImage(img6,w2,h2,g2);
bPreProcessing2.setEnabled(true);
} catch(Exception e1){ new DD(this,true,"NO FILE SELECTED");}

}
else if(cmd.equals("Preprocessing1"))
{
Preprocessing pp=new Preprocessing();
System.out.println("we r in preprocessing block1");
thinp=pp.imagePreprocessing(p,w1,h1);
img7 =createImage(new
MemoryImageSource(w1,h1,thinp,0,w1));
g1.drawImage(img7,20,40,w1,h1,PanelImg1);
PanelImg1.setImage(img7,w1,h1,g1);

bFExtraction1.setEnabled(true);
bPreProcessing1.setEnabled(false) ;

}
else if(cmd.equals("Preprocessing2"))
{
Preprocessing pp1=new Preprocessing();
System.out.println("we r in preprocessing block2");
thinq=pp1.imagePreprocessing(q,w2,h2);
img8 =createImage(new
MemoryImageSource(w2,h2,thinq,0,w2));
g2.drawImage(img8,20,40,w2,h2,PanelImg2);
PanelImg2.setImage(img8,w2,h2,g2);

bFExtraction2.setEnabled(true);
bPreProcessing2.setEnabled(false) ;
}
else if(cmd.equals("Feature Extraction1"))
{
ExtractFeature fe1=new ExtractFeature(thinp,w1,h1);
verticalhistogram1=new int[w1];
horizontalhistogram1=new int[h1];
gridfeatures1=new int[90];

edgepoint1=fe1.edgePoint();
crosspoint1=fe1.crossPoint();
blackpixels1=fe1.blackPixels();

verticalhistogram1=fe1.verticalHistogram();
horizontalhistogram1=fe1.horizontalHistogram();
verticalcentre1=fe1.verticalCentre();
horizontalcentre1=fe1.horizontalCentre();
vprojectionpeak1=fe1.vprojectionPeaks();
hprojectionpeak1=fe1.hprojectionPeak();
closedloop1=fe1.closedLoop();
int tmp[];
tmp=fe1.grid();
for(int i=0;i<96;i++)
gridvalue1[i]=tmp[i];
neuralNetwork.setEnabled(true);
}
else if(cmd.equals("Feature Extraction2"))
{
ExtractFeature fe2=new ExtractFeature(thinq,w2,h2);
verticalhistogram2=new int[w1];
horizontalhistogram2=new int[h1];
gridfeatures2=new int[90];

edgepoint2=fe2.edgePoint();
crosspoint2=fe2.crossPoint();
blackpixels2=fe2.blackPixels();
verticalhistogram2=fe2.verticalHistogram();
horizontalhistogram2=fe2.horizontalHistogram();
verticalcentre2=fe2.verticalCentre();
horizontalcentre2=fe2.horizontalCentre();
vprojectionpeak2=fe2.vprojectionPeaks();
hprojectionpeak2=fe2.hprojectionPeak();
closedloop2=fe2.closedLoop();
//gridvalue2=fe2.grid();

}
else if(cmd.equals("Neural Network"))
{
ne=new NeuralPanel(gridvalue1,this);
if( bFExtraction2.isEnabled()==true)
bMatch.setEnabled(true);
}
else if(cmd.equals("Match"))
{
bp=ne.retBackProp();
double resultID[] = bp.propagate(gridvalue1);
double gridResult=(1-resultID[0])*100;
System.out.println("result"+resultID[0]);
double edgepointdiff,crosspointdiff,blackpixeldiff,
verticalcentrediff,horizontalcentrediff,
vprojectionpeakdiff,hprojectionpeakdiff;
edgepointdiff=((Math.abs(edgepoint1-edgepoint2)/edgepoint1)*100);
crosspointdiff=((Math.abs(crosspoint1-crosspoint2)/crosspoint1)*100);
blackpixeldiff=((Math.abs(blackpixels1-blackpixels2)/blackpixels1)*100);
verticalcentrediff=((Math.abs(verticalcentre1-verticalcentre2)/verticalcentre1)*100);
horizontalcentrediff=((Math.abs(horizontalcentre1-
horizontalcentre2)/horizontalcentre1)*100);
vprojectionpeakdiff=((Math.abs(vprojectionpeak1-
vprojectionpeak2)/vprojectionpeak1)*100);
hprojectionpeakdiff=((Math.abs(hprojectionpeak1-
hprojectionpeak2)/hprojectionpeak1)*100);
System.out.println("edgepoint difference"+edgepointdiff);
if(edgepointdiff<=15 && crosspointdiff<=15
&& vprojectionpeakdiff<=15
&& hprojectionpeakdiff<=15
&& verticalcentrediff<=15
&& horizontalcentrediff<=15

&& gridResult<=15)
{
new DD(this,true,"SIGNATURE MATCHED");
}
}
}//end of Action Listener
public static void main(String ar[])
{
MProj mp = new MProj();
}
}

** 2. …………………………..Class BackProp ………… …………………………….


import java.util.Random;

class BackProp
{
private double inputA[];
private double hiddenA[];
private double hiddenN[];
private double hiddenD[];
private double hiddenW[][];
private double outputA[];
private double outputN[];
private double outputD[];
private double oldD[];
private double outputW[][];
private double biasH[];
private double biasO[];
private int numInput;
private int numHidden;
private int numOutput;

private int epoch;
private double momentum;
private double alpha;
private double absError;
private Random rand;

public BackProp(int input, int hidden, int output)


{
absError = 0.0D;
numInput = input;
numHidden = hidden;
numOutput = output;
inputA = new double[numInput];
hiddenW = new double[numHidden][numInput];
hiddenA = new double[numHidden];
hiddenN = new double[numHidden];
hiddenD = new double[numHidden];
biasH = new double[numHidden];
outputW = new double[numOutput][numHidden];
outputA = new double[numOutput];
outputN = new double[numOutput];
outputD = new double[numOutput];
oldD = new double[numOutput];
biasO = new double[numOutput];
alpha = 0.29999999999999999D;
momentum = 1.0D;
rand = new Random();
init();
}

public BackProp(int input, int hidden, int output, double alpha, double mom)
{

this(input, hidden, output);
this.alpha = alpha;
momentum = mom;
}
private void init()
{
epoch = 0;
for(int i = 0; i < numInput; i++)
{
inputA[i] = frandom(-1D, 1.0D);
}
for(int i = 0; i < numHidden; i++)
{
hiddenA[i] = frandom(-1D, 1.0D);
biasH[i] = frandom(-1D, 1.0D);
for(int m = 0; m < numInput; m++)
{
hiddenW[i][m] = frandom(-1D, 1.0D);
}
}
for(int i = 0; i < numOutput; i++)
{
biasO[i] = frandom(-1D, 1.0D);
for(int m = 0; m < numHidden; m++)
{
outputW[i][m] = frandom(-1D, 1.0D);
}

}
} // end of init(): this closing brace was missing in the original listing
private double sigmoid(double x)

{
return 1.0D / (1.0D + Math.exp(-x));
}
private double sigmoidDeriv(double x)
{
return sigmoid(x) * ((double)1 - sigmoid(x));
}
private void feedForward()
{
double sum2 = 0.0D;
for(int i = 0; i < numHidden; i++)
{
sum2 = biasH[i];
for(int j = 0; j < numInput; j++)
{
sum2 += hiddenW[i][j] * inputA[j];
}

hiddenN[i] = sum2;
hiddenA[i] = sigmoid(sum2);
}
for(int i = 0; i < numOutput; i++)
{
sum2 = biasO[i];
for(int j = 0; j < numHidden; j++)
{
sum2 += outputW[i][j] * hiddenA[j];
}
outputN[i] = sum2;
}
}
private void computeDelta(int m)

{
outputD[m] = (outputA[m] - sigmoid(outputN[m])) * sigmoidDeriv(outputN[m]);
for(int i = 0; i < numHidden; i++)
{
outputW[m][i] += outputD[m] * hiddenA[i] * alpha;
}
for(int i = 0; i < numOutput; i++)
{
biasO[i] += outputD[m] * alpha;
}
}

private void updateWeights()


{
for(int j = 0; j < numHidden; j++)
{
double sum2 = 0.0D;
for(int i = 0; i < numOutput; i++)
{
sum2 += outputD[i] * outputW[i][j];
}
sum2 *= sigmoidDeriv(hiddenN[j]);
biasH[j] += sum2 * alpha;
for(int i = 0; i < numInput; i++)
{
hiddenW[j][i] += alpha * sum2 * inputA[i];
}
}
}
private double frandom(double min, double max)
{
return rand.nextDouble() * (max - min) + min;

}

public double[] propagate(double vector[])


{
for(int i = 0; i < numInput; i++)
{
inputA[i] = vector[i];
}

for(int i = 0; i < numHidden; i++)


{
double sum2 = biasH[i];
for(int j = 0; j < numInput; j++)
{
sum2 += hiddenW[i][j] * inputA[j];
hiddenA[i] = sigmoid(sum2);
}
}

for(int i = 0; i < numOutput; i++)


{
double sum2 = biasO[i];
for(int j = 0; j < numHidden; j++)
{
sum2 += outputW[i][j] * hiddenA[j];
}
outputN[i] = sum2;
outputA[i] = sigmoid(sum2);
}
return outputA;
}

public double learnVector(double in[], double out[])
{
for(int i =0;i<numInput;i++)
{
inputA[i] = in[i];
}
for(int i = 0; i < numOutput; i++)
{
outputA[i] = out[i];
}
feedForward();
absError = 0.0D;
for(int j = 0; j < numOutput; j++)
{
computeDelta(j);
absError += Math.pow(outputA[j] - sigmoid(outputN[j]), 2D);
}

updateWeights();
alpha *= momentum;
return absError;
}

public int numInput()


{
return numInput;
}

public int numHidden()


{
return numHidden;
}

public int numOutput()
{
return numOutput;
}

public double[] getBiasH()


{
return biasH;
}

public double[] getBiasO()


{
return biasO;
}

public double[][] getHiddenW()


{
return hiddenW;
}

public double[][] getOutputW()


{
return outputW;
}

public double getAlpha()


{
return alpha;
}

public double getMomentum()

{
return momentum;
}

public int getEpoch()


{
return epoch;
}
public void setAlpha(double a)
{
alpha = a;
}
public void setMomentum(double mom)
{
momentum = mom;
}
public void setHiddenW(double weight, int hidden, int input)
{
hiddenW[hidden][input] = weight;
}

public void setBiasH(double bias, int hidden)


{
biasH[hidden] = bias;
}

public void setOutputW(double weight, int output, int hidden)


{
outputW[output][hidden] = weight;
}

public void setBiasO(double bias, int output)

{
biasO[output] = bias;
}
}
** 3. ……………………..Class DD For Dialog Box………………………………….
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

class DD extends Dialog implements ActionListener


{
DD(Frame frame, boolean flag, String s)
{
super(frame, flag);
setLayout(null);
setBounds(250,250,300, 200);
Label label =new Label(s);
add(label);
label.setBounds(65,50,200,50);
Button button = new Button("OK");
button.setBounds(120,100,30,30);
add(button);
button.addActionListener(this);
setVisible(true);
}
public void actionPerformed(ActionEvent actionevent)
{
dispose();
}
}
** 4. …………………………..Class ExtractFeature ………………………………….
class ExtractFeature

{
int tempindex,edgeblack,countblack=0;
int vh[],hh[],count,vpeak,hpeak;
int temp[]=new int[9],thin[];
int edge=0,crosspoint=0;
int CH,CY;
int el[]=new int[700],edeparture,closedloop;
int sum1=0;
int sum2=0;
int gridval[]=new int [96];
public ExtractFeature(int thin[],int w1,int h1)
{
this.thin=thin;
for(int y=0;y<(h1-2);y++)
{
for(int x=0;x<(w1-2);x++)
{
tempindex=0;
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
temp[tempindex]=thin[(y+i)*w1+x+j];
tempindex++;
}
}
edgeblack=0;
if(temp[4]==0)
{
for(int a=0;a<9;a++)
{
if(a==4)

{

}
else
{
if(temp[a]==0)
edgeblack++;
}
}
}
if(edgeblack==1)
{
edge ++;
}
if(edgeblack>=3)
{
crosspoint++;
el[crosspoint-1]=edgeblack;
}
}
}
//................extra departure..........;........

for(int i=0;i<crosspoint;i++)
edeparture+=(el[i]-2);

closedloop=1+(edeparture-edge)/2;

System.out.println("no. of edgepoint"+edge+"\nNo. of crosspoint ="+crosspoint);

//............COUNTING OF BLACK PIXELS................

for(int i=0;i<thin.length;i++)
{
if(thin[i]==0)
{
countblack++;
}
}//COUNTING END
//System.out.println("total pixels are"+i+"\nNo. of black pixels are"+countblack);
//............FOR VERTICAL PROJECTION HISTOGRAM'......

vh=new int[w1];
hh=new int[h1];

for(int x=0;x<w1;x++)
{
count=0;
for(int y=0;y<h1;y++)
{
if(thin[w1*y+x]==0)
count++;
}
vh[x]=count;
}
//.........FOR HORIZONTAL HISTOGRAM.........
for(int y=0;y<h1;y++)
{
count=0;
for(int x=0;x<w1;x++)
{
if(thin[w1*y+x]==0)
count++;
}

hh[y]=count;
}
//...............FOR VERTICAL PROJECTION PEAKS..........

vpeak=0;
for(int x=0;x<w1-1;x++)
{
if(x==0 && vh[x]>vh[x+1])
vpeak++;
if(x>0 && vh[x]>vh[x+1] && vh[x]>vh[x-1]) // guard x>0 so that vh[-1] is never read
vpeak++;
if(x==(w1-1) && vh[x]>vh[x-1])
vpeak++;
}
//.................FOR HORIZONTAL PROJECTION PEAKS..........

hpeak=0;
for(int x=0;x<h1-1;x++)
{
if(x==0 && hh[x]>hh[x+1])
hpeak++;

if(x>0 && hh[x]>hh[x+1] && hh[x]>hh[x-1]) // guard x>0 so that hh[-1] is never read
hpeak++;
if(x==(h1-1) && hh[x]>hh[x-1])
hpeak++;
}

//.....................Vertical Centre..................
for(int y=0;y<h1;y++)
{
for(int x=0;x<w1;x++)

{
if(thin[w1*y+x]==0)
sum1+=w1*y+x;
}
int y1=y+1;
sum1=sum1*y1;
}
for(int x=0;x<w1;x++)
{
for(int y=0;y<h1;y++)
{
if(thin[w1*y+x]==0)
sum2+=w1*y+x;
}
}
CY=sum1/sum2;
//.............for horizontal centre of signature.......

sum1=0;
sum2=0;
for(int x=0;x<w1;x++)
{
for(int y=0;y<h1;y++)
{
if(thin[w1*y+x]==0)
sum1+=w1*y+x;
}
sum1=sum1*x;
}
for(int x=0;x<w1;x++)
{
for(int y=0;y<h1;y++)

{
if(thin[w1*y+x]==0)
sum2+=w1*y+x;
}
}
CH=sum1/sum2;
//..............................grid information Feature.................
int gwidth,gheight,indextemp=0,gridblack,blackavg;
int gblack[]=new int[96];
gwidth=w1/12;gheight=h1/8;

for(int y=0;y<h1-gheight;y=y+gheight)
{
for(int x=0;x<w1;x=x+gwidth,indextemp++)
{
gridblack=0;
for(int i=0;i<gheight;i++)
{
for(int j=0;j<gwidth;j++)
{
try{
if(thin[w1*(y+i)+x+j]==0)
gridblack++;
}catch(Exception e){System.out.println("y"+y+"i"+i+"j"+j);}
}
}
gblack[indextemp]=gridblack;

}
}
blackavg=countblack/96;indextemp=0;
System.out.println("grid width="+gwidth+"gridheight="+gheight);

for(int y=0;y<h1-gheight;y=y+gheight)
{
for(int x=0;x<w1;x=x+gwidth,indextemp++)
{
System.out.println("No.of black pixel in grid no."+indextemp+"is"
+gblack[indextemp]);
if(gblack[indextemp]>blackavg)
{
for(int i=0;i<gheight;i++)
for(int j=0;j<gwidth;j++)
thin[w1*(y+i)+x+j]=0;
gridval[indextemp]=1;
}
if(gblack[indextemp]<=blackavg)
{
for(int i=0;i<gheight;i++)
for(int j=0;j<gwidth;j++)
thin[w1*(y+i)+x+j]=-1;
gridval[indextemp]=-1;
}
}
}
}
public int edgePoint( )
{
return edge;
}
public int crossPoint( )
{
return crosspoint;
}
public int closedLoop()

{
System.out.println("no of closed loop"+closedloop);
return closedloop;
}
public int blackPixels()
{
System.out.println("black pixels"+countblack);
return countblack;
}
public int[] verticalHistogram()
{
// System.out.println("black pixels"+vh.lengt);
return vh;
}
public int[] horizontalHistogram()
{
return hh;
}
public int verticalCentre()
{
System.out.println("vertical centre"+CY);
return CY;
}
public int horizontalCentre()
{
System.out.println("horizontal centre"+CH);
return CH;
}
public int vprojectionPeaks()
{
System.out.println("vertical peak"+vpeak);
return vpeak;

}
public int hprojectionPeak()
{
System.out.println("horizontal peak"+hpeak);
return hpeak;
}
public int[] grid()
{
for(int i=0;i<96;i++)
System.out.println("gridvalue="+gridval[i]);
return gridval;
}
}
** 5. …………………………..Class MPanel ………………………………….

import java.awt.*;
import java.awt.image.*;
class MPanel extends Panel
{
Image img;
int w,h;
boolean error=false;
//Graphics g;
MPanel(int x,int y)
{
setBounds(x,y,340,340);
}
public void setImage(Image img1,int w1,int h1,Graphics g1)
{
img=img1;
w=w1;
h=h1;

// g=g1;
}
public void paint(Graphics g)
{
if(error){g.drawString("Error ",50,50);}
if(img!=null)
g.drawImage(img,10,40,w,h,this);
}
public void update(Graphics g)
{
paint(g);
}
public boolean imageUpdate(Image img,int flags,int x,int y,int w,int h)
{
if((flags& SOMEBITS)!=0)
{
repaint(x,y,w,h);
}
else if((flags&(ABORT))!=0)
{
error=true;
repaint();
}
return (flags &(ALLBITS |ABORT))==0;
}
}
** 6. …………………………..Class NeuralPanel ………………………………….

import java.awt.*;
import java.awt.event.*;

public class NeuralPanel extends Dialog implements ActionListener


{
TextField ipText,opText,hnText,learnText,momentumText,stepsText;
Label ipLabel,opLabel, hnLabel,learnLabel,momentumLabel,stepsLabel;
Button trainNetwork,close;
BackProp nNetwork2;
double ivoriginal[],ov[]={1};
MProj mproj1;

NeuralPanel(double ivorig[],MProj mproj)


{
super(mproj,true);
mproj1=mproj;
ivoriginal=ivorig;
setLayout(null);
setSize(500,400);
trainNetwork=new Button("Train");
close=new Button("Close");
ipText=new TextField("96",10);
opText=new TextField("1",10);
hnText=new TextField("5",10);
learnText=new TextField(".3",10);
momentumText=new TextField(".4",10);
stepsText=new TextField("50",10);
ipLabel=new Label("Input");
opLabel=new Label("Output");
hnLabel=new Label("Hidden");
learnLabel=new Label("Learn Rate");
momentumLabel=new Label("Momentum");
stepsLabel=new Label("Steps");

trainNetwork.addActionListener(this);
close.addActionListener(this);

add(trainNetwork);
add(close);
add(ipText);
add(opText);
add(hnText);
add(learnText);
add(momentumText);
add(stepsText);

add(ipLabel);
add(opLabel);
add(hnLabel);
add(learnLabel);
add(momentumLabel);
add(stepsLabel);

ipLabel.setBounds(10,30,100,30);
ipText.setBounds(80,30,100,30);
opLabel.setBounds(10,70,100,30);
opText.setBounds(80,70,100,30);
hnLabel.setBounds(10,110,100,30);

hnText.setBounds(80,110,100,30);
learnLabel.setBounds(10,150,100,30);
learnText.setBounds(80,150,100,30);
momentumLabel.setBounds(10,190,100,30);
momentumText.setBounds(80,190,100,30);
stepsLabel.setBounds(10,220,100,30);
stepsText.setBounds(80,220,100,30);

trainNetwork.setBounds(100,270,50,30);
close.setBounds(100,320,100,30);

setVisible(true);
}
public void actionPerformed(ActionEvent ae)
{
String st=ae.getActionCommand();
if(st.equals("Train"))
{
try{
int inputs = Integer.parseInt(ipText.getText());
int hiddens = Integer.parseInt(hnText.getText());
int outputs = Integer.parseInt(opText.getText());
double learnRate = Double.parseDouble(learnText.getText());
double momentum =
Double.parseDouble(momentumText.getText());
int steps = Integer.parseInt(stepsText.getText());

BackProp nNetwork = new BackProp(inputs, hiddens, outputs);


nNetwork2=nNetwork;

for(int i = 0; i < steps; i++)


{

double sumError = 0.0D;
sumError += nNetwork.learnVector(ivoriginal,ov );
System.out.println("out put from Trainning"+sumError);
}
}catch(NumberFormatException e){new DD(mproj1,true,"Invalid Input");}
}
else if(st.equals("Close"))
{
dispose();
}
}
public BackProp retBackProp()
{
return nNetwork2;
}
}
** 7. …………………………..Class Preprocessing………………………………….
import java.awt.*;
public class Preprocessing
{
int p[],q[],thin[];
int rgb,rgb1[],countblack=0;
int tr,tg,tb;
public int[] imagePreprocessing(int pixel[],int w1,int h1)
{
p=new int[w1*h1];
q=new int[w1*h1];
p=pixel;
for(int i=0;i<p.length;i++)
{
rgb=p[i];
tr=(rgb>>16)& 0xff;

tg=(rgb>>8)& 0xff;
tb=rgb & 0xff;
rgb=(int) (tr+tg+tb)/3;
q[i]=rgb; //gray image array
}
// .................... Binary Conversion.................

int white=0,black=0;
//loop for black and white assignment start
int sum,h2=h1-(h1%4);
int w2=w1-(w1%4);
for(int y=0;y<h2;y=y+4)
{
for(int x=0;x<w2;x=x+2)
{
sum=0;
for(int i=0;i<4;i++)
for(int j=0;j<4;j++)
sum+=q[x+w1*(i+y)+j]&(0x000000ff);
int avg=sum/16;
//Black and White assignment loop start
for(int a=0;a<4;a++)
for(int b=0;b<2;b++)
{
if(( q[x+w1*(a+y)+b]&0x000000ff)<avg)
q[x+w1*(a+y)+b]=0;
else
q[x+w1*(a+y)+b]=-1;
}
//Black and White assignment loop closed
}
}

//...................Noise Reduction of Binary Image..................

int black1,white1;
for(int y=0;y<h1-2;y++)
{
for(int x=0;x<w1-2;x++)
{
black1=0;white1=0;
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
if(i==1 && j==1){}
else
{
if(q[w1*(y+i)+x+j]==0)
black1++;
else
white1++;
}
}
}
if(q[w1*y+x]==0)
{
if(black1<2)
q[w1*y+x]=-1;
}
else
{
if(white1<2)
q[w1*y+x]=0;
}

}
}

//....................Thinning of Image...........................

System.out.println("it is in preprocessing class");


thin=new int [w1*h1];
Thin1 imgp=new Thin1();
thin =imgp.makeThin(q,w1,h1);
return thin;
}
/*public static void main(String ar[])
{
Preprocessing pr=new Preprocessing();
}*/
}

** 8. …………………………..Class Thin1………………………………….
import java.awt.*;
import java.awt.image.*;
import java.net.*;
import java.util.*;
import java.io.*;
import java.lang.Math;

public class Thin1 //implements ImgToolInterface


{
private static final int FRONT=0,BKGRND=-1;
private static final int COLOR_BACKGROUND = -1;
// Background color (white)
private static final int COLOR_FOREGROUND = 0;
// Foreground color (black)

private static final int COLOR_REMOVABLE = 2;//0xff0000ff;
// Color of the removable pixels
public static int[] makeThin(int[] src_1d,int ww,int hh)
{
boolean flag1;
do
{
flag1 = false;
for(int y = 0; y < hh; y++)
{
for(int x = 0; x < ww; x++)
if(getPixel(src_1d,ww,hh,x, y) == FRONT)
{
int i = nbors(src_1d,ww,hh,x, y);
/**
* originally (i>2 && i<7 ..), modified here to avoid cutting too
* much for 'y' or 'k'
* however, a spur is produced by this modification
* New modification:
* Add matchPatterns() to stop removing p if:
* 000
* 0p1
* 011
*/
if(i >2 && i < 7 && cindex(src_1d,ww,hh,x, y) == 1){
if(!matchPatterns(src_1d,ww,hh,x,y))
src_1d[x+ww*y] = COLOR_REMOVABLE;
}
}
}

for(int y = 0; y < hh; y++)

{
for(int x = 0; x < ww; x++)
if(getPixel(src_1d,ww,hh,x, y) == COLOR_REMOVABLE)
{
src_1d[x+ww*y]=BKGRND;
flag1 = true;
}
}
}while(flag1);
return src_1d;
}//end of makeThin()

/**
* get number of neighbors which is not bkgrnd
*/
static int nbors(int[] src_1d,int ww,int hh,int i, int j)
{
int k = 0;
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1};
int ai1[] = {0, -1, -1, -1, 0, 1, 1, 1};
for(int l = 0; l< 8; l++)
if(getPixel(src_1d,ww,hh,i + ai[l], j + ai1[l]) !=BKGRND)
k++;
return k;
}//end of nbors()
/*
* get the number of pair of neighbor that is different
*/
static int cindex(int[] src_1d,int ww,int hh,int i, int j)
{
int k = 0;
int i1 = 0;

boolean flag1 = true;
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1};
int ai1[] = {0, -1, -1, -1, 0, 1, 1, 1};
for(int k1 = 0; k1<8; k1++)
{
if(flag1)
i1 = getPixel(src_1d,ww,hh,i + ai[k1], j + ai1[k1]);
int l = k1 + 1;
if(l == 8)
l = 0;
int j1 = getPixel(src_1d,ww,hh,i + ai[l], j + ai1[l]);
flag1 = true;
if(j1 != FRONT && l % 2 == 1)
flag1 = false;
else if(i1 == FRONT && j1 != FRONT || i1 != FRONT && j1 == FRONT)
k++;
}
return k / 2;
}//end of cindex()
static int getPixel(int[] src_1d,int ww,int hh,int i, int j)
{
if(i < 0 || i > ww - 1 || j< 0 || j > hh - 1)
return 0;
else
return src_1d[i+ww*j];
}
public static BufferedImage myThin(BufferedImage src)
{
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1 };
int ai1[] = { 0, -1, -1, -1, 0, 1, 1, 1 };
int ww=src.getWidth();
int hh=src.getHeight();

int[] src_1d=new int[ww*hh];
src.getData().getSamples(0,0,ww,hh,0,src_1d);
boolean changed;

Vector centerList=new Vector();


Vector edgeList=new Vector();
do{
changed=false;
Point p;
int value;
/**
* find edges, edge image in dest_1d
*/
for(int y=0; y<hh; y++)
for (int x=0; x<ww; x++)
{
if(src_1d[y*ww+x]!=BKGRND)
{
p = new Point(x,y);
if (nbors(src_1d, ww, hh, x, y) == 8)
{
centerList.addElement(p);
}
else
edgeList.addElement(p);
}
} //for

//now remove those points in edgeList who has any neighbor in centerList
for(int i=0;i<edgeList.size();i++)
{
boolean remove=false;

Point pt=(Point)edgeList.elementAt(i);
for(int j = 0; j< 8; j++)
{
Point nbor=new Point(pt.x + ai[j],pt.y + ai1[j]);
if (centerList.contains(nbor))
{
remove=true;
break;
}
}//endfor
if(remove)
{
isSafe(src_1d,ww,hh,pt);
src_1d[ww * pt.y + pt.x] = BKGRND;
changed=true;
}
}//endfor
centerList.clear();
edgeList.clear();
}while(changed);

BufferedImage dest=new BufferedImage(ww,hh,BufferedImage.TYPE_BYTE_GRAY);


dest.getRaster().setSamples(0,0,ww,hh,0,src_1d);
return dest;
} //end of myThin
/**
* check if the image is still connected after the point removed
* @param src_1d the raw data
* @param ww the width of image
* @param hh the height of image
* @param pt the test point
* @return true if safe otherwise false

*/
private static boolean isSafe(int[] src_1d,int ww,int hh,Point pt)
{
int[][] m_Matrix22=new int[3][3];
k_ThinningSearchNeighbors(pt.x,pt.y,src_1d,ww,hh,m_Matrix22);
boolean safe=true;

if(m_Matrix22[0][0]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}

if(m_Matrix22[0][1]==FRONT)
{
if(m_Matrix22[0][0]!=FRONT && m_Matrix22[0][2]!=FRONT
&& m_Matrix22[1][0]!=FRONT && m_Matrix22[1][2]!=FRONT)
safe=false;
}
if(m_Matrix22[0][2]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[1][2]!=FRONT)
safe=false;
}
if(m_Matrix22[1][2]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[0][2]!=FRONT
&& m_Matrix22[2][2]!=FRONT
&& m_Matrix22[2][1]!=FRONT)
safe=false;
}
if(m_Matrix22[2][2]==FRONT)

{
if(m_Matrix22[1][2]!=FRONT && m_Matrix22[2][1]!=FRONT)
safe=false;
}

if(m_Matrix22[2][1]==FRONT)
{
if(m_Matrix22[1][2]!=FRONT && m_Matrix22[2][2]!=FRONT
&& m_Matrix22[2][0]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}
if(m_Matrix22[2][0]==FRONT)
{
if(m_Matrix22[2][1]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}
if(m_Matrix22[1][0]==FRONT)
{
if(m_Matrix22[0][0]!=FRONT && m_Matrix22[0][1]!=FRONT
&& m_Matrix22[2][0]!=FRONT && m_Matrix22[2][1]!=FRONT)
safe=false;
}

return safe;
}
/**
* matchPatterns()
* special method that checks if a specified pattern
* matches the actual position x,y of the array.
* this method is mainly used by the thinning() algorithm
*/
private static boolean matchPatterns(int[] src_1d,int ww,int hh,int x, int y)

{
if (x >ww - 1)
x = ww - 1;
if (x < 0)
x = 0;
if (y>= hh - 1)
y = hh - 1;
if (y < 0)
y = 0;
if(neighbour(src_1d, ww, hh, x, y, 0) == COLOR_FOREGROUND &&
neighbour(src_1d, ww, hh, x, y, 1) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 2) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 3) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 4) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 5) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 6) == COLOR_FOREGROUND &&
neighbour(src_1d, ww, hh, x, y, 7) == COLOR_FOREGROUND)
{
return true;
}
else
{
return false;
}
} //match pattern closed
private static int neighbour(int[] src_1d,int ww,int hh,int x, int y, int j)
{
switch (j)
{
case 0:
x++;
break;

case 1:
x++;
y--;
break;
case 2:
y--;
break;
case 3:
x--;
y--;
break;
case 4:
x--;
break;
case 5:
x--;
y++;
break;
case 6:
y++;
break;
case 7:
x++;
y++;
break;
}
if (x >= ww - 1)
return -1;
// x = ww - 1;
if (x < 0)
return BKGRND;
// x = 0;

if (y >= hh - 1)
return -1;
// y = hh - 1;
if (y< 0)
return -1;
// y = 0;

return src_1d[y * ww + x];


}

static boolean k_ThinningSearchNeighbors(int x, int y,


int[] src_1d, int ww,int hh, int m_Matrix22[][])
/* As (a) in Gonzales and Woods, between 2 and 6 black neighbors */
{
int BlackNeighbor=0;
if ((m_Matrix22[0][0]=getPixel(src_1d,ww,hh,x-1,y-1)) == FRONT)
{
++BlackNeighbor;
}
if ((m_Matrix22[1][0]=getPixel(src_1d,ww,hh,x ,y-1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][0]=getPixel(src_1d,ww,hh,x+1,y-1)) == FRONT)
{
++BlackNeighbor;
}
if ((m_Matrix22[0][1]=getPixel(src_1d,ww,hh,x-1,y)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][1]=getPixel(src_1d,ww,hh,x+1,y)) == FRONT)

{
++BlackNeighbor;
}
if((m_Matrix22[0][2]=getPixel(src_1d,ww,hh,x-1,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[1][2]=getPixel(src_1d,ww,hh,x ,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][2]=getPixel(src_1d,ww,hh,x+1,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((BlackNeighbor>=2) && (BlackNeighbor<=6))
return true;
else
return false;
}
}//end of Thin1
……………………………………………………………………………………………

8. Testing phase and results


According to the above analysis, when the system is asked to decide whether an unknown signature image belongs to a particular person in the database, the following steps are followed.
 The unknown signature image passes through the pre-processing and feature
extraction stages.
 The three sets of features are applied to the inputs of the three specialized perceptron neural networks. The networks are run forward so that we get outputs for all of them.
 The Euclidean distance between the 160 features (all three sets) of the unknown signature image and the features of each signature in the TRS1 set that belongs to the candidate person is calculated. The average Euclidean distance is then extracted.
For example, the Euclidean norm (DN) between the feature vector of the unknown image XT and the feature vector of the Nth signature in the database, XN, is given by the following equation:
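The equation itself is missing from the report; the standard Euclidean norm over the 160-element feature vectors (an assumption here) would be:

D_N = \lVert X_T - X_N \rVert = \sqrt{\sum_{i=1}^{160} \left( X_{T,i} - X_{N,i} \right)^2}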

The outputs of the three neural networks and the average Euclidean distance are taken as the inputs of the second-stage classifier (the RBF neural network). The RBF neural network is then run forward. If the output is positive, the given signature belongs to the candidate person; if not, it does not belong to the candidate person.

For the performance testing of the system, the remaining 500 signatures in the master set are used (unknown both to the first- and to the second-stage classifiers). This set is called TS, and the system is tested under two different scenarios: the verification scenario and the recognition scenario.
8.1. The Verification Scenario

For each signature in TS, we queried the system 115 times, one time for each owner. The TS contained 500 signature images, which made 115 × 500 = 57,500 testing cases.
The possible cases are:
 Correct acceptance:
The system was asked if the signature belonged to the correct owner and the response was positive.
 False rejection:
The system was asked if the signature belonged to the correct owner and the response was negative.
 False acceptance:
The system was asked if the signature belonged to a false owner and the response was positive.
 Correct rejection:
The system was asked if the signature belonged to a false owner and the response was negative.
8.2. The recognition scenario
For each signature in TS, we queried the system 115 times (one time for
each owner). The system proposes as the signature owner the owner that gives the
maximum output value of the Back Propagation neural network.
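A sketch of this selection step follows (hypothetical: it assumes that one trained BackProp decision network is kept per enrolled owner, which the GUI code in Section 7 does not actually persist):

// Hypothetical recognition step: run the unknown signature's input vector
// through every owner's decision network and pick the owner with the highest output.
static int recognizeOwner(BackProp[] ownerNets, double[] inputVector)
{
    int bestOwner = -1;
    double bestOutput = Double.NEGATIVE_INFINITY;
    for(int owner = 0; owner < ownerNets.length; owner++)
    {
        double output = ownerNets[owner].propagate(inputVector)[0];
        if(output > bestOutput)
        {
            bestOutput = output;
            bestOwner = owner;
        }
    }
    return bestOwner;   // index of the proposed signature owner
}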

9. Conclusion & Remarks

This report proposes a new off-line signature verification and recognition technique. The entire system is based on 160 features grouped into three subsets and on a two-stage neural network classifier that is arranged in a one-class-one-network scheme. During the training process of the first stage, only small, fixed-size neural networks have to be trained, while, for the second stage, the training process is straightforward.
In designing the proposed system, most of our effort went toward embodying most of the intelligence in the structure of the system itself. No feature reduction process was used, and the basic rule of thumb in deciding which features to include and which not was "use all features and let the neural networks decide which of them are important and which are not". Usually, such a rule leads to very large and complicated neural networks that are very difficult to get trained. The innovation of the proposed system is the categorization of the features into groups and the adoption of a two-stage structure. We showed that such a structure leads to small, easily trained classifiers without hazarding performance by leaving out features that may be useful to the system.
Besides the advantage of easy training, the proposed structure offers the substantial benefit of being able to expand with new signatures without having to retrain the entire system from scratch. That is, no a priori knowledge concerning the number of persons and the number of signatures is required at design time.
It should also be noted that the performance of the system, as illustrated by the recognition and verification rates that we presented, expresses a worst-case scenario. The signers were asked to use as much variation in their signatures as they would ever use under real circumstances. The type of features and the classifier used were shown to make the entire system independent of the signature type and size.

