PROJECT REPORT
ON
SIGNATURE VERIFICATION
SUBMITTED TO
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
BHOPAL (M.P.)
In partial fulfillment of the requirement for the award of the
Degree of
BACHELOR OF ENGINEERING
IN
INFORMATION TECHNOLOGY
SESSION
2010-2011
SUBMITTED BY
Vikalp Kulshrestha
PROF. U. Dutta
DEPARTMENT OF CS/IT
MAHARANA PRATAP COLLEGE OF TECHNOLOGY
GWALIOR – 474006 (M.P.)
MAHARANA PRATAP COLLEGE OF
TECHNOLOGY, GWALIOR – 474006
2010-2011
CERTIFICATE
ACKNOWLEDGEMENT
Candidate’s Declaration
Verification
This is to verify that the above declaration made by the candidate is true to the best of our knowledge.
INDEX
Introduction
Preprocessing
• Gray Image Conversion
• Binary Image Conversion
• Noise Reduction
• Width Normalization
• Skeletonization
Feature Extraction
• Global Features
o Image area
o Vertical center of the signature
o Horizontal center of the signature
o Maximum vertical projection
o Maximum horizontal projection
o Vertical projection peaks
o Horizontal projection peaks
o Number of edge points
o Number of cross points
• Grid Information Feature
The Signature Database
Classification
The Training Phase
Implementation in JAVA
Testing Phase & Result
• Verification scenario
• The recognition scenario
Conclusion
1. INTRODUCTION
In this report, a novel approach for off-line signature recognition and verification is proposed. The presented system is based on two powerful feature sets in combination with a multiple-stage neural-network-based classifier (Fig. 1). The novelty of the system lies mainly in the structure of the classifier and the way it is used: the neural network classifier is arranged in two stages.
The ability to easily add signatures of new owners to its database, and to remove those of obsolete owners, must be inherent in a signature recognition and verification system (SRVS). The approach taken toward this goal is a one-class-one-network scheme, that is, one classifier for each signature owner: each time signatures from a new owner are added to the SRVS database, only a small, fixed-size neural-network-based classifier must be trained.
For each of the two resulting feature groups, an individual multi-layer perceptron (MLP) neural network is implemented. These small, fixed-size neural networks for each signature owner constitute the first stage of the classifier. It is the task of the second-stage classifier, a radial basis function (RBF) neural network, to combine the results of the first stage and make the final decision of whether the signature presented to the system belongs to the candidate owner or not.
The experimental results confirmed the effectiveness of the proposed structure and showed its ability to yield high recognition and verification rates.
2. PREPROCESSING
2.1 Noise Reduction
Rule: if the number of the 8-neighbors of a pixel that have the same color as the central pixel is less than two, we reverse the color of the central pixel.
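The rule translates directly into code. A minimal sketch (the isNoise name is illustrative; the actual implementation appears in the Preprocessing class in Section 7):
// count the 8-neighbors that share the central pixel's color;
// fewer than two means the central pixel's color is reversed
static boolean isNoise(int[] img, int w, int x, int y)
{
int same = 0;
for (int dy = -1; dy <= 1; dy++)
for (int dx = -1; dx <= 1; dx++)
{
if (dx == 0 && dy == 0) continue;
if (img[(y + dy) * w + x + dx] == img[y * w + x]) same++;
}
return same < 2;
}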
2.2 Data Area Cropping
The signature area is separated from the background by using the well
known segmentation methods of vertical and horizontal projection.
Thus, the white space surrounding the signature is discarded.
2.3 Width Normalization
The image size is adjusted so that the width reaches a default value while the height-to-width ratio remains unchanged.
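In code this amounts to a single scaling call; a minimal sketch, where DEFAULT_WIDTH is an assumed constant:
// scale the cropped signature to a fixed width, preserving the aspect ratio
int normW = DEFAULT_WIDTH;
int normH = (h * DEFAULT_WIDTH) / w;   // height follows the width
Image normalized = img.getScaledInstance(normW, normH, Image.SCALE_SMOOTH);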
2.4 Skeletonization
A simplified version of the skeletonization technique is used. The simplified algorithm used here consists of the following three steps:
Step 1:
Mark all the points of the signature that are candidates for removal (black pixels that have at least one white 8-neighbor and at least two black 8-neighbors).
Step 2:
Examine the marked points one by one, following the contour lines of the signature image, and remove those whose removal will not cause a break in the resulting pattern.
Step 3:
If at least one point was deleted, go to Step 1 and repeat the process once more. Fig. 2 shows an example of this skeletonization technique.
Skeletonization makes the extracted features invariant to image characteristics such as the quality of the pen and paper the signer used, and the digitizing method and quality. (The thinning code itself appears in class Thin1 in Section 7.)
Figure 2. Example of the skeletonization algorithm.
3. Feature Extraction
For the grid information feature, the signature image is segmented into rectangular areas and, for each area, information about the transitions between black and white pixels in the four different directions is used.
The vertical projection of the skeletonized signature image is calculated. The highest value of the projection histogram is taken as the maximum vertical projection.
Number of edge points: an edge point is defined as a signature point that has only one 8-neighbor.
Number of cross points: a cross point is a signature point that has at least three 8-neighbors.
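The maximum projections follow directly from the histograms; a minimal sketch using the vertical histogram vh built in the ExtractFeature class below (maxVProjection is an illustrative name, the horizontal case is symmetric):
// maximum vertical projection = tallest column of the projection histogram
int maxVProjection = 0;
for (int x = 0; x < vh.length; x++)
if (vh[x] > maxVProjection)
maxVProjection = vh[x];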
Figure 3: Examples of corner (C1, C2, C3, C4) and edge (E1,
E2, E3, E4) points.
Figure 4: The grid feature vector of a signature.
4. The Signature Database
The signatures have been taken from 25 persons (2 signatures from each). For training the system, sets of 50 signatures taken from the master set were used.
The performance of the system has been checked by the use of the remaining signatures. In order to make the system robust to intra-personal variations and to extract worst-case classification rates, the signers were asked to use as much variation in their signature sizes and shapes as they would ever use in real circumstances.
5. Classification
Multi-layer Perceptron (MLP) neural networks are among the most commonly used classifiers for pattern recognition problems. Despite their advantages, they suffer from some very serious limitations that make their use, for some problems, impossible. The first limitation is the size of the neural network: it is very difficult for very large neural networks to get trained. As the amount of training data increases, this difficulty becomes a serious obstacle for the training process.
The second difficulty is that the geometry, the size of the network, the training method used and the training parameters depend substantially on the amount of the training data. Also, in order to specify the structure and the size of the neural network, it is necessary to know a priori the number of classes that the neural network will have to deal with. Unfortunately, for a useful SRVS, a priori knowledge about the number of signatures and the number of signature owners is not available. The proposed SRVS confronts these problems by reducing the training computation time and the size of the neural networks used. This is achieved by:
Reduction of the feature space. The feature set is split into two different groups, i.e., global features and grid features. Due to the different nature and the lack of correlation between the two feature sets, the combination of the two feature vectors covers the required feature information.
Reduction of the necessary training samples. This is achieved because each neural network corresponds to only one signature owner. Specifically, during the first stage of classification, multiple but fixed-size neural networks are used (Figs. 1 and 6). In Fig. 1, each one of the neural networks NN1 and NN2 specializes in signatures of only one person. For practical systems, this approach offers another significant advantage: each time we want to add a set of signatures (a new person) to the system's database, we only have to train two new small neural networks (one for each set of features). It is not necessary to retrain a very large neural network, which is of course a much more difficult task. Due to the use of many neural networks, it is necessary to apply a training algorithm that can train them efficiently, avoiding local minima; a training algorithm of a stochastic nature is used for this purpose.
The second-stage classifier is an RBF neural network with an input layer (fed by the outputs of the first-stage classifiers), a hidden layer with two non-linear neurons and a simple linear output neuron.
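As a sketch of this decision flow (all names here are illustrative; the report's own code trains a single back-propagation network per owner and thresholds its output):
// combine the first-stage outputs into the second-stage input vector and
// accept the signature when the second-stage output is positive
double[] stage2In = { nnGlobalOut, nnGridOut, avgEuclideanDistance };
double[] decision = secondStage.propagate(stage2In);   // secondStage: a trained network
boolean belongsToOwner = decision[0] > 0;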
6. Training Phase
7. Implementation in Java
Ten classes have been developed to implement signature verification; their code is as follows:
** 1. …………………………..Class MProj ………………………………….
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
public class MProj extends Frame implements ActionListener
{
// the declarations below are reconstructed from their use in the methods that follow
Image img1,img2,img7,img8;
Graphics g1,g2;
int w,h,w1,h1,w2,h2;
int p[],q[],thinp[],thinq[];
double gridvalue1[]=new double[96],gridvalue2[]=new double[96];
NeuralPanel ne;
BackProp bp;
MPanel PanelImg1,PanelImg2;
Button bLoadImage1,bLoadImage2,bPreProcessing1,bPreProcessing2,
bFExtraction1,bFExtraction2,bMatch;
Button neuralNetwork;
int edgepoint1,crosspoint1,blackpixels1;
int verticalhistogram1[],horizontalhistogram1[];
int verticalcentre1,horizontalcentre1;
int vprojectionpeak1,hprojectionpeak1,gridfeatures1[],closedloop1;
int edgepoint2,crosspoint2,blackpixels2;
int verticalhistogram2[],horizontalhistogram2[];
int verticalcentre2,horizontalcentre2;
int vprojectionpeak2,hprojectionpeak2,gridfeatures2[],closedloop2;
MProj()
{
setSize(800,500);
setLayout(null);
setBackground(Color.gray);
addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e){System.exit(0);}}
);
PanelImg1=new MPanel(0,20);
PanelImg2=new MPanel(450,20);
PanelImg1.setBackground(Color.black);
PanelImg2.setBackground(Color.black);
bLoadImage1=new Button("Load1");
neuralNetwork=new Button("Neural Network");
bLoadImage2=new Button("Load2");
bPreProcessing1=new Button("Preprocessing1");
bPreProcessing2=new Button("Preprocessing2");
bFExtraction1=new Button("Feature Extraction1");
bFExtraction2=new Button("Feature Extraction2");
bMatch=new Button("Match");
add(PanelImg1);
add(PanelImg2);
add(neuralNetwork);
add(bLoadImage1);
add(bLoadImage2);
add(bPreProcessing1);
add(bPreProcessing2);
add(bFExtraction1);
add(bFExtraction2);
add(bMatch);
bLoadImage1.setBounds(100,400,50,30);
bLoadImage2.setBounds(570,400,50,30);
bPreProcessing1.setBounds(350,100,100,30);
bPreProcessing2.setBounds(350,150,100,30);
bFExtraction1.setBounds(350,200,110,30);
bFExtraction2.setBounds(350,250,110,30);
neuralNetwork.setBounds(350,320,100,30);
bMatch.setBounds(380,410,50,30);
bLoadImage1.addActionListener(this);
bLoadImage2.addActionListener(this);
bFExtraction1.addActionListener(this);
bPreProcessing1.addActionListener(this);
bFExtraction2.addActionListener(this);
bPreProcessing2.addActionListener(this);
bMatch.addActionListener(this);
neuralNetwork.addActionListener(this);
bFExtraction1.setEnabled(false);
bPreProcessing1.setEnabled(false);
bFExtraction2.setEnabled(false);
bPreProcessing2.setEnabled(false);
bMatch.setEnabled(false);
neuralNetwork.setEnabled(false);
setVisible(true);
}
public void actionPerformed(ActionEvent e)
{
String dname,fname;
String cmd=e.getActionCommand();
if(cmd.equals("Load1"))
{
try{
g1=PanelImg1.getGraphics();
FileDialog fd= new FileDialog(this,"open",FileDialog.LOAD);
fd.setVisible(true);
dname=fd.getDirectory();
fname=fd.getFile();
System.out.println(dname+fname);
img1=Toolkit.getDefaultToolkit().getImage(dname+fname);
MediaTracker mt = new MediaTracker(PanelImg1);
mt.addImage(img1,0);
try{
mt.waitForID(0);
} catch(Exception e1){}
w=img1.getWidth(null);
h=img1.getHeight(null);
if(w<=0||h<=0)
{
new DD(this,true,"NO FILE SELECTED");
}
w1=(3*w)/4;
h1=(3*h)/4;
p=new int[w1*h1];
img2=img1.getScaledInstance(w1,h1,Image.SCALE_SMOOTH);
// (reconstructed remainder of the "Load1" branch) copy the scaled image's
// pixels into p, show it, and enable the preprocessing step
PixelGrabber pg=new PixelGrabber(img2,0,0,w1,h1,p,0,w1);
pg.grabPixels();
g1.drawImage(img2,20,40,w1,h1,PanelImg1);
PanelImg1.setImage(img2,w1,h1,g1);
bPreProcessing1.setEnabled(true);
}catch(Exception ex){System.out.println(ex);}
}
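// The whole "Load2" branch is missing from this listing; a minimal
// reconstruction mirroring "Load1" for the second panel (the local names
// img3 and img4 are assumed, not from the original):
else if(cmd.equals("Load2"))
{
try{
g2=PanelImg2.getGraphics();
FileDialog fd=new FileDialog(this,"open",FileDialog.LOAD);
fd.setVisible(true);
Image img3=Toolkit.getDefaultToolkit().getImage(fd.getDirectory()+fd.getFile());
MediaTracker mt=new MediaTracker(PanelImg2);
mt.addImage(img3,0);
try{ mt.waitForID(0); }catch(Exception e1){}
w2=(3*img3.getWidth(null))/4;
h2=(3*img3.getHeight(null))/4;
q=new int[w2*h2];
Image img4=img3.getScaledInstance(w2,h2,Image.SCALE_SMOOTH);
PixelGrabber pg=new PixelGrabber(img4,0,0,w2,h2,q,0,w2);
pg.grabPixels();
g2.drawImage(img4,20,40,w2,h2,PanelImg2);
PanelImg2.setImage(img4,w2,h2,g2);
bPreProcessing2.setEnabled(true);
}catch(Exception ex){System.out.println(ex);}
}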
else if(cmd.equals("Preprocessing1"))
{
Preprocessing pp=new Preprocessing();
System.out.println("we r in preprocessing block1");
thinp=pp.imagePreprocessing(p,w1,h1);
img7 =createImage(new MemoryImageSource(w1,h1,thinp,0,w1));
g1.drawImage(img7,20,40,w1,h1,PanelImg1);
PanelImg1.setImage(img7,w1,h1,g1);
bFExtraction1.setEnabled(true);
bPreProcessing1.setEnabled(false) ;
}
else if(cmd.equals("Preprocessing2"))
{
Preprocessing pp1=new Preprocessing();
System.out.println("we r in preprocessing block2");
thinq=pp1.imagePreprocessing(q,w2,h2);
img8 =createImage(new MemoryImageSource(w2,h2,thinq,0,w2));
g2.drawImage(img8,20,40,w2,h2,PanelImg2);
PanelImg2.setImage(img8,w2,h2,g2);
bFExtraction2.setEnabled(true);
bPreProcessing2.setEnabled(false) ;
}
else if(cmd.equals("Feature Extraction1"))
{
ExtractFeature fe1=new ExtractFeature(thinp,w1,h1);
verticalhistogram1=new int[w1];
horizontalhistogram1=new int[h1];
gridfeatures1=new int[96];   // the grid has 96 cells (12 x 8)
edgepoint1=fe1.edgePoint();
crosspoint1=fe1.crossPoint();
blackpixels1=fe1.blackPixels();
verticalhistogram1=fe1.verticalHistogram();
horizontalhistogram1=fe1.horizontalHistogram();
verticalcentre1=fe1.verticalCentre();
horizontalcentre1=fe1.horizontalCentre();
vprojectionpeak1=fe1.vprojectionPeaks();
hprojectionpeak1=fe1.hprojectionPeak();
closedloop1=fe1.closedLoop();
int tmp[];
tmp=fe1.grid();
for(int i=0;i<96;i++)
gridvalue1[i]=tmp[i];
neuralNetwork.setEnabled(true);
}
else if(cmd.equals("Feature Extraction2"))
{
ExtractFeature fe2=new ExtractFeature(thinq,w2,h2);
verticalhistogram2=new int[w2];
horizontalhistogram2=new int[h2];
gridfeatures2=new int[96];
edgepoint2=fe2.edgePoint();
crosspoint2=fe2.crossPoint();
blackpixels2=fe2.blackPixels();
verticalhistogram2=fe2.verticalHistogram();
horizontalhistogram2=fe2.horizontalHistogram();
verticalcentre2=fe2.verticalCentre();
horizontalcentre2=fe2.horizontalCentre();
vprojectionpeak2=fe2.vprojectionPeaks();
hprojectionpeak2=fe2.hprojectionPeak();
closedloop2=fe2.closedLoop();
int tmp2[]=fe2.grid();
for(int i=0;i<96;i++)
gridvalue2[i]=tmp2[i];   // grid features of the second signature
}
else if(cmd.equals("Neural Network"))
{
ne=new NeuralPanel(gridvalue1,this);
if( bFExtraction2.isEnabled()==true)
bMatch.setEnabled(true);
}
else if(cmd.equals("Match"))
{
bp=ne.retBackProp();
double resultID[] = bp.propagate(gridvalue2);   // check signature 2 against the network trained on signature 1
double gridResult=(1-resultID[0])*100;
System.out.println("result"+resultID[0]);
double edgepointdiff,crosspointdiff,blackpixeldiff,
verticalcentrediff,horizontalcentrediff,
vprojectionpeakdiff,hprojectionpeakdiff;
// multiply by 100.0 first so the divisions are done in floating point
edgepointdiff=(Math.abs(edgepoint1-edgepoint2)*100.0)/edgepoint1;
crosspointdiff=(Math.abs(crosspoint1-crosspoint2)*100.0)/crosspoint1;
blackpixeldiff=(Math.abs(blackpixels1-blackpixels2)*100.0)/blackpixels1;
verticalcentrediff=(Math.abs(verticalcentre1-verticalcentre2)*100.0)/verticalcentre1;
horizontalcentrediff=(Math.abs(horizontalcentre1-horizontalcentre2)*100.0)/horizontalcentre1;
vprojectionpeakdiff=(Math.abs(vprojectionpeak1-vprojectionpeak2)*100.0)/vprojectionpeak1;
hprojectionpeakdiff=(Math.abs(hprojectionpeak1-hprojectionpeak2)*100.0)/hprojectionpeak1;
System.out.println("edgepoint difference"+edgepointdiff);
if(edgepointdiff<=15 && crosspointdiff<=15
&& vprojectionpeakdiff<=15
&& hprojectionpeakdiff<=15
&& verticalcentrediff<=15
&& horizontalcentrediff<=15
&& gridResult<=15)
{
new DD(this,true,"SIGNATURE MATCHED");
}
else
{
// (the non-matching branch is assumed; it mirrors the matched case)
new DD(this,true,"SIGNATURE NOT MATCHED");
}
}
}//end of Action Listener
public static void main(String ar[])
{
MProj mp = new MProj();
}
}
** 2. …………………………..Class BackProp ………………………………….
import java.util.Random;
class BackProp
{
private double inputA[];
private double hiddenA[];
private double hiddenN[];
private double hiddenD[];
private double hiddenW[][];
private double outputA[];
private double outputN[];
private double outputD[];
private double oldD[];
private double outputW[][];
private double biasH[];
private double biasO[];
private int numInput;
private int numHidden;
private int numOutput;
private int epoch;
private double momentum;
private double alpha;
private double absError;
private Random rand;
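// The three-argument constructor called below is missing from this listing
// (lost page); a minimal reconstruction that sizes the arrays the other
// methods use and then randomizes the weights via init():
public BackProp(int input, int hidden, int output)
{
numInput = input; numHidden = hidden; numOutput = output;
inputA = new double[numInput];
hiddenA = new double[numHidden]; hiddenN = new double[numHidden];
hiddenD = new double[numHidden]; hiddenW = new double[numHidden][numInput];
outputA = new double[numOutput]; outputN = new double[numOutput];
outputD = new double[numOutput]; oldD = new double[numOutput];
outputW = new double[numOutput][numHidden];
biasH = new double[numHidden]; biasO = new double[numOutput];
alpha = 0.5; momentum = 1.0D;   // illustrative defaults, overridden by the 5-arg constructor
rand = new Random();
init();
}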
public BackProp(int input, int hidden, int output, double alpha, double mom)
{
this(input, hidden, output);
this.alpha = alpha;
momentum = mom;
}
private void init()
{
epoch = 0;
for(int i = 0; i < numInput; i++)
{
inputA[i] = frandom(-1D, 1.0D);
}
for(int i = 0; i < numHidden; i++)
{
hiddenA[i] = frandom(-1D, 1.0D);
biasH[i] = frandom(-1D, 1.0D);
for(int m = 0; m < numInput; m++)
{
hiddenW[i][m] = frandom(-1D, 1.0D);
}
}
for(int i = 0; i < numOutput; i++)
{
biasO[i] = frandom(-1D, 1.0D);
for(int m = 0; m < numHidden; m++)
{
outputW[i][m] = frandom(-1D, 1.0D);
}
}
}
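// frandom() is used by init() but its listing is missing; a minimal helper
// returning a uniform random double in [low, high):
private double frandom(double low, double high)
{
return low + (high - low) * rand.nextDouble();
}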
private double sigmoid(double x)
{
return 1.0D / (1.0D + Math.exp(-x));
}
private double sigmoidDeriv(double x)
{
return sigmoid(x) * ((double)1 - sigmoid(x));
}
private void feedForward()
{
double sum2 = 0.0D;
for(int i = 0; i < numHidden; i++)
{
sum2 = biasH[i];
for(int j = 0; j < numInput; j++)
{
sum2 += hiddenW[i][j] * inputA[j];
}
hiddenN[i] = sum2;
hiddenA[i] = sigmoid(sum2);
}
for(int i = 0; i < numOutput; i++)
{
sum2 = biasO[i];
for(int j = 0; j < numHidden; j++)
{
sum2 += outputW[i][j] * hiddenA[j];
}
outputN[i] = sum2;
}
}
private void computeDelta(int m)
{
outputD[m] = (outputA[m] - sigmoid(outputN[m])) * sigmoidDeriv(outputN[m]);
for(int i = 0; i < numHidden; i++)
{
outputW[m][i] += outputD[m] * hiddenA[i] * alpha;
}
for(int i = 0; i < numOutput; i++)
{
biasO[i] += outputD[m] * alpha;
}
}
// computeDelta() handles the output layer; the hidden-layer update that
// followed fell on a lost page. A minimal reconstruction of updateWeights(),
// back-propagating the output deltas and updating the hidden weights:
private void updateWeights()
{
for(int i = 0; i < numHidden; i++)
{
double sum = 0.0D;
for(int j = 0; j < numOutput; j++)
sum += outputD[j] * outputW[j][i];
hiddenD[i] = sum * sigmoidDeriv(hiddenN[i]);
biasH[i] += hiddenD[i] * alpha;
for(int j = 0; j < numInput; j++)
hiddenW[i][j] += hiddenD[i] * inputA[j] * alpha;
}
}
public double learnVector(double in[], double out[])
{
for(int i =0;i<numInput;i++)
{
inputA[i] = in[i];
}
for(int i = 0; i < numOutput; i++)
{
outputA[i] = out[i];
}
feedForward();
absError = 0.0D;
for(int j = 0; j < numOutput; j++)
{
computeDelta(j);
absError += Math.pow(outputA[j] - sigmoid(outputN[j]), 2D);
}
updateWeights();
alpha *= momentum;
return absError;
}
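// propagate() is called from MProj's "Match" branch but its listing is
// missing; a minimal reconstruction that feeds a vector forward and returns
// the sigmoid-activated outputs:
public double[] propagate(double in[])
{
for(int i = 0; i < numInput; i++)
inputA[i] = in[i];
feedForward();
double result[] = new double[numOutput];
for(int i = 0; i < numOutput; i++)
result[i] = sigmoid(outputN[i]);
return result;
}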
public int numOutput()
{
return numOutput;
}
public double momentum()   // (signature reconstructed; the original line fell on a lost page)
{
return momentum;
}
public void setBiasO(int output, double bias)   // (signature reconstructed from the body)
{
biasO[output] = bias;
}
}
** 3. ……………………..Class DD For Dialog Box………………………………….
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
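// The body of class DD was lost in extraction. A minimal sketch of a modal
// message dialog consistent with its usage elsewhere, e.g.
// new DD(this,true,"SIGNATURE MATCHED"); the member names here are assumed:
class DD extends Dialog implements ActionListener
{
Button ok;
DD(Frame parent,boolean modal,String msg)
{
super(parent,"Message",modal);
setLayout(new FlowLayout());
add(new Label(msg));
ok=new Button("OK");
add(ok);
ok.addActionListener(this);
setSize(250,120);
setVisible(true);
}
public void actionPerformed(ActionEvent e)
{
dispose();
}
}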
** 4. …………………………..Class ExtractFeature ………………………………….
class ExtractFeature
{
int tempindex,edgeblack,countblack=0;
int vh[],hh[],count,vpeak,hpeak;
int temp[]=new int[9],thin[];
int edge=0,crosspoint=0;
int CH,CY;
int el[]=new int[700],edeparture,closedloop;
int sum1=0;
int sum2=0;
int gridval[]=new int [96];
public ExtractFeature(int thin[],int w1,int h1)
{
this.thin=thin;
for(int y=0;y<(h1-2);y++)
{
for(int x=0;x<(w1-2);x++)
{
tempindex=0;
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
temp[tempindex]=thin[(y+i)*w1+x+j];
tempindex++;
}
}
edgeblack=0;
if(temp[4]==0)
{
for(int a=0;a<9;a++)
{
if(a==4)
{
}
else
{
if(temp[a]==0)
edgeblack++;
}
}
}
if(edgeblack==1)
{
edge ++;
}
if(edgeblack>=3)
{
crosspoint++;
el[crosspoint-1]=edgeblack;
}
}
}
//................extra departure..........;........
for(int i=0;i<crosspoint;i++)
edeparture+=(el[i]-2);
closedloop=1+(edeparture-edge)/2;
for(int i=0;i<thin.length;i++)
{
if(thin[i]==0)
{
countblack++;
}
}//COUNTING END
//System.out.println("total pixels are"+i+"\nNo. of black pixels are"+countblack);
//............FOR VERTICAL PROJECTION HISTOGRAM'......
vh=new int[w1];
hh=new int[h1];
for(int x=0;x<w1;x++)
{
count=0;
for(int y=0;y<h1;y++)
{
if(thin[w1*y+x]==0)
count++;
}
vh[x]=count;
}
//.........FOR HORIZONTAL HISTOGRAM.........
for(int y=0;y<h1;y++)
{
count=0;
for(int x=0;x<w1;x++)
{
if(thin[w1*y+x]==0)
count++;
}
hh[y]=count;
}
//...............FOR VERTICAL PROJECTION PEAKS..........
vpeak=0;
for(int x=0;x<w1;x++)
{
if(x==0 && vh[x]>vh[x+1])
vpeak++;
if(x>0 && x<w1-1 && vh[x]>vh[x+1] && vh[x]>vh[x-1])
vpeak++;
if(x==(w1-1) && vh[x]>vh[x-1])
vpeak++;
}
//.................FOR HORIZONTAL PROJECTION PEAKS..........
hpeak=0;
for(int x=0;x<h1;x++)
{
if(x==0 && hh[x]>hh[x+1])
hpeak++;
if(x>0 && x<h1-1 && hh[x]>hh[x+1] && hh[x]>hh[x-1])
hpeak++;
if(x==(h1-1) && hh[x]>hh[x-1])
hpeak++;
}
//.....................Vertical Centre..................
// centre = average position of the black pixels, computed from the
// projection histograms already built above
for(int y=0;y<h1;y++)
{
sum1+=hh[y]*(y+1);
sum2+=hh[y];
}
CY=(sum2==0)?0:sum1/sum2;
//.............for horizontal centre of signature.......
sum1=0;
sum2=0;
for(int x=0;x<w1;x++)
{
sum1+=vh[x]*(x+1);
sum2+=vh[x];
}
CH=(sum2==0)?0:sum1/sum2;
//..............................grid information Feature.................
int gwidth,gheight,indextemp=0,gridblack,blackavg;
int gblack[]=new int[96];
gwidth=w1/12;gheight=h1/8;
for(int y=0;y<8*gheight;y=y+gheight)
{
for(int x=0;x<12*gwidth;x=x+gwidth,indextemp++)   // exactly 12 x 8 = 96 cells
{
gridblack=0;
for(int i=0;i<gheight;i++)
{
for(int j=0;j<gwidth;j++)
{
try{
if(thin[w1*(y+i)+x+j]==0)
gridblack++;
}catch(Exception e){System.out.println("y"+y+"i"+i+"j"+j);}
}
}
gblack[indextemp]=gridblack;
}
}
blackavg=countblack/96;indextemp=0;
System.out.println("grid width="+gwidth+"gridheight="+gheight);
for(int y=0;y<8*gheight;y=y+gheight)
{
for(int x=0;x<12*gwidth;x=x+gwidth,indextemp++)
{
System.out.println("No.of black pixel in grid no."+indextemp+"is"
+gblack[indextemp]);
if(gblack[indextemp]>blackavg)
{
for(int i=0;i<gheight;i++)
for(int j=0;j<gwidth;j++)
thin[w1*(y+i)+x+j]=0;
gridval[indextemp]=1;
}
if(gblack[indextemp]<=blackavg)
{
for(int i=0;i<gheight;i++)
for(int j=0;j<gwidth;j++)
thin[w1*(y+i)+x+j]=-1;
gridval[indextemp]=-1;
}
}
}
}
public int edgePoint( )
{
return edge;
}
public int crossPoint( )
{
return crosspoint;
}
public int closedLoop()
{
System.out.println("no of closed loop"+closedloop);
return closedloop;
}
public int blackPixels()
{
System.out.println("black pixels"+countblack);
return countblack;
}
public int[] verticalHistogram()
{
// System.out.println("black pixels"+vh.lengt);
return vh;
}
public int[] horizontalHistogram()
{
return hh;
}
public int verticalCentre()
{
System.out.println("vertical centre"+CY);
return CY;
}
public int horizontalCentre()
{
System.out.println("horizontal centre"+CH);
return CH;
}
public int vprojectionPeaks()
{
System.out.println("vertical peak"+vpeak);
return vpeak;
}
public int hprojectionPeak()
{
System.out.println("horizontal peak"+hpeak);
return hpeak;
}
public int[] grid()
{
for(int i=0;i<96;i++)
System.out.println("gridvalue="+gridval[i]);
return gridval;
}
}
** 5. …………………………..Class MPanel ………………………………….
import java.awt.*;
import java.awt.image.*;
class MPanel extends Panel
{
Image img;
int w,h;
boolean error=false;
//Graphics g;
MPanel(int x,int y)
{
setBounds(x,y,340,340);
}
public void setImage(Image img1,int w1,int h1,Graphics g1)
{
img=img1;
w=w1;
h=h1;
// g=g1;
}
public void paint(Graphics g)
{
if(error){g.drawString("Error ",50,50);}
if(img!=null)
g.drawImage(img,10,40,w,h,this);
}
public void update(Graphics g)
{
paint(g);
}
public boolean imageUpdate(Image img,int flags,int x,int y,int w,int h)
{
if((flags& SOMEBITS)!=0)
{
repaint(x,y,w,h);
}
else if((flags&(ABORT))!=0)
{
error=true;
repaint();
}
return (flags &(ALLBITS |ABORT))==0;
}
}
** 6. …………………………..Class NeuralPanel ………………………………….
import java.awt.*;
import java.awt.event.*;
public class NeuralPanel extends Dialog implements ActionListener
{
TextField ipText,opText,hnText,learnText,momentumText,stepsText;
Label ipLabel,opLabel, hnLabel,learnLabel,momentumLabel,stepsLabel;
Button trainNetwork,close;
BackProp nNetwork2;
double ivoriginal[],ov[]={1};
MProj mproj1;
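// The constructor's opening (its signature and the widget construction) was
// lost in extraction; a minimal reconstruction consistent with the fields
// above and the layout code below (the default field texts are assumed):
public NeuralPanel(double iv[],MProj mp)
{
super(mp,"Neural Network",false);
mproj1=mp;
ivoriginal=iv;
setLayout(null);
setSize(250,400);
ipText=new TextField("96");
opText=new TextField("1");
hnText=new TextField("10");
learnText=new TextField("0.5");
momentumText=new TextField("0.9");
stepsText=new TextField("500");
trainNetwork=new Button("Train");
close=new Button("Close");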
ipLabel=new Label("Input");
opLabel=new Label("Output");
hnLabel=new Label("Hidden");
learnLabel=new Label("Learn Rate");
momentumLabel=new Label("Momentum");
stepsLabel=new Label("Steps");
trainNetwork.addActionListener(this);
close.addActionListener(this);
add(trainNetwork);
add(close);
add(ipText);
add(opText);
add(hnText);
add(learnText);
add(momentumText);
add(stepsText);
add(ipLabel);
add(opLabel);
add(hnLabel);
add(learnLabel);
add(momentumLabel);
add(stepsLabel);
ipLabel.setBounds(10,30,100,30);
ipText.setBounds(80,30,100,30);
opLabel.setBounds(10,70,100,30);
opText.setBounds(80,70,100,30);
hnLabel.setBounds(10,110,100,30);
hnText.setBounds(80,110,100,30);
learnLabel.setBounds(10,150,100,30);
learnText.setBounds(80,150,100,30);
momentumLabel.setBounds(10,190,100,30);
momentumText.setBounds(80,190,100,30);
stepsLabel.setBounds(10,220,100,30);
stepsText.setBounds(80,220,100,30);
trainNetwork.setBounds(100,270,50,30);
close.setBounds(100,320,100,30);
setVisible(true);
}
public void actionPerformed(ActionEvent ae)
{
String st=ae.getActionCommand();
if(st.equals("Train"))
{
try{
int inputs = Integer.parseInt(ipText.getText());
int hiddens = Integer.parseInt(hnText.getText());
int outputs = Integer.parseInt(opText.getText());
double learnRate = Double.parseDouble(learnText.getText());
double momentum = Double.parseDouble(momentumText.getText());
int steps = Integer.parseInt(stepsText.getText());
// (page lost in extraction; reconstructed) build the network from the dialog
// fields, then train it on the grid feature vector for the requested steps
nNetwork2 = new BackProp(inputs, hiddens, outputs, learnRate, momentum);
for(int i = 0; i < steps; i++)
{
double sumError = 0.0D;
sumError += nNetwork2.learnVector(ivoriginal, ov);
System.out.println("output from training "+sumError);
}
}catch(NumberFormatException e){new DD(mproj1,true,"Invalid Input");}
}
else if(st.equals("Close"))
{
dispose();
}
}
public BackProp retBackProp()
{
return nNetwork2;
}
}
** 7. …………………………..Class Preprocessing………………………………….
import java.awt.*;
public class Preprocessing
{
int p[],q[],thin[];
int rgb,rgb1[],countblack=0;
int tr,tg,tb;
public int[] imagePreprocessing(int pixel[],int w1,int h1)
{
p=new int[w1*h1];
q=new int[w1*h1];
p=pixel;
for(int i=0;i<p.length;i++)
{
rgb=p[i];
tr=(rgb>>16)& 0xff;
tg=(rgb>>8)& 0xff;
tb=rgb & 0xff;
rgb=(int) (tr+tg+tb)/3;
q[i]=rgb; //gray image array
}
// .................... Binary Conversion.................
int white=0,black=0;
//loop for black and white assignment start
int sum,h2=h1-(h1%4);
int w2=w1-(w1%4);
for(int y=0;y<h2;y=y+4)
{
for(int x=0;x<w2;x=x+2)
{
sum=0;
for(int i=0;i<4;i++)
for(int j=0;j<4;j++)
sum+=q[x+w1*(i+y)+j]&(0x000000ff);
int avg=sum/16;
//Black and White assignment loop start
for(int a=0;a<4;a++)
for(int b=0;b<2;b++)
{
if(( q[x+w1*(a+y)+b]&0x000000ff)<avg)
q[x+w1*(a+y)+b]=0;
else
q[x+w1*(a+y)+b]=-1;
}
//Black and White assignment loop closed
}
}
//...................Noise Reduction of Binary Image..................
int black1,white1;
for(int y=0;y<h1-2;y++)
{
for(int x=0;x<w1-2;x++)
{
black1=0;white1=0;
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
if(i==1 && j==1){}
else
{
if(q[w1*(y+i)+x+j]==0)
black1++;
else
white1++;
}
}
}
// test and modify the window's central pixel (y+1, x+1), not its top-left corner
if(q[w1*(y+1)+x+1]==0)
{
if(black1<2)
q[w1*(y+1)+x+1]=-1;
}
else
{
if(white1<2)
q[w1*(y+1)+x+1]=0;
}
}
}
//....................Thinning of Image...........................
// (reconstructed tail of the method) hand the binary image to Thin1
thin=Thin1.makeThin(q,w1,h1);
return thin;
}
}
** 8. …………………………..Class Thin1………………………………….
import java.awt.*;
import java.awt.image.*;
import java.net.*;
import java.util.*;
import java.io.*;
import java.lang.Math;
public class Thin1
{
// Pixel-value constants: their defining lines are missing from this listing.
// The values below are assumptions chosen to match the binary image produced
// by Preprocessing (black ink = 0, white = -1):
private static final int FRONT = 0;               // foreground (ink) pixel
private static final int BKGRND = -1;             // background pixel
private static final int COLOR_FOREGROUND = FRONT;
private static final int COLOR_BACKGROUND = BKGRND;
private static Vector edgeList = new Vector();
private static Vector centerList = new Vector();
private static final int COLOR_REMOVABLE = 2;//0xff0000ff;
// Color of the removable pixels
public static int[] makeThin(int[] src_1d,int ww,int hh)
{
boolean flag1;
do
{
flag1 = false;
for(int y = 0; y < hh; y++)
{
for(int x = 0; x < ww; x++)
if(getPixel(src_1d,ww,hh,x, y) == FRONT)
{
int i = nbors(src_1d,ww,hh,x, y);
/**
* originally (i>2 && i<7 ...), modified here to avoid cutting too much
* for 'y' or 'k'; however, spurs are produced by this modification.
* New modification: add matchPatterns() to stop removing p if:
* 000
* 0p1
* 011
*/
if(i >2 && i < 7 && cindex(src_1d,ww,hh,x, y) == 1){
if(!matchPatterns(src_1d,ww,hh,x,y))
src_1d[x+ww*y] = COLOR_REMOVABLE;
}
}
}
for(int y = 0; y < hh; y++)
{
for(int x = 0; x < ww; x++)
if(getPixel(src_1d,ww,hh,x, y) == COLOR_REMOVABLE)
{
src_1d[x+ww*y]=BKGRND;
flag1 = true;
}
}
}while(flag1);
return src_1d;
}//end of makeThin()
/**
* get number of neighbors which is not bkgrnd
*/
static int nbors(int[] src_1d,int ww,int hh,int i, int j)
{
int k = 0;
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1};
int ai1[] = {0, -1, -1, -1, 0, 1, 1, 1};
for(int l = 0; l< 8; l++)
if(getPixel(src_1d,ww,hh,i + ai[l], j + ai1[l]) !=BKGRND)
k++;
return k;
}//end of nbors()
/*
* get the number of pair of neighbor that is different
*/
static int cindex(int[] src_1d,int ww,int hh,int i, int j)
{
int k = 0;
int i1 = 0;
boolean flag1 = true;
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1};
int ai1[] = {0, -1, -1, -1, 0, 1, 1, 1};
for(int k1 = 0; k1<8; k1++)
{
if(flag1)
i1 = getPixel(src_1d,ww,hh,i + ai[k1], j + ai1[k1]);
int l = k1 + 1;
if(l == 8)
l = 0;
int j1 = getPixel(src_1d,ww,hh,i + ai[l], j + ai1[l]);
flag1 = true;
if(j1 != FRONT && l % 2 == 1)
flag1 = false;
else if(i1 == FRONT && j1 != FRONT || i1 != FRONT && j1 == FRONT)
k++;
}
return k / 2;
}//end of cindex()
static int getPixel(int[] src_1d,int ww,int hh,int i, int j)
{
if(i < 0 || i > ww - 1 || j< 0 || j > hh - 1)
return BKGRND;   // out-of-range pixels count as background
else
return src_1d[i+ww*j];
}
public static BufferedImage myThin(BufferedImage src)
{
int ai[] = {-1, -1, 0, 1, 1, 1, 0, -1 };
int ai1[] = { 0, -1, -1, -1, 0, 1, 1, 1 };
int ww=src.getWidth();
int hh=src.getHeight();
int[] src_1d=new int[ww*hh];
src.getData().getSamples(0,0,ww,hh,0,src_1d);
boolean changed;
do
{
changed=false;
centerList.clear();
edgeList.clear();
// (the pass that fills centerList and edgeList fell on a lost page; a
// minimal reconstruction: contour points meeting the makeThin() removal
// criteria become candidates in edgeList, every other foreground point
// goes to centerList)
for(int y=0;y<hh;y++)
for(int x=0;x<ww;x++)
if(getPixel(src_1d,ww,hh,x,y)==FRONT)
{
int n=nbors(src_1d,ww,hh,x,y);
if(n>2 && n<7 && cindex(src_1d,ww,hh,x,y)==1)
edgeList.addElement(new Point(x,y));
else
centerList.addElement(new Point(x,y));
}
//now remove those points in edgeList who have any neighbor in centerList
for(int i=0;i<edgeList.size();i++)
{
boolean remove=false;
Point pt=(Point)edgeList.elementAt(i);
for(int j = 0; j< 8; j++)
{
Point nbor=new Point(pt.x + ai[j],pt.y + ai1[j]);
if (centerList.contains(nbor))
{
remove=true;
break;
}
}//endfor
if(remove && isSafe(src_1d,ww,hh,pt))   // only remove connectivity-safe points
{
src_1d[ww * pt.y + pt.x] = BKGRND;
changed=true;
}
}//endfor
centerList.clear();
edgeList.clear();
}while(changed);
src.getRaster().setSamples(0,0,ww,hh,0,src_1d);
return src;
}//end of myThin()
/**
* isSafe(): true if removing the point would not break local connectivity
*/
private static boolean isSafe(int[] src_1d,int ww,int hh,Point pt)
{
int[][] m_Matrix22=new int[3][3];
k_ThinningSearchNeighbors(pt.x,pt.y,src_1d,ww,hh,m_Matrix22);
boolean safe=true;
if(m_Matrix22[0][0]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}
if(m_Matrix22[0][1]==FRONT)
{
if(m_Matrix22[0][0]!=FRONT && m_Matrix22[0][2]!=FRONT
&& m_Matrix22[1][0]!=FRONT && m_Matrix22[1][2]!=FRONT)
safe=false;
}
if(m_Matrix22[0][2]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[1][2]!=FRONT)
safe=false;
}
if(m_Matrix22[1][2]==FRONT)
{
if(m_Matrix22[0][1]!=FRONT && m_Matrix22[0][2]!=FRONT
&& m_Matrix22[2][2]!=FRONT
&& m_Matrix22[2][1]!=FRONT)
safe=false;
}
if(m_Matrix22[2][2]==FRONT)
{
if(m_Matrix22[1][2]!=FRONT && m_Matrix22[2][1]!=FRONT)
safe=false;
}
if(m_Matrix22[2][1]==FRONT)
{
if(m_Matrix22[1][2]!=FRONT && m_Matrix22[2][2]!=FRONT
&& m_Matrix22[2][0]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}
if(m_Matrix22[2][0]==FRONT)
{
if(m_Matrix22[2][1]!=FRONT && m_Matrix22[1][0]!=FRONT)
safe=false;
}
if(m_Matrix22[1][0]==FRONT)
{
if(m_Matrix22[0][0]!=FRONT && m_Matrix22[0][1]!=FRONT
&& m_Matrix22[2][0]!=FRONT && m_Matrix22[2][1]!=FRONT)
safe=false;
}
return safe;
}
/**
* matchPatterns()
* special method that checks if a specified pattern
* matches the actual position x,y of the array.
* this method is mainly used by the thinning() algorithm
*/
private static boolean matchPatterns(int[] src_1d,int ww,int hh,int x, int y)
{
if (x >ww - 1)
x = ww - 1;
if (x < 0)
x = 0;
if (y>= hh - 1)
y = hh - 1;
if (y < 0)
y = 0;
if(neighbour(src_1d, ww, hh, x, y, 0) == COLOR_FOREGROUND &&
neighbour(src_1d, ww, hh, x, y, 1) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 2) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 3) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 4) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 5) == COLOR_BACKGROUND &&
neighbour(src_1d, ww, hh, x, y, 6) == COLOR_FOREGROUND &&
neighbour(src_1d, ww, hh, x, y, 7) == COLOR_FOREGROUND)
{
return true;
}
else
{
return false;
}
} //match pattern closed
private static int neighbour(int[] src_1d,int ww,int hh,int x, int y, int j)
{
switch (j)
{
case 0:
x++;
break;
case 1:
x++;
y--;
break;
case 2:
y--;
break;
case 3:
x--;
y--;
break;
case 4:
x--;
break;
case 5:
x--;
y++;
break;
case 6:
y++;
break;
case 7:
x++;
y++;
break;
}
if (x >= ww - 1)
return -1;
// x = ww - 1;
if (x < 0)
return BKGRND;
// x = 0;
if (y >= hh - 1)
return -1;
// y = hh - 1;
if (y< 0)
return -1;
// y = 0;
/**
* the opening of this helper fell on a lost page; reconstructed: fill the
* 3x3 neighborhood matrix around (x,y) and count the FRONT neighbors
*/
private static boolean k_ThinningSearchNeighbors(int x,int y,int[] src_1d,int ww,int hh,int[][] m_Matrix22)
{
int BlackNeighbor = 0;
m_Matrix22[1][1]=getPixel(src_1d,ww,hh,x,y);
if((m_Matrix22[0][0]=getPixel(src_1d,ww,hh,x-1,y-1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[1][0]=getPixel(src_1d,ww,hh,x ,y-1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][0]=getPixel(src_1d,ww,hh,x+1,y-1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[0][1]=getPixel(src_1d,ww,hh,x-1,y)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][1]=getPixel(src_1d,ww,hh,x+1,y)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[0][2]=getPixel(src_1d,ww,hh,x-1,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[1][2]=getPixel(src_1d,ww,hh,x ,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((m_Matrix22[2][2]=getPixel(src_1d,ww,hh,x+1,y+1)) == FRONT)
{
++BlackNeighbor;
}
if((BlackNeighbor>=2) && (BlackNeighbor<=6))
return true;
else
return false;
}
}//end of Thin1
……………………………………………………………………………………………
8. Testing Phase & Result
The outputs of the first-stage neural networks and the average Euclidean distance are taken as the inputs of the second-stage classifier (the RBF neural network). The RBF neural network is then run forward. If its output is positive, the given signature belongs to the candidate person; if not, it does not.
For the performance testing of the system, the remaining 500 signatures in the master set are used (unknown both to the first- and to the second-stage classifiers). This set is called TS, and the system is tested under two different scenarios: the verification scenario and the recognition scenario.
8.1. The Verification Scenario
For each signature in TS, we queried the system 115 times, one time for each owner. The TS contained 500 signature images, which made 115 × 500 = 57,500 testing cases. The possible cases are:
Correct acceptance: the system was asked if the signature belonged to the correct owner and the response was positive.
False rejection: the system was asked if the signature belonged to the correct owner and the response was negative.
False acceptance: the system was asked if the signature belonged to a false owner and the response was positive.
Correct rejection: the system was asked if the signature belonged to a false owner and the response was negative.
8.2. The Recognition Scenario
For each signature in TS, we queried the system 115 times (one time for each owner). The system proposes as the signature owner the one whose back-propagation neural network gives the maximum output value.
9. Conclusion &Remarks
71
72
73