INTRODUCTION
1.2 OBJECTIVE
Intrusion detection (ID) is a type of security management system for
computers and networks. An ID system gathers and analyzes information from
various areas within a computer or a network to identify possible security
breaches, which include both intrusions (attacks from outside the organization)
and misuse (attacks from within the organization). ID uses vulnerability
assessment (sometimes referred to as scanning), which is a technology
developed to assess the security of a computer system or network.
Intrusion detection functions include:
The physical and data link layers are vulnerable to intrusions specific to
these communication layers. The mainstay of this project is to design a tool
which identifies the intruder in a system. It is efficient in not allowing the
intruder to work, it captures the intruder's activity in the system log, and it
also determines the hidden layer of the ANN. It is, therefore, essential to take
these considerations into account when designing and deploying an intrusion
detection system. This study addresses load considerations that are lacking in
the existing studies and outlines some directions for future research work.
Distributed systems are networked computers operating with the same
processors. The terms "concurrent computing", "parallel computing", and
"distributed computing" have a lot of overlap, and no clear distinction exists
between them. The same system may be characterized both as "parallel" and
"distributed"; the processors in a typical distributed system run concurrently in
parallel.[14]Parallel computing may be seen as a particular tightly-coupled form
of distributed computing,[15] and distributed computing may be seen as a
loosely-coupled form of parallel computing.[5] Nevertheless, it is possible to
roughly classify concurrent systems as "parallel" or "distributed" using the
following criteria:
from the output of the combination function. An artificial neural network is
composed of a set of neurons grouped in layers that are connected by synapses.
Its applications include, among others, compression, and:
4. Robotics, including directing manipulators and computer numerical control.
CHAPTER 2
SYSTEM ANALYSIS
2.1.1 DRAWBACKS
1. De-authentication attacks
side transmits the information to the management station; therefore it is
efficient in not allowing the intruder to work, and it captures the intruder's
activity in the system log. We also show an implementation of artificial neural
networks that determines the hidden layer of the artificial neural network.
CHAPTER 3
SYSTEM SPECIFICATION
3.1 HARDWARE SPECIFICATION
Processor Type : Intel Core 2 Duo
Ram : 2 GB RAM
Hard disk : 40 GB
Facility : Net
CHAPTER 4
PROJECT DESCRIPTION
the intruder to work, and it captures the intruder's activity in the system log.
We also show an implementation of artificial neural networks that determines the
hidden layer of the artificial neural network.
4.3.2 PERCEPTRON
There are three types of layers: input, hidden, and output layers. The
input layer is composed of input neurons that receive their values from external
devices such as data files or input signals. The hidden layer is an intermediary
layer containing neurons with the same combination and transfer functions.
Finally, the output layer provides the output of the computation to the external
applications.
The input layer is composed of input neurons that receive their values from
external devices. The hidden layer is an intermediary layer containing neurons.
The output layer provides the output of the computation to the external applications.
Training can be done by calling the Train function of the network. The input
to the train network function is a Training Data object. A TrainingData object
consists of two array lists - Inputs and Outputs. The number of elements in
TrainingData.Inputs should match exactly with the number of neurons in your
input layer. The number of elements in TrainingData.Outputs should match
exactly with the number of neurons in your output layer.
You can call the Run Network function of the network to run the
network after training it. The input parameter to the Run function is an array list
which consists of the inputs to the input layer. Again, the number of elements in
this array list should match the number of neurons in the input layer. The Run
function will return an array list which consists of the output values. The
number of elements in this array list will be equal to the number of elements in
the output layer.
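The TrainingData shape described above can be mirrored in a minimal, self-contained sketch. The class name and members below follow the description in the text; the real BrainNet types and namespaces may differ in detail:

```csharp
using System;
using System.Collections;

// Hypothetical mirror of the TrainingData object described above:
// two ArrayLists whose lengths must match the input and output layers.
class TrainingData
{
    public ArrayList Inputs = new ArrayList();   // one element per input neuron
    public ArrayList Outputs = new ArrayList();  // one element per output neuron
}

class TrainingDataDemo
{
    static void Main()
    {
        // One XOR sample for a 2-input, 1-output network
        var sample = new TrainingData();
        sample.Inputs.Add(1.0);
        sample.Inputs.Add(0.0);
        sample.Outputs.Add(1.0);

        // These counts are what must match the layer sizes before Train is called
        Console.WriteLine($"inputs={sample.Inputs.Count}, outputs={sample.Outputs.Count}");
    }
}
```

The same count-matching rule applies to the array list passed to the Run function.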
To test the digital neural gate, let us create a simple interface which can
create a gate, read the inputs to train the gate, and obtain the output to display it.
Here, we create a new object of our DigitalNeuralGate when the form loads.
Also, the user can create a new DigitalNeuralGate by clicking the 'Reset Gate'
button. In the beginning, the Truth Table provided in the training text boxes is
initialized to match the Truth Table of the XOR gate.
However, you can change the truth table by clicking the links, or you can
provide custom truth table by entering directly in the text boxes. Run the project
and see. To begin with, Reset the Gate by clicking 'Reset Gate', and just click
the 'Run Network' button and see the output. The output doesn't match the truth
table output. Now, we can train the network using the values in the truth table.
Click the 'Train 1000 Times' button and click the 'Run Network' button. You
can see the output is getting closer to the expected output - that is, the network
is learning. Do this a couple of times, and see the improvement in accuracy.
BrainNet offers built-in support for persistence of neural networks. For
example, in the above case, after training a Gate, we may need to save its state
to load it later. For this, we can use the NetworkSerializer class in the BrainNet
library. To demonstrate this feature, let us add two functions to our
DigitalNeuralGate class. The Save Network method within NetworkSerializer
class will save the network to a specified path, and the Load Network function
will load the network back. The steps of the learning algorithm are:
1. Set the weights wi to small random values, thus initializing the weights. We
take a firing threshold at y = 0.
2. Present, from our training samples D, the input and the desired output dj for
this training set.
These steps are repeated until the iteration error dj − y(t) is less than a
user-specified error threshold.
Squashing: y = 1 / (1 + e^0.5) = 0.3775
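The learning rule above can be illustrated with a minimal, self-contained perceptron trained on the AND truth table. This is an illustrative sketch, not the BrainNet implementation; the learning rate, epoch limit, and random seed are arbitrary choices:

```csharp
using System;

class PerceptronDemo
{
    static void Main()
    {
        // Training set: the AND gate truth table
        int[][] inputs = { new[]{0,0}, new[]{0,1}, new[]{1,0}, new[]{1,1} };
        int[] desired = { 0, 0, 0, 1 };

        var rand = new Random(1);
        double[] w = { rand.NextDouble() * 0.1, rand.NextDouble() * 0.1 };
        double bias = rand.NextDouble() * 0.1;
        double rate = 0.1;                      // learning rate

        for (int epoch = 0; epoch < 100; epoch++)
        {
            int errors = 0;
            for (int s = 0; s < inputs.Length; s++)
            {
                // Firing threshold at y = 0: fire (output 1) if weighted sum >= 0
                double sum = bias + w[0] * inputs[s][0] + w[1] * inputs[s][1];
                int y = sum >= 0 ? 1 : 0;
                int err = desired[s] - y;       // iteration error dj - y(t)
                if (err != 0)
                {
                    errors++;
                    // Perceptron learning rule: wi <- wi + rate * err * xi
                    w[0] += rate * err * inputs[s][0];
                    w[1] += rate * err * inputs[s][1];
                    bias += rate * err;
                }
            }
            if (errors == 0) break;             // stop once the error reaches zero
        }

        foreach (var x in inputs)
        {
            double sum = bias + w[0] * x[0] + w[1] * x[1];
            Console.WriteLine($"{x[0]} AND {x[1]} -> {(sum >= 0 ? 1 : 0)}");
        }
    }
}
```

Because AND is linearly separable, the loop converges and the final outputs reproduce the truth table.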
The MLP network implemented for the purpose of this project is
composed of 3 layers: one input, one hidden, and one output (Fig. 4.1). The input
layer consists of 150 neurons which receive pixel binary data from a 10x15
pixel matrix. The size of this matrix was decided taking into
consideration the average height and width of a character image that can be
mapped without introducing any significant pixel noise. The hidden layer
consists of 250 neurons, a number decided on the basis of optimal
results on a trial and error basis. The output layer is composed of 16 neurons
corresponding to the 16 bits of Unicode encoding.
In general, their most important use has been in the growing field of artificial
intelligence, although the multilayer perceptron does not have connections
with biological neural networks as the initial neural-based networks did.
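A single feedforward pass through the 150-250-16 structure described above can be sketched as follows. This is an illustrative sketch with random weights, not the trained network; the sigmoid transfer function and the absence of bias terms are simplifying assumptions:

```csharp
using System;

class MlpSketch
{
    // Layer sizes from the text: 150 input, 250 hidden, 16 output neurons
    const int Inputs = 150, Hidden = 250, Outputs = 16;

    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // Feedforward: each neuron applies the sigmoid transfer function to the
    // weighted sum (the combination function) of its incoming values.
    static double[] Forward(double[] input, double[,] w1, double[,] w2)
    {
        var hidden = new double[Hidden];
        for (int h = 0; h < Hidden; h++)
        {
            double sum = 0;
            for (int i = 0; i < Inputs; i++) sum += w1[i, h] * input[i];
            hidden[h] = Sigmoid(sum);
        }
        var output = new double[Outputs];
        for (int o = 0; o < Outputs; o++)
        {
            double sum = 0;
            for (int h = 0; h < Hidden; h++) sum += w2[h, o] * hidden[h];
            output[o] = Sigmoid(sum);
        }
        return output;
    }

    static void Main()
    {
        var rand = new Random(1);
        var w1 = new double[Inputs, Hidden];
        var w2 = new double[Hidden, Outputs];
        for (int i = 0; i < Inputs; i++)
            for (int h = 0; h < Hidden; h++) w1[i, h] = rand.NextDouble() - 0.5;
        for (int h = 0; h < Hidden; h++)
            for (int o = 0; o < Outputs; o++) w2[h, o] = rand.NextDouble() - 0.5;

        var pixels = new double[Inputs];        // a 10x15 binary pixel vector
        var result = Forward(pixels, w1, w2);
        Console.WriteLine($"Output neurons: {result.Length}");   // 16
    }
}
```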
Fig 4.3 MLP Network
Enumeration of character lines in a character image (page) is essential in
delimiting the bounds within which the detection can proceed. Thus detecting
the next character in an image does not necessarily involve scanning the whole
image all over again.
1. start at the first x and first y pixel of the image pixel(0,0), Set number of
lines to 0
2. scan up to the width of the image on the same y-component of the image
a. if a black pixel is detected register y as top of the first line
b. if not continue to the next pixel
c. if no black pixel found up to the width increment y and reset x to scan
the next horizontal line
3. start at the top of the line found and first x-component pixel(0,line_top)
4. scan up to the width of the image on the same y-component of the image
a. If no black pixel is detected register y-1 as bottom of the first line.
Increment number of lines
b. If a black pixel is detected increment y and reset x to scan the next
horizontal line
5. start below the bottom of the last line found and repeat steps 1-4 to detect
subsequent lines
6. If bottom of image (image height) is reached stop.
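The line-enumeration steps above can be sketched with a self-contained routine. This is an illustrative sketch; a boolean matrix stands in for the binary image, with true marking a black pixel:

```csharp
using System;

class LineDetector
{
    // Scan a binary image top-to-bottom and report the (top, bottom) row
    // bounds of each text line, following steps 1-6 above.
    static void FindLines(bool[,] img)
    {
        int height = img.GetLength(0), width = img.GetLength(1);
        int y = 0, lines = 0;
        while (y < height)
        {
            // Steps 1-2: advance until a row contains a black pixel (line top)
            while (y < height && !RowHasBlack(img, y, width)) y++;
            if (y >= height) break;            // step 6: bottom of image reached
            int top = y;
            // Steps 3-4: advance until a row with no black pixel (line bottom)
            while (y < height && RowHasBlack(img, y, width)) y++;
            int bottom = y - 1;
            lines++;
            Console.WriteLine($"Line {lines}: rows {top}..{bottom}");
            // Step 5: the loop continues below the bottom of this line
        }
    }

    static bool RowHasBlack(bool[,] img, int y, int width)
    {
        for (int x = 0; x < width; x++)
            if (img[y, x]) return true;
        return false;
    }

    static void Main()
    {
        var img = new bool[6, 4];
        img[1, 2] = true; img[2, 1] = true;    // first "line": rows 1-2
        img[4, 0] = true;                      // second "line": row 4
        FindLines(img);   // prints "Line 1: rows 1..2" and "Line 2: rows 4..4"
    }
}
```

Character-boundary detection in the next section follows the same pattern, scanning columns within a line's bounds instead of rows.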
b. if not continue to the next pixel
3. start at the top of the character found and first x-component,
pixel(0,character_top)
4. scan up to the line bottom on the same x-component
a. if black pixel found register x as the left of the symbol
b. if not continue to the next pixel
c. if no black pixels are found increment x and reset y to scan the next
vertical line
5. start at the left of the symbol found and top of the current line,
pixel(character_left, line_top)
6. scan up to the width of the image on the same x-component
a. if no black pixels are found register x-1 as right of the symbol
b. if a black pixel is found increment x and reset y to scan the next
vertical line
7. start at the bottom of the current line and left of the symbol,
pixel(character_left,line_bottom)
8. scan up to the right of the character on the same y-component
a. if a black pixel is found register y as the bottom of the character
b. if no black pixels are found decrement y and reset x to scan the next
horizontal line
From the procedure followed and the above figure it is obvious that the detected
character bound might not be the actual bound for the character in question.
Fig : 4.4 Line and Character boundary detection
This is an issue that arises with the height and bottom alignment irregularity
that exists with printed alphabetic symbols. Thus a line top does not necessarily
mean top of all characters and a line bottom might not mean bottom of all
characters as well. An optional confirmation algorithm implemented in the
project is:
A. start at the top of the current line and left of the character
B. scan up to the right of the character
1. if a black pixel is detected register y as the confirmed top
2. if not continue to the next pixel
3. if no black pixels are found increment y and reset x to scan the next
horizontal line
The next step is to map the symbol image into a corresponding two-
dimensional binary matrix. If all the pixels of the symbol are mapped into the
matrix, one would definitely be able to acquire all the distinguishing pixel
features of the symbol and minimize overlap with other symbols. However, this
strategy would imply maintaining and processing a very large matrix (up to
15,000 elements for a 100x150 pixel image). Since the height and width of
individual images vary, an adaptive sampling algorithm was implemented. The
algorithm is listed below:
1. Map the first (0,y) and last (width,y) pixel components directly to the
first (0,y) and last (20,y) elements of the matrix
2. Map the middle pixel component (width/2,y) to the 10th matrix element
1. Map the first x,(0) and last (x,height) pixel components directly to the
Fig : 4.6 Mapping symbol images onto a binary matrix
In order to be able to feed the matrix data to the network (which is of a single
dimension) the matrix must first be linearized to a single dimension. This is
accomplished with a simple routine with the following steps:
Hence the linear array is our input vector for the MLP Network. In a training
phase all such symbols from the trainer set image file are mapped into their own
linear array and as a whole constitute an input space. The trainer set would also
contain a file of character strings that directly correspond to the input symbol
images to serve as the desired output of the training.
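The linearization of the two-dimensional matrix into a single-dimension input vector can be sketched as a simple row-by-row flattening (an illustrative sketch; the actual routine's traversal order is not specified in the text):

```csharp
using System;

class MatrixLinearizer
{
    // Flatten a 2D binary matrix row by row into a 1D vector suitable for
    // feeding the single-dimension input layer of the network.
    static double[] Linearize(int[,] matrix)
    {
        int rows = matrix.GetLength(0), cols = matrix.GetLength(1);
        var vector = new double[rows * cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                vector[r * cols + c] = matrix[r, c];
        return vector;
    }

    static void Main()
    {
        int[,] m = { { 0, 1 }, { 1, 0 } };
        Console.WriteLine(string.Join(",", Linearize(m)));   // 0,1,1,0
    }
}
```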
Once the network has been initialized and the training input space prepared the
network is ready to be trained. Some issues that need to be addressed upon
training the network are:
1. How chaotic is the input space? A chaotic input varies randomly and in
extreme range without any predictable flow among its members.
2. How complex are the patterns for which we train the network? Complex
patterns are usually characterized by feature overlap and high data size.
3. What should be used for the values of:
a. Learning rate
b. Sigmoid slope
c. Weight bias
4. How many Iterations (Epochs) are needed to train the network for a given
number of input sets?
5. What error threshold value must be used to compare against in order to
prematurely stop iterations if the need arises?
Alphabetic optical symbols are one of the most chaotic input sets in pattern
recognition studies. This is due to the unpredictable nature of their pictorial
representation seen from the sequence of their order. For instance, the
consecutive Latin alphabetic characters A and B have little similarity in feature
when represented in their pictorial symbolic form. The figure below demonstrates
the point of chaotic and non-chaotic sequences with the Latin and a fictitious
character set:
Fig: 4.7 Example of chaotic and non-chaotic symbol sequences
capable of identifying linear and nonlinear correlation between the input and
output vectors. In the above process, we developed a simple application - a two
input gate that can be trained to perform the function of any digital gate - using
the BrainNet library. Now it is time to go for something more exciting and
powerful - a pattern/image detection program using the BrainNet library. We
provide a set of images as input to the network along with an ASCII character
that corresponds to each input - and we will examine whether the network can
predict a character.
2. Built-in support for advanced training using Training Queues in the BrainNet
library.
Before going to the code and explanation, let us see what the application really
does. You can find the application and source code in the attached zip file. Load
the solution in Microsoft Visual Studio.NET, set the startup project as
PatternDetector, and run the project.
To train the network, after adding the images to the training queue as explained
earlier, click 'Start Training' button. Train the network at least 1000 times, for a
below average accuracy. When we click the 'Start Training' button, training will
start. To detect a pattern once the training is completed, go to the 'Detect
using Network' pane. Load an image by clicking the browse button, and click the
'Detect This Image Now' button to detect the pattern. If we trained the network
a sufficient number of times, and if we provided enough samples, we will get the
correct output.
Click 'Browse' to load an image to the picture box (we can find some
images in the 'bin' folder of Pattern Detector - Also, you can create 20 x 20
monochrome images in Paintbrush if you want). Enter the ASCII character that
corresponds to the image - for example, if we are loading image of character
'A', enter 'A' in the text box. Click 'Add To Queue' button
4.4. DATA FLOW DIAGRAM
LEVEL 0:
[Level 0 data flow: User Login information -> Access Validation -> Camera
enabled -> Mail image to user]
LEVEL 2:
[Level 2 data flow: Start -> User authentication -> check whether username and
password are correct -> if not, Webcam activation; Perceptron -> Multilayer
Perceptron -> Stop]
4.5 E-R DIAGRAM
4.5.1. SYSTEM ARCHITECTURE
4.5.2. USE CASE DIAGRAM
[Use case diagram: the User interacts through the Web Cam and Web Server with
the Login Service, Binary Gate Service, Perceptron Service, Pattern Detection
Service, MLP Service, and Hybrid Service, with Web Info and Process elements]
4.5.3. SEQUENCE DIAGRAM:
4.5.4. CLASS DIAGRAM
CHAPTER 5
SYSTEM TESTING
5.1. TESTING
Testing is a process of executing a program with the intent of finding an
error. A good test case is one that has a high probability of finding an
as-yet-undiscovered error, and a successful test is one that uncovers such an
error. System testing is the stage of implementation which is aimed at ensuring
that the system works accurately and efficiently as expected before live
operation commences. It verifies that the whole set of programs hangs together.
System testing consists of several key activities and steps for program, string,
and system runs, and is important in adopting a successful new system. This is
the last chance to detect and correct errors before the system is installed for
user acceptance testing.
The software testing process commences once the program is created and
the documentation and related data structures are designed. Software testing is
essential for correcting errors; otherwise the program or the project cannot be
said to be complete. Software testing is a critical element of software quality
assurance and represents the ultimate review of specification, design, and
coding.
Unit testing is conducted to verify the functional performance of each
modular component of the software. Unit testing focuses on the smallest unit of
the software design, i.e., the module. The white-box testing techniques were
heavily employed for unit testing.
Integration testing verifies that the combined modules work together correctly
and consistently; it is specifically aimed at exposing the problems that arise
from the combination of components.
This testing is also called glass-box testing. In this testing, by knowing
the specific functions that a product has been designed to perform, tests can be
conducted that demonstrate each function is fully operational while at the same
time searching for errors in each function. It is a test case design method that
uses the control structure of the procedural design to derive test cases. Basis
path testing is a white-box testing technique. It includes:
1. Flow graph notation
2. Cyclomatic complexity
CHAPTER 6
SYSTEM IMPLEMENTATION
.NET (dot-net) is the name Microsoft gives to its general vision of the
future of computing, the view being of a world in which many applications run
in a distributed manner across the Internet. We can identify a number of
different motivations driving this vision.
The C# language is intended to be a simple, modern, general-purpose, object-
oriented programming language. The language, and implementations thereof,
should provide support for software engineering principles such as strong type
checking, array bounds checking, detection of attempts to use uninitialized
variables, and automatic garbage collection. Software robustness, durability,
and programmer productivity are important.
A mail server (also known as a mail transfer agent or MTA, a mail transport
agent, a mail router or an Internet mailer) is an application that receives
incoming e-mail from local users (people within the same domain) and remote
senders and forwards outgoing e-mail for delivery. A computer dedicated to
running such applications is also called a mail server. Microsoft Exchange,
qmail, Exim and sendmail are among the more common mail server programs.
An email client or email program allows a user to send and receive email by
communicating with mail servers. There are many types of email clients with
differing features, but they all handle email messages and mail servers in the
same basic way.
The mail server works in conjunction with other programs to make up
what is sometimes referred to as a messaging system. A messaging system
includes all the applications necessary to keep e-mail moving as it should.
When you send an e-mail message, your e-mail program, such as Outlook or
Eudora, forwards the message to your mail server, which in turn forwards it
either to another mail server or to a holding area on the same server called
a message store to be forwarded later. As a rule, the system uses SMTP (Simple
Mail Transfer Protocol) or ESMTP (extended SMTP) for sending e-mail, and
either POP3 (Post Office Protocol 3) or IMAP (Internet Message Access
Protocol) for receiving e-mail.
CHAPTER 7
7.1 CONCLUSION
In the future, more than one camera can be used to identify the intruder more
accurately. The ANN can be applied in the biomedical environment to determine
brain diseases, and ANN applications can be used in military sensor-network
scenarios. Feature selection was proven to have a significant impact on the
performance of the classifiers.
CHAPTER 8
APPENDIX
8.1. SOURCE CODE
8.1.1. USER AUTHENTICATION
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net.Mail;
namespace Intrusion_Detection
{
public partial class FrmLogin : Form
{
public MailMessage MlMsg = new MailMessage();
public System.Net.Mail.SmtpClient SMTPclnt = new
System.Net.Mail.SmtpClient("smtp.gmail.com", 587);
public string myEmailAddress;
public string[] toEmailAddress= new string[1];
public string myPassword;
public string filenm;
public FrmLogin()
{
InitializeComponent();
}
private void sendmail()
{
try
{
SMTPclnt.UseDefaultCredentials = false;
SMTPclnt.Credentials = new System.Net.NetworkCredential(myEmailAddress, myPassword);
SMTPclnt.EnableSsl = true;
MlMsg.From = new MailAddress(myEmailAddress);
MlMsg.Subject = "Intrusion Detected";
MlMsg.Body = "An intrusion was Detected at " + DateTime.Now + ", Tried to Authenticate with User name: '" + TBusrnm.Text + "'";
toEmailAddress[0] = "dhana.it90@gmail.com";
}
catch (Exception ex)
{
// Show the error message to the user
MessageBox.Show(ex.Message.ToString());
}
}
private void Blogin_Click(object sender, EventArgs e)
{
if (TBusrnm.Text != "")
{
if (TBpwd.Text != "")
{
if (TBusrnm.Text == "admin" && TBpwd.Text == "admin")
{
MessageBox.Show("You have successfully logged in ");
Form frm = (Form)this.MdiParent;
MenuStrip ms = (MenuStrip)frm.Controls["menuStrip"];
ToolStripMenuItem tsmLO =
(ToolStripMenuItem)ms.Items["logOutToolStripMenuItem"];
ToolStripMenuItem tsmLI = (ToolStripMenuItem)ms.Items["fileMenu"];
ToolStripMenuItem tsmperc =
(ToolStripMenuItem)ms.Items["perceptronToolStripMenuItem"];
ToolStripMenuItem tsmmulti =
(ToolStripMenuItem)ms.Items["multiLayerPerceptronToolStripMenuItem"];
ToolStripMenuItem tsmhybrid =
(ToolStripMenuItem)ms.Items["hybridToolStripMenuItem"];
tsmLO.Name = "Log&Out";
tsmLI.Visible = false;
tsmperc.Enabled = true;
tsmmulti.Enabled = true;
tsmhybrid.Enabled = true;
this.Close();
}
else
{
TBpwd.Text = "";
TBpwd.Focus();
MessageBox.Show("Enter correct Password");
if (Intrusion_Detection.Globvar.GlobalVar == "")
Intrusion_Detection.Globvar.GlobalVar = "0";
int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;
Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
{
sendmail();
}
}
}
else
{
MessageBox.Show("Enter Password");
TBpwd.Focus();
if (Intrusion_Detection.Globvar.GlobalVar == "")
Intrusion_Detection.Globvar.GlobalVar = "0";
int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;
Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
{
sendmail();
}
}
}
else
{
MessageBox.Show("Enter Username");
TBusrnm.Focus();
if (Intrusion_Detection.Globvar.GlobalVar == "")
Intrusion_Detection.Globvar.GlobalVar = "0";
int tempi = int.Parse(Intrusion_Detection.Globvar.GlobalVar) + 1;
Intrusion_Detection.Globvar.GlobalVar = Convert.ToString(tempi);
if (int.Parse(Intrusion_Detection.Globvar.GlobalVar) > 3)
{
sendmail();
}
}}
EXPLANATION
This code checks whether the user is an authenticated person or not. If the
user is not an authenticated person, the code enables the camera and, using the
sendmail() function, captures the intruder's image and mails it to the owner's
mail id with the date and time at which the intrusion occurred.
8.1.2. PERCEPTRON
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.VisualBasic;
using System.Diagnostics;
namespace Intrusion_Detection
{
public partial class frmGate : Form
{
private DigitalNeuralGate gate;
public frmGate()
{
InitializeComponent();
}
//Form overrides dispose to clean up the component list.
private void MsgBox(string p)
{
throw new NotImplementedException();
}
//Run the network to get the output, and show it in the text boxes
private void cmdRun_Click(object sender, EventArgs e)
{
double t1, t2, t3, t4;
try
{
//rout1, rinp11, rinp12 etc are textbox names
t1= gate.Run(Convert.ToInt64(this.rinp11.Text),
Convert.ToInt64(this.rinp12.Text));
this.rout1.Text = t1.ToString();
t2 = gate.Run(Convert.ToInt64(this.rinp21.Text),
Convert.ToInt64(this.rinp22.Text));
this.rout2.Text = t2.ToString();
t3 = gate.Run(Convert.ToInt64(this.rinp31.Text),
Convert.ToInt64(this.rinp32.Text));
this.rout3.Text = t3.ToString();
t4 = gate.Run(Convert.ToInt64(this.rinp41.Text),
Convert.ToInt64(this.rinp42.Text));
this.rout4.Text = t4.ToString();
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}}
//Train only once
private void cmdTrainOnce_Click(object sender, EventArgs e)
{
try
{
TrainOnce();
}
catch (Exception ex)
{
MsgBox("Error. Check whether the input is valid - " + ex.Message);
}
}
private void cmdSave_Click(object sender, EventArgs e)
{
gate.Save("c:\\test.xml");
}
EXPLANATION
BrainNet offers built-in support for persistence of neural networks. For
example, in the above case, after training a Gate, we may need to save its state
to load it later. For this, we can use the NetworkSerializer class in the BrainNet
library. To demonstrate this feature, let us add two functions to our
DigitalNeuralGate class. The Save Network method within NetworkSerializer
class will save the network to a specified path, and the Load Network function
will load the network back.
}
}
}
private void Form1_Paint(object sender,
System.Windows.Forms.PaintEventArgs e)
{
}
public int confirm_top()
{
int local_top = top;
for (int j = top; j <= bottom; j++)
for (int i = left; i <= right; i++)
if (Convert.ToString(input_image.GetPixel(i, j)) == "Color
[A=255, R=0, G=0, B=0]")
{
local_top = j;
return local_top;
}
return local_top;
}
public int confirm_bottom()
{
int local_bottom = bottom;
for (int j = bottom; j >= 0; j--)
for (int i = left; i <= right; i++)
if (Convert.ToString(input_image.GetPixel(i, j)) != "Color [A=255,
R=255, G=255, B=255]")
{
local_bottom = j;
return local_bottom;
}
return local_bottom;
}
reset_controls();
//label27.Text = "Analyzing Image. Please Wait . . .";
//label27.Update();
form_network();
initialize_weights();
form_input_set();
form_desired_output_set();
right = 1;
}
void startProgress()
{
}
public void form_network()
{
layers[0] = number_of_input_nodes;
layers[number_of_layers - 1] = number_of_output_nodes;
for (int i = 1; i < number_of_layers - 1; i++)
layers[i] = maximum_layers;
}
public void initialize_weights()
{
// body of the weight initialization routine is omitted in this listing
}
public void ShowProgress(long CurrentRound, long MaxRound, ref bool
cancel)
{
this.pbTrain.Maximum =System.Convert.ToInt32(MaxRound);
this.pbTrain.Value = System.Convert.ToInt32(CurrentRound);
input = imgHelper.ArrayListFromImage(this.picImgDetect.Image);
BrainNet.NeuralFramework.PatternProcessingHelper patternHelper =
new BrainNet.NeuralFramework.PatternProcessingHelper();
string character = Chr(patternHelper.NumberFromArraylist(output));
string bitpattern = patternHelper.PatternFromArraylist(output);
try
{
if (!string.IsNullOrEmpty(dlg.FileName))
{
ser.SaveNetwork(dlg.FileName, network);
MsgBox("Saved to file " + dlg.FileName);
}
}
catch (Exception ex)
{
MsgBox("Error: Invalid File? " + ex.Message);
}
try
{
if (!string.IsNullOrEmpty(dlg.FileName))
{
ser.LoadNetwork(dlg.FileName, ref network);
MsgBox("File " + dlg.FileName + " loaded");
}
}
catch (Exception ex)
{
MsgBox("Error: Invalid File? " + ex.Message);
}
EXPLANATION
In the above process, we developed a simple application - a two input gate that
can be trained to perform the function of any digital gate - using the BrainNet
library. Now it is time to go for something more exciting and powerful - a
pattern/image detection program using BrainNet library. We provide a set of
images as input to the network along with an ASCII character that corresponds
to each input - and we will examine whether the network can predict a character
when an arbitrary image is given.
8.2 SCREEN SHOTS
USER AUTHENTICATION
Fig. 4.12 Tool identifies the intruder and enables the camera
Fig. 4.13 Intruder image was captured and mailed to owner's mail id
PERCEPTRON
Fig. 4.14 Module 2 - Perceptron. Use the Run Network function to find the hidden layer
XML OUTPUT FILE:
MULTILAYER PERCEPTRON:
HYBRID MULTILAYER PERCEPTRON:
XML OUTPUT FILE:
Fig. 4.19 Output of hybrid multilayer perceptron shows the hidden layer
CHAPTER 9
REFERENCES
[1] A. Hofmann, T. Horeis, and B. Sick, "Feature Selection for Intrusion
Detection: An Evolutionary Wrapper Approach," Proc. IEEE Int'l Joint Conf.
Neural Networks, July 2004.
[2] A.H. Sung and S. Mukkamala, “Identifying Important Features for Intrusion
Detection Using Support Vector Machines and Neural Networks,” Proc.
Symp. Applications and the Internet (SAINT ’03), Jan. 2003.
[3] Mouhcine Guennoun, Aboubakr Lbekkouri, and Khalil El-Khatib,