
Chapter 6

SYSTEM SECURITY & CASE STUDIES


Intruders, Viruses, Worms, firewall design, antivirus techniques, digital Immune systems,
Certificate based & Biometric authentication, Secure Electronic Payment System.
1. Introduction to firewall
Firewalls can be an effective means of protecting a local system or network of systems from
network based security threats while at the same time affording access to the outside world via
wide area networks and the Internet. A firewall forms a barrier through which the traffic going in
each direction must pass. A firewall security policy dictates which traffic is authorized to pass in
each direction. A firewall may be designed to operate as a filter at the level of IP packets, or may
operate at a higher protocol layer.
1.1 Firewall Design Principles
Information systems in corporations, government agencies, and other organizations have
undergone a steady evolution:
Centralized data processing system, with a central mainframe supporting a number of directly
connected terminals
Local area networks (LANs) interconnecting PCs and terminals to each other and the mainframe
Premises network, consisting of a number of LANs, interconnecting PCs, servers, and perhaps a
mainframe or two
Enterprise-wide network, consisting of multiple, geographically distributed premises networks
interconnected by a private wide area network (WAN)
Internet connectivity, in which the various premises networks all hook into the Internet and may
or may not also be connected by a private WAN
Internet connectivity is no longer optional for organizations. The information and services
available are essential to the organization. Moreover, individual users within the organization want
and need Internet access, and if this is not provided via their LAN, they will use dial-up capability
from their PC to an Internet service provider (ISP). However, while Internet access provides
benefits to the organization, it enables the outside world to reach and interact with local network
assets. This creates a threat to the organization. While it is possible to equip each workstation and
server on the premises network with strong security features, such as intrusion protection, this is
not a practical approach. Consider a network with hundreds or even thousands of systems, running
a mix of various versions of UNIX, plus Windows. When a security flaw is discovered, each
potentially affected system must be upgraded to fix that flaw. The alternative, increasingly
accepted, is the firewall. The firewall is inserted between the premises network and the Internet to
establish a controlled link and to erect an outer security wall or perimeter. The aim of this
perimeter is to protect the premises network from Internet-based attacks and to provide a single
choke point where security and audit can be imposed. The firewall may be a single computer
system or a set of two or more systems that cooperate to perform the firewall function.
1.2 Firewall Characteristics
1. All traffic from inside to outside, and vice versa, must pass through the firewall. This is achieved
by physically blocking all access to the local network except via the firewall.
2. Only authorized traffic, as defined by the local security policy, will be allowed to pass.
3. The firewall itself is immune to penetration. This implies the use of a trusted system with a
secure operating system.

The following are four general techniques that firewalls use to control access and
enforce the site's security policy. Originally, firewalls focused primarily on service control, but they
have since evolved to provide all four:
Service control: Determines the types of Internet services that can be accessed, inbound or
outbound. The firewall may filter traffic on the basis of IP address and TCP port number; may
provide proxy software that receives and interprets each service request before passing it on; or
may host the server software itself, such as a Web or mail service.
Direction control: Determines the direction in which particular service requests may be initiated
and allowed to flow through the firewall.
User control: Controls access to a service according to which user is attempting to access it. This
feature is typically applied to users inside the firewall perimeter (local users). It may also be
applied to incoming traffic from external users; the latter requires some form of secure
authentication technology, such as is provided in IPSec.
Behavior control: Controls how particular services are used. For example, the firewall may filter
e-mail to eliminate spam, or it may enable external access to only a portion of the information on a
local Web server.
1.2.1 Firewall capabilities:
1. A firewall defines a single choke point that keeps unauthorized users out of the protected
network, prohibits potentially vulnerable services from entering or leaving the network, and
provides protection from various kinds of IP spoofing and routing attacks. The use of a single choke
point simplifies security management because security capabilities are consolidated on a single
system or set of systems.
2. A firewall provides a location for monitoring security-related events. Audits and alarms can be
implemented on the firewall system.
3. A firewall is a convenient platform for several Internet functions that are not security related.
These include a network address translator, which maps local addresses to Internet addresses, and
a network management function that audits or logs Internet usage.
4. A firewall can serve as the platform for IPSec. Using the tunnel mode capability, the firewall can
be used to implement virtual private networks.
1.2.2 Firewall limitations:
1. The firewall cannot protect against attacks that bypass the firewall. Internal systems may have
dial-out capability to connect to an ISP. An internal LAN may support a modem pool that provides
dial-in capability for traveling employees and telecommuters.
2. The firewall does not protect against internal threats, such as a disgruntled employee or an
employee who unwittingly cooperates with an external attacker.
3. The firewall cannot protect against the transfer of virus-infected programs or files. Because of the
variety of operating systems and applications supported inside the perimeter, it would be
impractical and perhaps impossible for the firewall to scan all incoming files, e-mail, and messages
for viruses.
1.3 Types of Firewalls
Figure 1 illustrates the three common types of firewalls: packet filters, application-level gateways,
and circuit-level gateways.
1. Packet-Filtering Router: A packet-filtering router applies a set of rules to each incoming and
outgoing IP packet and then forwards or discards the packet. The router is typically configured to
filter packets going in both directions (from and to the internal network). Filtering rules are based
on information contained in a network packet:
Source IP address: The IP address of the system that originated the IP packet (e.g., 192.178.1.1)

Destination IP address: The IP address of the system the IP packet is trying to reach (e.g.,
192.168.1.2)
Source and destination transport-level address: The transport level (e.g., TCP or UDP) port
number, which defines applications such as SNMP or TELNET
IP protocol field: Defines the transport protocol
Interface: For a router with three or more ports, which interface of the router the packet came
from or which interface of the router the packet is destined for
The packet filter is typically set up as a list of rules based on matches to fields in the IP or
TCP header. If there is a match to one of the rules, that rule is invoked to determine whether to
forward or discard the packet. If there is no match to any rule, then a default action is taken. Two
default policies are possible:
Default = discard: That which is not expressly permitted is prohibited.
Default = forward: That which is not expressly prohibited is permitted.

The default discard policy is more conservative. Initially, everything is blocked, and services
must be added on a case-by-case basis. This policy is more visible to users, who are more likely to
see the firewall as a hindrance. The default forward policy increases ease of use for end users but
provides reduced security; the security administrator must, in essence, react to each new security
threat as it becomes known.
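The rule-list behaviour described above can be sketched in a few lines of Python. This is a simplified illustration only: the rule set, packet representation and field names are hypothetical, and a real packet filter operates on raw IP/TCP headers rather than dictionaries.

```python
import ipaddress

# Hypothetical rule list: (src, dst, protocol, dst_port, action).
# "*" matches anything; addresses may be given as CIDR networks.
RULES = [
    ("*", "192.168.1.2", "TCP", 25, "forward"),     # inbound SMTP to mail host
    ("192.168.0.0/16", "*", "TCP", 80, "forward"),  # outbound HTTP from inside
]

def field_matches(rule_value, packet_value):
    if rule_value == "*":
        return True
    if "/" in str(rule_value):  # CIDR network match
        return ipaddress.ip_address(packet_value) in ipaddress.ip_network(rule_value)
    return rule_value == packet_value

def filter_packet(packet, rules, default="discard"):
    """Apply the first matching rule; otherwise fall back to the default policy
    (default = discard: that which is not expressly permitted is prohibited)."""
    for src, dst, proto, port, action in rules:
        if (field_matches(src, packet["src"]) and field_matches(dst, packet["dst"])
                and field_matches(proto, packet["proto"])
                and field_matches(port, packet["dst_port"])):
            return action
    return default

pkt = {"src": "10.0.0.5", "dst": "192.168.1.2", "proto": "TCP", "dst_port": 25}
print(filter_packet(pkt, RULES))  # forward (matches the first rule)
```

Passing `default="forward"` to the same function yields the more permissive default-forward policy described above.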

2. Application-Level Gateway: An application-level gateway, also called a proxy server, acts as a
relay of application-level traffic (Figure 1b). The user contacts the gateway using a TCP/IP
application, such as Telnet or FTP, and the gateway asks the user for the name of the remote host to
be accessed. When the user responds and provides a valid user ID and authentication information,
the gateway contacts the application on the remote host and relays TCP segments containing the
application data between the two endpoints. If the gateway does not implement the proxy code for
a specific application, the service is not supported and cannot be forwarded across the firewall.
Further, the gateway can be configured to support only specific features of an application that the
network administrator considers acceptable while denying all other features.
Application-level gateways tend to be more secure than packet filters. Rather than trying to
deal with the numerous possible combinations that are to be allowed and forbidden at the TCP and
IP level, the application-level gateway need only scrutinize a few allowable applications. In addition,
it is easy to log and audit all incoming traffic at the application level.
A prime disadvantage of this type of gateway is the additional processing overhead on each
connection. In effect, there are two spliced connections between the end users, with the gateway at
the splice point, and the gateway must examine and forward all traffic in both directions.
3. Circuit-Level Gateway: A third type of firewall is the circuit-level gateway (Figure 1c). This can
be a stand-alone system or it can be a specialized function performed by an application-level
gateway for certain applications. A circuit-level gateway does not permit an end-to-end TCP
connection; rather, the gateway sets up two TCP connections, one between itself and a TCP user on
an inner host and one between itself and a TCP user on an outside host. Once the two connections
are established, the gateway typically relays TCP segments from one connection to the other
without examining the contents. The security function consists of determining which connections
will be allowed.

A typical use of circuit-level gateways is a situation in which the system administrator trusts
the internal users. The gateway can be configured to support application-level or proxy service on
inbound connections and circuit-level functions for outbound connections. In this configuration, the
gateway can incur the processing overhead of examining incoming application data for forbidden
functions but does not incur that overhead on outgoing data.
1.4 Firewall Configurations
In addition to the use of a simple configuration consisting of a single system, such as a single
packet-filtering router or a single gateway, more complex configurations are possible and indeed
more common. Figure 2 illustrates three common firewall configurations.

1. Screened host firewall, single-homed bastion configuration: In this, the firewall consists of
two systems: a packet-filtering router and a bastion host. Typically, the router is configured so that
1. For traffic from the Internet, only IP packets destined for the bastion host are allowed in.
2. For traffic from the internal network, only IP packets from the bastion host are allowed out.
The bastion host performs authentication and proxy functions. This configuration has
greater security than simply a packet-filtering router or an application-level gateway alone, for two
reasons. First, this configuration implements both packet-level and application-level filtering,
allowing for considerable flexibility in defining security policy. Second, an intruder must generally
penetrate two separate systems before the security of the internal network is compromised.
This configuration also affords flexibility in providing direct Internet access. For example,
the internal network may include a public information server, such as a Web server, for which a
high level of security is not required. In that case, the router can be configured to allow direct traffic
between the information server and the Internet. In the single-homed configuration just described,
if the packet-filtering router is completely compromised, traffic could flow directly through the
router between the Internet and other hosts on the private network.

2. The screened host firewall, dual-homed bastion configuration: This configuration physically
prevents such a security breach (Figure 2b). The advantages of dual layers of security that were
present in the previous configuration are present here as well. Again, an information server or
other hosts can be allowed direct communication with the router if this is in accord with the
security policy.
3. The screened subnet firewall configuration: This type of configuration shown in Figure 2c is
the most secure of those we have considered. In this configuration, two packet-filtering routers are
used, one between the bastion host and the Internet and one between the bastion host and the
internal network. This configuration creates an isolated subnetwork, which may consist of simply
the bastion host but may also include one or more information servers and modems for dial-in
capability. Typically, both the Internet and the internal network have access to hosts on the
screened subnet, but traffic across the screened subnet is blocked.
This configuration offers several advantages:
There are now three levels of defense to thwart intruders.
The outside router advertises only the existence of the screened subnet to the Internet; therefore,
the internal network is invisible to the Internet.
Similarly, the inside router advertises only the existence of the screened subnet to the internal
network; therefore, the systems on the inside network cannot construct direct routes to the
Internet.

2. Biometric Authentication
2.1 Introduction to Biometric Authentication System (BAS):
Biometric authentication is one of the most exciting technical improvements of recent
history and looks set to change the way in which the majority of individuals live. Security is now
becoming a more important issue for business, and the need for authentication has therefore
become more important than ever. The use of biometric systems for personal authentication is a
response to the rising issue of authentication and security. The most widely used method of
biometric authentication is fingerprint recognition.
The term biometrics comes from the Greek words bios, meaning life, and metrics, meaning
measure. Biometrics can be defined as measurable physiological and/or behavioural characteristics
that can be utilized to verify the identity of an individual, and include fingerprint verification, hand
geometry, retinal scanning, iris scanning, facial recognition and signature verification. Biometric
authentication is considered the automatic identification, or identity verification, of an individual
using either a biological feature they possess (a physiological characteristic, like a fingerprint) or
something they do (a behavioural characteristic, like a signature). In practice, the process of
identification and authentication is the ability to verify and confirm an identity.
Any human physiological or behavioural characteristic that is universal, unique, permanent,
and collectable could be used as a biometric. From a practical point of view, a biometric access
control system should perform accurately, be acceptable to society, be robust and be
tamper-resistant. Authentication (or verification) is closely related to recognition (or identification).
However, the evaluation criteria for identity recognition are different from those used in
authentication systems. The performance of identity recognition systems is quantified in terms of
the cumulative match score, i.e., the percentage of correctly identified subjects within the N best
matches versus N. Recall-precision curves could also be used to evaluate identification algorithms.
The performance of identity authentication systems is measured in terms of the false rejection rate
(FRR) achieved at a fixed false acceptance rate (FAR) or vice versa. By varying FAR, the receiver
operating characteristic (ROC) curve is obtained. A scalar figure of merit used to judge the
performance of an authentication algorithm, is the so-called equal error rate (EER), corresponding
to the ROC operating point having FAR=FRR.
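The interaction of FAR, FRR and the decision threshold can be illustrated with a small sketch. The match scores below are synthetic values chosen only for illustration; real systems estimate these rates from thousands of genuine and impostor comparisons.

```python
# Synthetic match scores (higher = better match); accept if score >= threshold.
genuine_scores = [0.91, 0.85, 0.78, 0.66, 0.95]   # same-person comparisons
impostor_scores = [0.12, 0.35, 0.52, 0.71, 0.08]  # different-person comparisons

def far_frr(threshold, genuine, impostor):
    # FAR: fraction of impostor attempts wrongly accepted.
    far = sum(s >= threshold for s in impostor) / len(impostor)
    # FRR: fraction of genuine attempts wrongly rejected.
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# Sweeping the threshold traces out the ROC curve; the operating point
# where FAR equals FRR is the equal error rate (EER).
for t in (0.3, 0.6, 0.8):
    far, frr = far_frr(t, genuine_scores, impostor_scores)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```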
A number of biometrics has been evaluated for identification and authentication
applications. For example, voice, fingerprints, face, iris, infrared facial and hand vein thermograms,
ear, retinal scans, hand and finger geometry are based on a physical characteristic, whereas
signature and acoustic emissions emitted during a signature scribble, gait, keystroke dynamics are
related to a behavioural characteristic. Authentication is the act of establishing or confirming
something (or someone) as authentic, that is, that claims made by or about the thing are true.
The following table (Table 1) outlines a comparison between passwords, tokens and biometrics.
Table 1: Passwords vs. Tokens vs. Biometrics
Tokens
- Can be forged and used without the knowledge of the original holder. For example, a
forger can "steal an identity" and create a fake ID document using another person's
information.
- Can be lost, stolen or given to someone else.

Passwords
- Can be obtained or "cracked" using a variety of techniques, such as using
programs/tools to crack the password.
- Can be disclosed. If the password is disclosed to a person, they will be able to gain
access to information for which they are not authorized.
- Can be forgotten, which places a further burden upon an organization's
administration.

Biometrics
- Cannot be forged.
- Can be destroyed, and a biometric characteristic's ability to be read by a system can
be reduced. An individual's fingerprints, for example, can be affected by cuts and
bruises and can even be destroyed by excessive rubbing on an abrasive surface. Also,
the accuracy of biometrics depends mainly on the software that processes them.

Biometric characteristics can be separated into two main categories (Figure 3):
1. Physiological characteristics are related to the shape of the body. The trait that has
been used the longest, for over one hundred years, is the fingerprint; other examples are face
recognition, hand geometry and iris recognition.
2. Behavioural characteristics are related to the behaviour of a person. The first
characteristic to be used that is still widely used today is the signature.

Figure 3: physical and behavioural characteristics used by biometrics


A simple biometric system consists of four basic components:
1) Sensor module which acquires the biometric data;
2) Feature extraction module where the acquired data is processed to extract feature vectors;
3) Matching module where feature vectors are compared against those in the template;
4) Decision-making module in which the user's identity is established or a claimed identity is
accepted or rejected.
Any human physiological or behavioral trait can serve as a biometric characteristic as long
as it satisfies the following requirements:

1) Universality. Everyone should have it;


2) Distinctiveness. No two should be the same;
3) Permanence. It should be invariant over a given period of time;
4) Collectability. It should be measurable quantitatively.
In real-life applications, three additional factors should also be considered: performance
(accuracy, speed, resource requirements), acceptability (it must be acceptable and harmless to
users), and circumvention (it should be robust against various fraudulent methods).

Figure 4: Functional schematic of biometric authentication system (BAS)


2.2 Functional model of biometric authentication system (BAS):
Figure 4 depicts functional schematic of a biometric authentication system (BAS). Biometric
samples are collected using an appropriate sensor. The samples are then processed to correct the
deterministic variations like translational and rotational shifts due to the interaction of a sensor
with the external world. This leads to a set of discriminatory attributes that are invariant to
irrelevant transformation of the input at the sensor. In the case of palmprint biometrics, for
example, such variation may occur in the line attributes of an acquired palm image. This can be
corrected by matrix transformation. These variations can be largely avoided by using pegs to fix the
position of a palm with respect to the sensor. However, the poor quality of the image due to sensor
limitations may have to be enhanced by preprocessing. Following this, segmentation/identification
is performed to extract/recognize the desired attributes from the biometric samples. In the same
example, the image of a palm is segmented to obtain line attributes. Measurements performed on
these attributes give features depending upon the representation method. The features so obtained
are used to form a biometric template. The biometric template is stored in one of the many
encrypted forms so as to avoid spoofing.
Once the database is ready, a query template needs to be authenticated using a matcher so
as to determine its similarity with templates in the database. As an example, the matcher function
or similarity measure for palmprint biometrics is Euclidean distance which will be different for
different templates in the database. The output of the matcher is a matching score which gives the
degree of similarity of the query template with various templates. This is used to arrive at a
decision using a classifier.
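The matching and decision steps above can be sketched as follows. This is a minimal illustration assuming fixed-length feature vectors; the templates, query vector and threshold are hypothetical, and only the Euclidean-distance measure itself is the one named in the text for palmprints.

```python
import math

# Hypothetical enrolled templates (fixed-length feature vectors).
database = {
    "alice": [0.12, 0.80, 0.45, 0.33],
    "bob":   [0.90, 0.10, 0.70, 0.25],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(query, claimed_id, db, threshold=0.2):
    """Verify a claimed identity: accept if the distance between the query
    template and the stored template is below the threshold."""
    return euclidean(query, db[claimed_id]) <= threshold

query = [0.14, 0.78, 0.46, 0.31]               # features from a fresh sample
print(authenticate(query, "alice", database))  # True  (small distance)
print(authenticate(query, "bob", database))    # False (large distance)
```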
2.3 OVERVIEW OF COMMONLY USED BIOMETRICS
1. Finger Print Technology
A fingerprint is an impression of the friction ridges of all or any part of the finger. A friction
ridge is a raised portion of the epidermis on the palmar (palm), digit (fingers and toes) or plantar
(sole) skin, consisting of one or more connected ridge units of friction ridge skin. These ridges are
sometimes known as "dermal ridges". The traditional method uses ink to transfer the fingerprint
onto a piece of paper, which is then scanned using a conventional scanner. In the modern approach,
live fingerprint readers are used; these are based on optical, thermal, silicon or ultrasonic
principles. Fingerprint recognition is the oldest of all the biometric techniques.
The optical fingerprint reader is the most common at present. It is based on reflection
changes at the spots where the finger's papillary lines touch the reader surface. All optical
fingerprint readers comprise a source of light, a light sensor and a special reflection surface that
changes its reflection according to the pressure. Some readers are fitted with processing and
memory chips as well. The size of an optical fingerprint reader is around 10 × 10 × 15; it is difficult
to miniaturize it much further, as the reader has to contain the source of light, the reflection
surface and the light sensor.
The silicon fingerprint sensor is based on the capacitance of the finger. A DC-capacitive
fingerprint sensor consists of rectangular arrays of capacitors on a silicon chip. One plate of each
capacitor is the finger itself; the other plate is a tiny area of metallization on the chip's surface.
When the finger is placed against the surface of the chip, the ridges of the fingerprint are close to
the nearby pixels and have high capacitance to them, while the valleys are more distant from the
pixels nearest them and therefore have lower capacitance.
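The ridge/valley contrast follows directly from the parallel-plate capacitance relation C = εA/d: the closer the skin is to a pixel, the higher the capacitance. A small illustrative calculation (the pixel area and distances are assumed values, not sensor specifications):

```python
EPS0 = 8.854e-12           # F/m, permittivity of free space
PIXEL_AREA = (50e-6) ** 2  # assumed 50 um x 50 um sensor pixel

def capacitance(distance_m):
    """Parallel-plate approximation: C = eps0 * A / d."""
    return EPS0 * PIXEL_AREA / distance_m

ridge_c = capacitance(20e-6)   # ridge: skin close to the pixel
valley_c = capacitance(80e-6)  # valley: skin further from the pixel
print(ridge_c > valley_c)      # True: ridges read as higher capacitance
```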
The ultrasonic fingerprint reader is the newest and least common. It uses ultrasound to scan
the finger surface: the user places the finger on a piece of glass, and the ultrasonic sensor moves
and reads the whole fingerprint. This process takes one or two seconds.
Fingerprint matching techniques can be placed into two categories: minutiae-based and
correlation-based. Minutiae-based techniques find the minutiae points first and then map their
relative placement on the finger. Correlation-based techniques require the precise location of a
registration point and are affected by image translation and rotation.
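A toy sketch of the minutiae-based idea: count how many minutiae points in one print have a counterpart within a small tolerance in the other. The coordinates are made-up (x, y) positions, and the alignment step that real matchers perform to compensate for translation and rotation is omitted for brevity.

```python
def matched_minutiae(print_a, print_b, tol=2.0):
    """Count minutiae in print_a that have a counterpart in print_b
    within 'tol' units; each minutia may be matched at most once."""
    matches = 0
    unused = list(print_b)
    for (xa, ya) in print_a:
        for point in unused:
            xb, yb = point
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= tol ** 2:
                matches += 1
                unused.remove(point)
                break
    return matches

# Toy minutiae sets: (x, y) positions of ridge endings/bifurcations.
a = [(10, 12), (40, 35), (22, 60)]
b = [(11, 12), (41, 34), (80, 80)]
print(matched_minutiae(a, b))  # 2 of the 3 minutiae have counterparts
```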
2. Face
Facial images are the most common biometric characteristic used by humans to make a
personal recognition, hence the idea to use this biometric in technology. This is a nonintrusive
method and is suitable for covert recognition applications. The applications of facial recognition
range from static ("mug shots") to dynamic, uncontrolled face identification in a cluttered
background (subway, airport). Face verification involves extracting a feature set from a twodimensional image of the user's face and matching it with the template stored in a database.

The most popular approaches to face recognition are based on either: 1) the location and
shape of facial attributes such as eyes, eyebrows, nose, lips and chin, and their spatial relationships,
or 2) the overall (global) analysis of the face image that represents a face as a weighted
combination of a number of canonical faces. Although the performance of commercially available
systems is reasonable, there is still significant room for improvement, since the false reject rate
(FRR) is about 10% and the false accept rate (FAR) about 1%. These systems also have difficulties in recognizing a
face from images captured from two different angles and under different ambient illumination
conditions. It is questionable whether a face by itself is a sufficient basis for recognizing a person
from a large number of identities with an extremely high level of confidence. A facial recognition
system should be able to automatically detect a face in an image, extract its features and then
recognize it from a general viewpoint (i.e., from any pose), which is a rather difficult task. Another
problem is the fact that the face is a changeable social organ displaying a variety of expressions.
3. Retina
Retinal recognition creates an "eye signature" from the vascular configuration of the retina
which is supposed to be characteristic of each individual and each eye, respectively. Since it is
protected within the eye itself, and since it is not easy to change or replicate the retinal vasculature,
this is one of the most secure biometrics. Image acquisition requires a person to look through a lens
at an alignment target, and therefore implies cooperation of the subject. Also, a retinal scan can
reveal some medical conditions, and as such public acceptance is questionable.
4. Iris
The iris begins to form in the third month of gestation and the structures creating its
pattern are largely complete by the eighth month. Its complex pattern can contain many distinctive
features such as arching ligaments, furrows, ridges, crypts, rings, corona, freckles and a zigzag
collarette. Iris scanning is less intrusive than retinal because the iris is easily visible from several
meters away. Responses of the iris to changes in light can provide an important secondary
verification that the iris presented belongs to a live subject. Irises of identical twins are different,
which is another advantage. Newer systems have become more user-friendly and cost-effective. A
careful balance of light, focus, resolution and contrast is necessary to extract a feature vector from
localized image. While the iris seems to be consistent throughout adulthood, it varies somewhat up
to adolescence.
The iris pattern is captured by a special grey-scale camera at a distance of 10-40 cm from
the camera. Once the grey-scale image of the eye is obtained, the software tries to locate the iris
within the image. If an iris is found, the software creates a net of curves covering the iris. Based on
the darkness of the points along the lines, the software creates the iris code. Here, two influences
have to be taken into account. First, the overall darkness of the image is influenced by the lighting
conditions, so the darkness threshold used to decide whether a given point is dark or bright cannot
be static; it must be dynamically computed according to the overall picture darkness. Secondly, the
size of the iris changes as the size of the pupil changes, so a proper transformation must be done
before computing the iris code.
In the decision process, the matching software takes two iris codes and computes the
Hamming distance based on the number of differing bits. The Hamming distance score (0 means
identical iris codes) is then compared with the security threshold to make the final decision.
Computing the Hamming distance of two iris codes is very fast: it amounts to counting the set bits
in the exclusive OR of the two codes. The concept of template matching can also be implemented in
this technique: some statistical calculation is performed between a stored iris template and a newly
produced one, and the decision is taken depending on the result.
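The Hamming-distance decision step can be sketched directly. The iris codes below are toy 16-bit values (real iris codes are far longer, commonly 2048 bits) and the acceptance threshold is illustrative.

```python
def hamming_distance(code_a: int, code_b: int, n_bits: int) -> float:
    """Fraction of differing bits between two n_bits-long iris codes."""
    differing = bin(code_a ^ code_b).count("1")  # popcount of the XOR
    return differing / n_bits

enrolled = 0b1011001110100101
query = 0b1011001010100111  # differs from 'enrolled' in 2 bit positions

score = hamming_distance(enrolled, query, 16)
print(score)                # 0.125
print(score < 0.32)         # below the illustrative threshold: accept -> True
```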

5. Hand Geometry Technology
It is based on the fact that nearly every person's hand is shaped differently and that the
shape of a person's hand does not change after a certain age. These techniques include the
estimation of length, width, thickness and surface area of the hand. Various methods are used to
measure the hand, based on mechanical or optical principles.
One of the earliest automated biometric systems was installed during the late 1960s; it used
hand geometry and stayed in production for almost 20 years. The technique is very simple,
relatively easy to use and inexpensive. Dry weather or individual anomalies such as dry skin do not
appear to have any negative effects on the verification accuracy. Since hand geometry is not very
distinctive it cannot be used for identification of an individual from a large population, but rather in
a verification mode. Further, hand geometry information may not be invariant during the growth
period of children. Limitations in dexterity (arthritis) or even jewelry may influence extracting the
correct hand geometry information. This method can find commercial use in laptops rather easily.
There are even verification systems available that are based on measurements of only a few fingers
instead of the entire hand. These devices are smaller than those used for hand geometry.
6. Voice
The features of an individual's voice are based on physical characteristics such as the vocal
tract, mouth, nasal cavities and lips that are used in creating a sound. These characteristics of
human speech are invariant for an individual, but the behavioral part changes over time due to age,
medical conditions and emotional state.
Voice recognition techniques are generally categorized according to two approaches: 1)
Automatic Speaker Verification (ASV) and 2) Automatic Speaker Identification (ASI). Speaker
verification uses voice as the authenticating attribute in a two-factor scenario. Speaker
identification attempts to use voice to identify who an individual actually is. Voice recognition
distinguishes an individual by matching particular voice traits against templates stored in a
database. Voice systems must be trained to the individual's voice at enrollment time, and more than
one enrollment session is often necessary. Feature extraction typically measures formants or sound
characteristics unique to each person's vocal tract. The pattern matching algorithms used in voice
recognition are similar to those used in face recognition.
7. Signature
Signature is a simple, concrete expression of the unique variations in human hand
geometry. The way a person signs his or her name is known to be characteristic of that individual.
Collecting samples for this biometric requires subject cooperation and a writing
instrument. The signature is a behavioral biometric that changes over time and is
influenced by the physical and emotional condition of the subject. In addition to the general shape of the
signed name, a signature recognition system can also measure pressure and velocity of the point of
the stylus across the sensor pad.
8. Infrared thermogram (facial, hand or hand vein)
It is possible to capture the pattern of heat radiated by the human body with an infrared
camera. That pattern is considered to be unique for each person. It is a noninvasive method, but
image acquisition is rather difficult where there are other heat emanating surfaces near the body.
The technology could be used for covert recognition. A related technology using near infrared
imaging is used to scan the back of a fist to determine hand vein structure, also believed to be
unique. Like face recognition, it must deal with the extra issues of three-dimensional space and
orientation of the hand. A drawback is the high price of infrared sensors.

9. Gait
This is one of the newer technologies and is yet to be researched in more detail. Basically,
gait is the peculiar way one walks and it is a complex spatio-temporal biometrics. It is not supposed
to be very distinctive but can be used in some low-security applications. Gait is a behavioral
biometric and may not remain the same over a long period of time, due to change in body weight or
serious brain damage. Acquisition of gait is similar to acquiring a facial picture and may be an
acceptable biometric. Since a video sequence is used to measure several different movements, this
method is computationally expensive.
10. Keystroke
It is believed that each person types on a keyboard in a characteristic way. This is also not
very distinctive but it offers sufficient discriminatory information to permit identity verification.
Keystroke dynamics is a behavioral biometric; for some individuals, one could expect to observe
large variations in typical typing patterns. An advantage of this method is that the keystrokes of a
person using a system can be monitored unobtrusively as that person is keying in information.
Privacy, however, is another issue to consider here.
11. Odor
Each object spreads around an odor that is characteristic of its chemical composition and
this could be used for distinguishing various objects. This would be done with an array of chemical
sensors, each sensitive to a certain group of compounds. Deodorants and perfumes can reduce the
distinctiveness.
12. Ear
It has been suggested that the shape of the ear and the structure of the cartilaginous tissue
of the pinna are distinctive. Matching the distance of salient points on the pinna from a landmark
location of the ear is the suggested method of recognition in this case. This method is not believed
to be very distinctive.
Table : Comparison of various biometric technologies

13. DNA
Deoxyribonucleic acid (DNA) is probably the most reliable biometric. It is in fact a one-dimensional code unique for each person, the only exception being identical twins. This method, however, has
some drawbacks: 1) contamination and sensitivity, since it is easy to steal a piece of DNA from an
individual and use it for an ulterior purpose; 2) no real-time application is possible, because DNA
matching requires complex chemical methods involving expert skills; 3) privacy issues, since a DNA
sample taken from an individual may reveal that person's susceptibility to certain diseases. All
this limits the use of DNA matching to forensic applications.
It is obvious that no single biometric is the "ultimate" recognition tool and the choice
depends on the application; the table above gives a brief comparison of these techniques based on seven factors.
2. 4 BIOMETRIC SYSTEM PERFORMANCE
Due to different positioning on the acquiring sensor, imperfect imaging conditions,
environmental changes, deformations, noise and poor user interaction with the sensor, it is
practically impossible for two samples of the same biometric characteristic, acquired in different
sessions, to coincide exactly. For this reason a biometric matching system's response is typically a
matching score s (normally a single number) that quantifies the similarity between the input and the
database template representations. The higher the score, the more certain the system is that the
two samples coincide. The similarity score s is compared with an acceptance threshold t; if s is
greater than or equal to t, the compared samples are declared to belong to the same person.
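The accept/reject rule just described can be sketched in a few lines; the score and threshold values below are illustrative only, not taken from any particular system:

```python
def decide(score: float, threshold: float) -> str:
    """Declare a match when the similarity score s reaches the threshold t."""
    return "same person" if score >= threshold else "different persons"

# Illustrative: with threshold t = 0.8, a score of 0.91 is accepted.
print(decide(0.91, 0.8))  # same person
print(decide(0.55, 0.8))  # different persons
```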

Figure 5: Biometric system error rates


Pairs of biometric samples generating scores lower than t are declared to belong to different
persons. The distribution of scores generated from pairs of samples from different persons is called
the impostor distribution, and the score distribution generated from pairs of samples of the same
person is called the genuine distribution (Figure 5).
The main system errors are usually measured in terms of:

FNMR (false nonmatch rate): mistaking two biometric measurements from the same person to
be from two different persons;
FMR (false match rate): mistaking biometric measurements from two different persons to be
from the same person.
FNMR and FMR are basically functions of the system threshold t: if the system's designers
decrease t to make the system more tolerant to input variations and noise, FMR increases. On the
other hand, if they raise t to make the system more secure, FNMR increases accordingly. FMR and
FNMR are brought together in a receiver operating characteristic (ROC) curve that plots the FMR
against FNMR (or 1-FNMR) at different thresholds.
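The trade-off between the two error rates can be illustrated with a short sketch that computes FMR and FNMR at several thresholds; the genuine and impostor score lists are invented toy data, not measurements from a real system:

```python
def fnmr(genuine_scores, t):
    """False nonmatch rate: fraction of genuine pairs scoring below t."""
    return sum(s < t for s in genuine_scores) / len(genuine_scores)

def fmr(impostor_scores, t):
    """False match rate: fraction of impostor pairs scoring at or above t."""
    return sum(s >= t for s in impostor_scores) / len(impostor_scores)

# Toy score distributions (illustrative only).
genuine  = [0.91, 0.85, 0.78, 0.95, 0.88, 0.70, 0.93, 0.81]
impostor = [0.20, 0.35, 0.15, 0.42, 0.55, 0.28, 0.48, 0.10]

# Raising t lowers FMR but raises FNMR, and vice versa.
for t in (0.3, 0.5, 0.75):
    print(f"t={t}: FMR={fmr(impostor, t):.3f}  FNMR={fnmr(genuine, t):.3f}")
```

Plotting these (FMR, FNMR) pairs for every threshold would trace out the ROC curve described above.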
There are two other recognition error rates that can be also used and they are: failure to
capture (FTC) and failure to enroll (FTE). FTC denotes the percentage of times the biometric
device fails to automatically capture a sample when presented with a biometric characteristic. This
usually happens when the system deals with a signal of insufficient quality. The FTE rate denotes the
percentage of times users cannot enroll in the recognition system.
Equal Error Rate (EER): the rate at which acceptance and rejection errors are equal. ROC or
DET plots are used because they clearly show how FAR (FMR) and FRR (FNMR) trade off against
each other. When a quick comparison of two systems is required, the EER is commonly used; it is
obtained from the ROC plot by taking the point where FAR and FRR have the same value. The lower
the EER, the more accurate the system is considered to be.
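As a rough sketch of how an EER could be estimated in practice, the following sweeps the threshold over [0, 1] and takes the point where the two error rates are closest; the score lists are invented toy data, and a real evaluation would use far larger samples:

```python
def eer(genuine, impostor, steps=1000):
    """Sweep the threshold over [0, 1] and return (threshold, rate) at the
    point where FMR and FNMR are closest; their mean approximates the EER."""
    best = (float("inf"), 0.0, 0.0)          # (gap, threshold, rate)
    for i in range(steps + 1):
        t = i / steps
        fnmr = sum(s < t for s in genuine) / len(genuine)
        fmr = sum(s >= t for s in impostor) / len(impostor)
        gap = abs(fmr - fnmr)
        if gap < best[0]:
            best = (gap, t, (fmr + fnmr) / 2)
    return best[1], best[2]

# Invented toy score distributions with a small overlap region.
genuine  = [0.91, 0.85, 0.62, 0.95, 0.88, 0.70, 0.93, 0.81]
impostor = [0.20, 0.35, 0.15, 0.42, 0.72, 0.28, 0.66, 0.10]
t, rate = eer(genuine, impostor)
print(f"EER = {rate:.3f} at threshold t = {t:.2f}")  # EER = 0.125
```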
Template Capacity: the maximum number of templates (sets of data) that can be stored in the
system.
3. Secure Electronic Payment System
3.1 Introduction:
As payment is an integral part of mercantile process, electronic payment system is an
integral part of e-commerce. The emergence of e-commerce has created new financial needs that in
many cases cannot be effectively fulfilled by traditional payment systems. For instance, new types
of purchasing relationships, such as online auctions between individuals, have created the need
for peer-to-peer payment methods that allow individuals to e-mail payments to one
another.
Increasingly, people are using computer networks to access and pay for goods and services
with electronic money. E-money or digital cash is merely an electronic representation of funds; its
use results in a net transfer of funds from one party to another. The primary function of e-cash or
e-money is to facilitate transactions on the network, and e-money is a necessary innovation in
the financial markets. In an electronic payment system, the server stores records of every transaction.
When the electronic payment system goes online, it communicates with the shops and with
the customers, who can deposit their money, and the server uploads these records for auditing
purposes.
Size of Electronic Payments: Electronic payment is conducted in different e-commerce
categories such as Business-to-Business (B2B), Business-to-Consumer (B2C), Consumer-to-Business (C2B) and Consumer-to-Consumer (C2C), each of which has special characteristics that
depend on the value of the order. Danial classified electronic payment systems as follows:
1. Micro payments (less than $10), conducted mainly in C2C and B2C e-commerce.
2. Consumer payments, with a value between $10 and $500, conducted mainly in B2C
transactions.
3. Business payments, with a value of more than $500, conducted mainly in B2B e-commerce.
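This size classification amounts to a simple lookup by order value; the function name below is hypothetical, and the boundary values are taken directly from the list above:

```python
def payment_class(amount_usd: float) -> str:
    """Classify a payment by order value (thresholds from Danial's list)."""
    if amount_usd < 10:
        return "micro payment"        # mainly C2C and B2C
    if amount_usd <= 500:
        return "consumer payment"     # mainly B2C
    return "business payment"         # mainly B2B

print(payment_class(4.99))     # micro payment
print(payment_class(120.00))   # consumer payment
print(payment_class(2500.00))  # business payment
```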

3.2 Process of Electronic Payment System


Electronic payment systems have been in operation since the 1960s and have been expanding
rapidly as well as growing in complexity. After the development of conventional payment systems,
EFT (Electronic Funds Transfer) based payment systems came into existence. EFT was the first
electronic payment system that did not depend on a central processing intermediary. An electronic
funds transfer is a financial application of EDI (Electronic Data Interchange), which sends credit card
numbers or electronic cheques via secured private networks between banks and major
corporations. To use EFT to clear payments and settle accounts, an online payment service needs
to add capabilities to process orders, accounts and receipts. A landmark in this direction came
with the development of digital currency. The nature of digital currency or electronic
money mirrors that of paper money as a means of payment, so digital currency payment
systems have the same advantages as paper currency payment, namely anonymity and
convenience. As in other electronic payment systems (i.e. EFT based and intermediary based),
security during transaction and storage is a concern here too, although from a different
perspective: for digital currency systems, double spending, counterfeiting and storage are the
critical issues, whereas eavesdropping and the issue of liability (when charges are made without
authorization) are important for notational funds transfer. Figure 6 shows a digital currency
based payment system.

Figure 6: Electronic payment system


3.3 TYPES OF ELECTRONIC PAYMENT SYSTEMS
With the growing complexity of e-commerce transactions, different electronic payment
systems have appeared in the last few years; dozens of electronic payment systems have been
proposed or are already in practice. They can be grouped on the basis of what information is being
transferred online. Murthy (2002) described six types of electronic payment systems: (1) PC-Banking,
(2) Credit Cards, (3) Electronic Cheques (i-cheques), (4) Micropayments, (5) Smart Cards and
(6) E-Cash. Kalakota and Whinston identified three types: (1) digital token based electronic
payment systems, (2) smart card based electronic payment systems and (3) credit based electronic
payment systems. Dennis classified electronic payment systems into two categories: (1) electronic
cash and (2) electronic debit-credit card systems. Thus, electronic payment systems can be broadly
divided into four general types:
[1] Online Credit Card Payment System
[2] Electronic Cheque System
[3] Electronic Cash System and
[4] Smart Card based Electronic Payment System
3.4 Secure electronic payment system:
This system includes two main parts: a client module and a server module. The purpose of the
client module is to pass requests made by the client to the server, which stores all transaction
information in a set of data files. The server site responds by sending all the items the client
requests. The client process also contains solution-specific logic and provides the interface between
the user and the rest of the application system. The user interface module and the server module
communicate using the TCP/IP protocol.
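The request/response exchange between the client module and the server over TCP/IP can be sketched with standard sockets; the message format, the one-shot echo-style server and all names below are invented for illustration, not taken from the system described:

```python
import socket
import threading

# A throwaway server socket; port 0 lets the OS choose a free port.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

def serve() -> None:
    """Toy server: accept one connection, read the request, acknowledge it.
    The real server would look the request up in its transaction data files."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("OK:" + request).encode())

threading.Thread(target=serve, daemon=True).start()

# Client side: the user interface passes a (hypothetical) purchase request.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"PURCHASE:item=42;amount=15.00")
    reply = client.recv(1024).decode()

print(reply)  # OK:PURCHASE:item=42;amount=15.00
```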
The system described here deals with the implementation of a client/server database for a secure
electronic payment system. A database management system (DBMS) is a computer program that
serves as a tool for storing data in a database, retrieving information from it and keeping it
up to date. A DBMS is very helpful to users who need to input large amounts of information and
perform vast numbers of calculations at the same time.
To make the electronic payment system secure, cryptography is used to protect
conventional transaction data such as account numbers, amounts and other information. To most
people, cryptography is concerned with keeping communication private; indeed, the protection of
sensitive communications has been the emphasis of cryptography throughout much of its history.
The RC5 algorithm is used for data encryption and decryption, providing confidentiality and
security.
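The text names RC5 as the cipher; a minimal sketch of RC5-32/12/16 (32-bit words, 12 rounds, 16-byte key) operating on a single 64-bit block might look as follows. It shows only the key schedule and round structure; a production system would use an audited library together with a proper mode of operation, padding and authentication:

```python
WORD, ROUNDS, KEYLEN = 32, 12, 16       # RC5-32/12/16 parameters
MASK = (1 << WORD) - 1
P32, Q32 = 0xB7E15163, 0x9E3779B9       # RC5 magic constants for w = 32

def rotl(x, n):
    n %= WORD
    return ((x << n) | (x >> (WORD - n))) & MASK

def rotr(x, n):
    n %= WORD
    return ((x >> n) | (x << (WORD - n))) & MASK

def expand_key(key: bytes) -> list:
    """RC5 key schedule: mix the secret key into the round-subkey table S."""
    c = KEYLEN // 4
    L = [int.from_bytes(key[i:i + 4], "little") for i in range(0, KEYLEN, 4)]
    t = 2 * (ROUNDS + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def encrypt_block(pt, S):
    """Encrypt one 64-bit block given as a pair of 32-bit words."""
    A = (pt[0] + S[0]) & MASK
    B = (pt[1] + S[1]) & MASK
    for r in range(1, ROUNDS + 1):
        A = (rotl(A ^ B, B) + S[2 * r]) & MASK
        B = (rotl(B ^ A, A) + S[2 * r + 1]) & MASK
    return A, B

def decrypt_block(ct, S):
    """Invert the rounds in reverse order."""
    A, B = ct
    for r in range(ROUNDS, 0, -1):
        B = rotr((B - S[2 * r + 1]) & MASK, A) ^ A
        A = rotr((A - S[2 * r]) & MASK, B) ^ B
    return (A - S[0]) & MASK, (B - S[1]) & MASK

S = expand_key(b"0123456789abcdef")     # 16-byte demo key (illustrative only)
block = (0x12345678, 0x9ABCDEF0)        # one 64-bit plaintext block
ct = encrypt_block(block, S)
assert decrypt_block(ct, S) == block    # round trip recovers the plaintext
```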
3.4.1 OVERVIEW OF THE SYSTEM
This system is designed on a client/server architecture developed for Windows in a networked
environment. It includes two main parts, a user interface module and a server module; this thesis
presents the user interface module. The modules of the secure electronic payment system are
shown in Figure 7. The purpose of the user interface module is to pass requests made by the client
to the server. The server stores all transaction information in a set of data files. There are two
types of clients: customers and shop owners.

Figure 7: Modules of Secure Electronic Payment System

The communication between the client and the server uses the TCP/IP protocol. The database
management system serves as a tool for storing data in a database and retrieving information for
all users in the system. To secure the system, the symmetric key cryptosystem RC5 is used to
protect conventional transaction data such as account numbers, amounts and other information.

Figure 8: Overview of the System


This system is secure for customers and shop owners because it has been designed from
the start for the needs of the network. The security architecture of the electronic payment system is
built around an RC5 encryption/decryption program. The system is intended to eliminate the fraud
that occurs today with stolen credit card numbers. An overview of the secure electronic payment
system is shown in Figure 8.
3.4.2 USER INTERFACE FOR CLIENT MODULE
The user interface for the client module is the link between the customers and the
system. The user interface takes all input from the user and passes the information to the server,
which deals with the database system. It is through this interface that the user interacts with the
system, and it is the one module through which tasks are controlled under the server. The user
interface module flow chart is shown in Figure 9.

Figure 9: User Interface Module Flow Chart


3.4.3. SYSTEM DESIGN
The system design involves customers and shop owners who own shops. The client
module contains databases relating to categories, goods, purchase details, sales details and
shop owners.
