
Question 1:

i) What are the Nyquist and the Shannon limits on the channel capacity? Elaborate through
examples.
Ans. In information theory, the Shannon–Hartley theorem is an application of the noisy
channel coding theorem to the archetypal case of a continuous-time analog communications channel subject
to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a
bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a
specified bandwidth in the presence of the noise interference, under the assumption that the signal power is
bounded and the Gaussian noise process is characterized by a known power or power spectral density. The
law is named after Claude Shannon and Ralph Hartley.
Statement of the theorem
Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem
states that the channel capacity C, meaning the theoretical tightest upper bound on the information rate
(excluding error correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent with a given
average signal power S through an analog communication channel subject to additive white Gaussian noise
of power N, is:
C = B log2(1 + S/N)
where C is the channel capacity in bits per second; B is the bandwidth of the channel in
hertz (pass band bandwidth in case of a modulated signal); S is the total received signal power over the
bandwidth (in case of a modulated signal, often denoted C, i.e. modulated carrier), measured in watt or volt2;
N is the total noise or interference power over the bandwidth, measured in watt or volt2; and S/N is the
signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the Gaussian
noise interference expressed as a linear power ratio (not as logarithmic decibels).
Historical development
During the late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental ideas related to
the transmission of information, particularly in the context of the telegraph as a communications system. At
the time, these concepts were powerful breakthroughs individually, but they were not part of a
comprehensive theory. In the 1940s, Claude Shannon developed the concept of channel capacity, based in
part on the ideas of Nyquist and Hartley, and then formulated a complete theory of information and its
transmission.
Nyquist rate
In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph
channel per unit time is limited to twice the bandwidth of the channel. In symbols, fp <= 2B, where fp is the
pulse frequency (in pulses per second) and B is the bandwidth (in hertz). The quantity 2B later came to be called
the Nyquist rate, and transmitting at the limiting pulse rate of 2B pulses per second as signaling at the
Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain topics in Telegraph
Transmission Theory."
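As a worked example, both limits can be computed for a 3000 Hz telephone-grade channel, using the standard capacity forms C = 2B log2(M) (Nyquist, noiseless channel with M signal levels) and C = B log2(1 + S/N) (Shannon). The figures are illustrative, not taken from the text:

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Nyquist limit for a noiseless channel: C = 2B log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit for a noisy channel: C = B log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3000 Hz telephone-grade channel (illustrative figures):
b = 3000
print(nyquist_capacity(b, 2))   # binary signalling: 6000 bps
print(nyquist_capacity(b, 4))   # 4-level signalling: 12000 bps
snr_db = 30                     # 30 dB SNR -> linear ratio of 1000
print(shannon_capacity(b, 10 ** (snr_db / 10)))  # roughly 29900 bps
```

Note how the Nyquist limit keeps growing with the number of levels, while the Shannon limit caps what any encoding can achieve once noise is present.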
ii) Differentiate between the following:
a) FDM and TDM
The resource allocation method is the most important factor for the efficient design of Sub MAP.
Particularly, the multiplexing between data regions and control regions is a key issue. The time-division
multiplexing (TDM) and the frequency-division multiplexing (FDM) can be considered for IEEE802.16m
system. The accompanying figure (not reproduced here) shows the concept of the resource allocation based on
TDM and FDM.
TDM is to divide a sub frame into a MAP region and a data burst region by OFDMA symbol, and FDM is to
divide a sub frame by sub channels or resource blocks (RB’s). In the figure, the TDM Sub MAP is located in
the second symbol of the sub frame, which helps enhance the channel estimation performance in high-speed
mobile environments.
b) Circuit Switching and Packet Switching
In principle, circuit switching and packet switching are both used in high-
capacity networks. In circuit-switched networks, network resources are static, set in “copper” if you will,
from the sender to receiver before the start of the transfer, thus creating a “circuit”. The resources remain
dedicated to the circuit during the entire transfer and the entire message follows the same path. In packet-
switched networks, the message is broken into packets, each of which can take a different route to the
destination where the packets are recompiled into the original message. All the above can be handled by a
router or a switch but much of IT today is going toward flat switched networks. So when we’re talking about
circuit switching or packet switching, we are more and more talking about doing it on a switch.

First, let’s be sure we understand what we mean by a switched network. A switched network goes through a
switch instead of a router. This actually is the way most networks are headed, toward flat switches on
VLANs instead of routers. Still, it's not always easy to tell a router from a switch. It's commonly believed
that the difference between a switched network and a routed network is a simple binary opposition. It isn't so.
A router operates at Layer 3 of the OSI Model and can create and connect several logical networks,
including those of different network topologies, such as Ethernet and Token Ring. A router will provide
multiple paths (compared to only one on a bridge) between segments and will map nodes on a segment and
the connecting paths with a routing protocol and internal routing tables. Being a Layer 3 device, the router
uses the destination IP address to decide where a frame should go. If the destination IP address is on a
segment directly connected to the router, then the router will forward the frame out the appropriate port to
that segment. If not, the router will search its routing table for the correct destination, again, using that IP
address. Having talked about a router as being a Layer 3 device, think about what I’m about to say next as a
general statement. I know there are exceptions, namely the Layer 3 switch. We’re not going to get into that,
not in this article. A switch is very like a bridge in that it is usually a Layer 2 device that looks at MAC
addresses to determine where data should be directed. A switch has other applications in common with a
bridge. Like a bridge, a switch will use transparent and source-route methods to move data and Spanning
Tree Protocol (STP) to avoid loops. However, switches are superior to bridges because they provide greater
port density and they can be configured to make more intelligent decisions about where data goes. The three
most common switch methods are:
1. Cut-through - Streams data so that the first part of a packet exits the switch before the rest of the packet
has finished entering the switch, typically within the first 12 bytes of an Ethernet frame.
2. Store-and-Forward - The entire frame is copied into the switch's memory buffer and it stays there while
the switch processes the Cyclical Redundancy Check (CRC) to look for errors in the frame. If the frame
contains no errors, it will be forwarded. If a frame contains an error, it will be dropped. Obviously, this
method has higher latency than cut-through but there will be no fragments or bad frames taking up
bandwidth.
3. Fragment-free Switching - Think of this as a hybrid of cut-through and store-and-forward. The switch
reads only the first 64 bytes of the frame into its buffer before forwarding. Since collision fragments are
always shorter than 64 bytes, this catches them while keeping latency lower than store-and-forward.
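The store-and-forward decision above can be sketched as a toy example. CRC-32 from Python's zlib stands in for Ethernet's actual frame check sequence computation, which this text does not specify:

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Sender side: append a CRC-32 frame check sequence to the payload."""
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def store_and_forward(frame: bytes) -> bool:
    """Switch side: buffer the whole frame, recompute the CRC over the
    payload, and forward only if it matches (True = forward, False = drop)."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

good = append_fcs(b"hello world")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])  # corrupt the last FCS byte
print(store_and_forward(good))  # True  -> forwarded
print(store_and_forward(bad))   # False -> dropped
```

Because the entire frame must arrive before the check can run, the latency cost grows with frame size, which is exactly the trade-off against cut-through described above.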
c) Virtual Circuit and Datagram
Differences between datagram and virtual circuit networks
There are a number of important differences between virtual circuit and datagram networks. The choice
strongly impacts the complexity of the different types of node. Use of datagrams between intermediate nodes
allows relatively simple protocols at this level, but at the expense of making the end (user) nodes more
complex when end-to-end virtual circuit service is desired. The Internet transmits datagrams between
intermediate nodes using IP. Most Internet users need additional functions such as end-to-end error and
sequence control to give a reliable service (equivalent to that provided by virtual circuits). This reliability
may be provided by the Transmission Control Protocol (TCP) which is used end-to-end across the Internet,
or by applications such as the Trivial File Transfer Protocol (TFTP) running on top of the User Datagram Protocol
(UDP).
Fig: virtual circuit & datagram (figure not reproduced here)
d) Stop and Wait Protocol and Sliding Window Protocol
This experiment will bring out the correct choice of the packet sizes for transmission in Noisy channels.
Open two sliding window (S/W) applications, each in one computer. Assign Node Id 1 as receiver and Node
Id 0 as sender. Conduct the experiment for 200 seconds. Set the link rate to be 8kbps. Set the protocol to
CSMA/CD. Set the No. of Packets (Window size) to 1 in both nodes (because Stop and Wait protocol is a
window size-1 algorithm). Make sure the No. of Nodes is set to 2 and the Inter Packet delay (IPD) in both
the nodes is set to 0. This makes sure no delay is introduced in the network other than the transmission
delay. Set the TX/Rx Mode to be Promiscuous mode and the direction as sender or Receiver accordingly.
Set BER to 0 and run the experiment. Find out the size of the transmitted packets and the acknowledgement
packets received. Calculate the overhead involved in the transmitted packets for different packet sizes.
Choose packet sizes 10..100 bytes in multiples of 10. Now set BER to 10^-3 and perform the experiment.
Give the timeout as 1000 ms. Calculate the throughputs. Perform the previous steps now for a BER of 10^-4 for
packet sizes (100..900) bytes in steps of 100 and calculate the throughputs. Longer packet sizes are chosen
because the BER is lower. Give a larger timeout as packets are longer (say, 2000 ms). Plot throughput vs
packet size curves and find out the optimum packet size for the different BERs.
Sliding Window Protocol
This
experiment will bring out the necessity of increasing the transmitter and receiver window sizes and the
correct choice of the window size in a delay network.
1. Set up the same configuration as for the previous experiment.
2. Set BER to 0 and fix the packet size at 100 bytes.
3. Set IPD at the sender constantly at 20 ms and the IPD at the receiver to vary between 40 to 190 ms (in
steps of 50). This setting simulates various round-trip delays in the network.
4. Change the Window sizes from 1 to 5 in both the nodes together. Give large timeout, 10000ms, as this
will make sure that there are very few re-transmissions. Now perform the experiment.
5. Plot the throughput vs Window Sizes for different IPDs and find out the optimum window size for
different delays.
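The throughput behaviour these experiments measure can be anticipated with the textbook utilization model U = min(1, W / (1 + 2a)), where W is the window size and a is the ratio of one-way delay to frame transmission time. Treating the receiver IPD as that delay is a simplifying assumption made only for this sketch:

```python
def utilization(window, t_frame, t_prop):
    """Link utilization of a sliding-window sender (simplified model):
    U = W / (1 + 2a) with a = Tprop / Tframe, capped at 1.
    Stop-and-wait is the special case W = 1."""
    a = t_prop / t_frame
    return min(1.0, window / (1 + 2 * a))

t_frame = 0.1  # a 100-byte frame at 8 kbps: 800 bits / 8000 bps = 0.1 s
for t_prop in (0.04, 0.09, 0.14, 0.19):  # the IPD settings above, in seconds
    row = [round(utilization(w, t_frame, t_prop), 2) for w in range(1, 6)]
    print(f"Tp={t_prop}s:", row)
```

The model predicts the shape of the curves the experiment asks for: the longer the delay, the larger the window needed before utilization saturates at 1.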

Question 2:
i) Compare the features of different 10 Mbps, 100 Mbps and 1000 Mbps Ethernet technologies.
Ans) Ethernet (the name commonly used for IEEE 802.3 CSMA/CD) is the dominant cabling and low level
data delivery technology used in local area networks (LANs). First developed in the 1970s, it was published
as an open standard by DEC, Intel, and Xerox (or DIX), and later described as a formal standard by the
IEEE. Following are some Ethernet features:
• Ethernet transmits data at up to ten million bits per second (10Mbps). Fast Ethernet supports up to
100Mbps and Gigabit Ethernet supports up to 1000Mbps. Many buildings on the Indiana University campus
are wired with Fast Ethernet and the campus backbone is Gigabit Ethernet.
• Ethernet supports networks built with twisted-pair (10BaseT), thin and thick coaxial (10Base2 and
10Base5, respectively), and fiber-optic (10BaseF) cabling. Fast Ethernets can be built with twisted-pair
(100BaseT) and fiber-optic (100BaseF) cabling. Currently, 10BaseT Ethernets are the most common.
• Data is transmitted over the network in discrete packets (frames) which are between 64 and 1518 bytes in
length (46 to 1500 bytes of data, plus a mandatory 18 bytes of header and CRC information).
• Each device on an Ethernet network operates independently and equally, precluding the need for a central
controlling device.
• Ethernet supports a wide array of data types, including TCP/IP, AppleTalk, and IPX.
• To prevent the loss of data, when two or more devices attempt to send packets at the same time, Ethernet
detects collisions. All devices immediately stop transmitting and wait a randomly determined period of time
before they attempt to transmit again.
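The 64-to-1518-byte frame bounds listed above can be sketched in code. This is a simplified frame builder, with the 18 bytes of header and CRC left as a placeholder rather than a real Ethernet header:

```python
MIN_PAYLOAD, MAX_PAYLOAD = 46, 1500
OVERHEAD = 18  # mandatory header + CRC bytes

def build_frame(data: bytes) -> bytes:
    """Pad short payloads up to 46 bytes so every frame is at least
    64 bytes on the wire; reject payloads over the 1500-byte maximum."""
    if len(data) > MAX_PAYLOAD:
        raise ValueError("payload exceeds 1500 bytes")
    payload = data.ljust(MIN_PAYLOAD, b"\x00")  # zero-pad short payloads
    header_and_crc = b"\x00" * OVERHEAD         # placeholder, not a real header
    return header_and_crc + payload

print(len(build_frame(b"hi")))        # 64   (minimum frame)
print(len(build_frame(b"x" * 1500)))  # 1518 (maximum frame)
```

The minimum size matters for collision detection: a frame must stay on the wire long enough for a collision anywhere on the segment to be heard by the sender.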
ii) Explain the basic operation of collision detection in Ethernet.
Collision detected procedure:
1. Continue transmission until minimum packet time is reached (jam signal) to ensure that all receivers
detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait a random backoff period based on the number of collisions.
5. Re-enter main procedure at stage 1.
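The backoff in step 4 is the truncated binary exponential backoff mentioned later in this answer. A sketch, assuming the standard 51.2 µs slot time of 10 Mbps Ethernet and the usual caps of 10 on the exponent and 16 on attempts:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential backoff: pick a random number of
    slot times in [0, 2^k - 1], where k is capped at 10."""
    k = min(collisions, 10)
    return random.randint(0, 2 ** k - 1)

def backoff_delay_us(collisions: int) -> float:
    """Delay before the next attempt; abort after too many collisions
    (step 3 of the procedure above)."""
    if collisions >= MAX_ATTEMPTS:
        raise RuntimeError("too many collisions; abort transmission")
    return backoff_slots(collisions) * SLOT_TIME_US

print(backoff_delay_us(1))  # 0 or 51.2 (one of two possible slots)
print(backoff_delay_us(3))  # one of 0..7 slots
```

Doubling the range on every collision is what makes repeated simultaneous retries increasingly unlikely, as the dinner-party analogy below describes.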
This can be likened to what happens at a dinner party, where all the guests talk to each other through a
common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two
guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this
time is generally measured in microseconds). The hope is that by each choosing a random period of time,
both guests will not choose the same time to try to speak again, thus avoiding another collision.
Exponentially increasing back-off times (determined using the truncated binary exponential back-off
algorithm) are used when there is more than one failed attempt to transmit. Computers were connected to an
Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later with thin
Ethernet the transceiver was integrated into the network adapter). While a simple passive wire was highly
reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a
single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint
systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in
such a manner that some nodes would work properly while others work slowly because of excessive retries
or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose
than a complete failure of the segment. Debugging such failures often involved several people crawling
around wiggling connectors while others watched the displays of computers running a ping command and
shouted out reports as performance changed. Since all communications happen on the same wire, any
information sent by one computer is received by all, even if that information is intended for just one
destination. The network interface card interrupts the CPU only when applicable packets are received: the
card ignores information not addressed to it unless it is put into "promiscuous mode". This "one speaks, all
listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can
eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is
shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart.
iii) Explain the advantages of fiber optics over copper wire and coaxial cable.
Advantages of fiber optic cable:
• System Performance
• Greatly increased bandwidth and capacity
• Lower signal attenuation (loss)
• Immunity to Electrical Noise
• Immune to noise (electromagnetic interference [EMI] and radio-frequency interference
[RFI])
• No crosstalk
• Lower bit error rates
• Signal Security
• Difficult to tap
• Nonconductive (does not radiate signals)
• Electrical Isolation
• No common ground required
• Freedom from short circuits and sparks
• Size and Weight
• Reduced size and weight cables
• Environmental Protection
• Resistant to radiation and corrosion
• Resistant to temperature variations
• Improved ruggedness and flexibility
• Less restrictive in harsh environments
• Overall System Economy
• Low per-channel cost
• Lower installation cost
Question 3:
i) Describe how the TCP/IP protocol stack is organized compared to the ISO/OSI protocol stack.
The layers near the top are logically closer to the user application, while those near the bottom are logically
closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a
method of abstraction to isolate upper layer protocols from the nitty-gritty detail of transmitting bits over,
for example, Ethernet and collision detection, while the lower layers avoid having to know the details of
each and every application and its protocol. This
abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to,
provide. Again, the original OSI Reference Model was extended to include connectionless services (OSIRM
CL). [6] For example, IP is not designed to be reliable and is a best effort delivery protocol. This means that
all transport layer implementations must choose whether or not to provide reliability and to what degree.
UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data
integrity and delivery guarantee (by retransmitting until the receiver acknowledges the reception of the
packet). This model lacks the formalism of the OSI reference model and associated documents, but the IETF
does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark,
"We reject: kings, presidents and voting. We believe in: rough consensus and running code." Criticisms of
this model, which have been made with respect to the OSI Reference Model, often do not consider ISO's
later extensions to that model.
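The checksum by which UDP provides data integrity, mentioned above, is the 16-bit one's-complement Internet checksum (RFC 1071), also used by TCP and the IPv4 header. A minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words (RFC 1071),
    as used by UDP, TCP and the IPv4 header."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# First four bytes of a typical IPv4 header, as an example input:
print(hex(internet_checksum(b"\x45\x00\x00\x1c")))  # 0xbae3
```

The end-around carry ("folding") is what makes the sum one's-complement arithmetic; a receiver summing the data together with the transmitted checksum should get 0xFFFF.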
1. For multi-access links with their own addressing systems (e.g. Ethernet) an address mapping protocol is
needed. Such protocols can be considered to be below IP but above the existing link system. While the IETF
does not use the terminology, this is a sub network dependent convergence facility according to an extension
to the OSI model, the Internal Organization of the Network Layer (IONL) [7].
2. ICMP & IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality
exists as layer management extensions to the OSI model, in its
Management Framework (OSIRM MF) [8]
3. The SSL/TLS library operates above the transport layer (utilizes TCP) but below application protocols.
Again, there was no intention, on the part of the designers of these protocols, to comply with OSI
architecture.
4. The link is treated like a black box here. This is fine for discussing IP (since the whole point of IP is it
will run over virtually anything). The IETF explicitly does not intend to discuss transmission systems, which
is a less academic but practical alternative to the OSI
Reference Model.
ii) Describe the relationship between IP address & MAC address
MAC addresses are typically used only to direct packets in the device-to-device portion of a network
transaction. That means that your computer's MAC address will be in network packets only until the next
device in the chain. If you have a router, then your machine's MAC address will go no further than that.
Your router's MAC address will show up in packets sent further upstream, until that too is replaced by the
MAC address of the next device - likely either your modem or your ISP's router. So your MAC address
doesn't make it out very far. Even if someone knows your MAC address, that knowledge certainly doesn't
help anyone do anything either good or bad. An IP address is assigned to every device on a network so that
device can be located on the network. The internet is just a network after all, and every device connected to
it has an IP address so that it can be located. The server that houses Ask Leo!, for example, is at
72.3.133.152. That number is used by the network routing equipment so that when you ask for a page from
the site, that request is routed to the right server. The computers or equipment you have connected to the
internet are also assigned IP addresses. If you're directly connected, your computer will have an IP address
that can be reached from anywhere on the internet. If you're behind a router, that router will have that
internet-visible IP address, but it will then set up a private network that your computer is connected to,
assigning IP addresses out of a private range that is not directly visible on the internet. All internet traffic
must go through the router, and will appear on the internet to have come from that router.
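The public/private address split described above can be checked with Python's standard ipaddress module. The public address is the one quoted in the text; the private one is a typical router-assigned address:

```python
import ipaddress

def visibility(addr: str) -> str:
    """Classify an address as private (behind a router/NAT)
    or internet-visible, per the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return "private (behind a router/NAT)" if ip.is_private else "internet-visible"

print(visibility("192.168.1.10"))  # private (behind a router/NAT)
print(visibility("72.3.133.152"))  # internet-visible
```

This is the mechanism behind the last paragraph: hosts with private addresses can only reach the internet through the router's internet-visible address.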

Question 4:
i) Why use FTP to transfer files rather than e-mail?
FTP (File Transfer Protocol) is a way you can move files from one computer location (network) to another.
To make an FTP connection you can use a standard Web browser (Internet Explorer, Netscape, etc.) or a
dedicated FTP software program, referred to as an FTP 'Client'. If you have ever used fetch (Mac), WFTP
(PC) or FTP (Win 95), you are already familiar with some clients for file transfer. Web browsers, like
Netscape, can be another ftp tool that is particularly easy to use. This document is about using a web
browser, which will work fine if you only need access to your home directory (the same as H: if you are on
campus) or to the samples directory; if you need access to other directories (like orchard) you would need to
use a client program. To help people without ftp access, a number of ftp sites have set up mail servers (also
known as archive servers) that allow you to get files via e-mail. You send a request to one of these machines
and they send back the file you want. As with ftp, you'll be able to find everything from historical
documents to software (but please note that if you do have access to ftp, that method is always quicker and
ties up fewer resources than using e-mail).
ii) What is the purpose of the time to live field of the IP datagram header?
Time to live (sometimes abbreviated TTL) is a limit on the period of time or number of iterations or
transmissions in computer and computer network technology that a unit of data (e.g. a packet) can
experience before it should be discarded.
IP packets
In IPv4, time to live (TTL) is an 8-bit field in the Internet Protocol (IP) header. It is the 9th octet of 20. The
time to live value can be thought of as an upper bound on the time that an IP datagram can exist in an
internet system. The TTL field is set by the sender of the datagram, and reduced by every host on the route
to its destination. If the TTL field reaches zero before the datagram arrives at its destination, then the
datagram is discarded and an ICMP error datagram (11 - Time Exceeded) is sent back to the sender. The
purpose of the TTL field is to avoid a situation in which an undeliverable datagram keeps circulating on an
internet system, and such a system eventually becoming swamped by such immortal datagrams. In theory,
time to live is measured in seconds, although every host that passes the datagram must reduce the TTL by at
least one unit. In practice, the TTL field is reduced by one on every hop. To reflect this practice, the field is
named hop limit in IPv6.
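The per-hop behaviour can be sketched as a small simulation (the hop counts are illustrative, and the simplification that the final host also decrements is noted in the code):

```python
def forward(ttl: int, hops_to_destination: int) -> str:
    """Simulate per-hop TTL handling: each router decrements TTL by one
    and discards the datagram when it hits zero before the destination.
    (Simplification: the destination host is modelled as one more hop.)"""
    for hop in range(1, hops_to_destination + 1):
        ttl -= 1
        if ttl == 0 and hop < hops_to_destination:
            return f"dropped at hop {hop}; ICMP Time Exceeded sent to source"
    return "delivered"

print(forward(ttl=64, hops_to_destination=12))  # delivered
print(forward(ttl=3, hops_to_destination=12))   # dropped at hop 3; ...
```

Tools like traceroute exploit exactly this: by sending probes with TTL 1, 2, 3, ... they collect the Time Exceeded replies from each successive router on the path.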
iii) How does the 3-way handshake mechanism for creating a TCP connection work?
The three-way handshake in Transmission Control Protocol (also called the three message handshake) is the
method used to establish and tear down network connections. This handshaking technique is referred to as
the 3-way handshake or as "SYN-SYN-ACK" (or more accurately SYN, SYN-ACK, ACK). The TCP
handshaking mechanism is designed so that two computers attempting to communicate can negotiate the
parameters of the network connection before beginning communication. This process is also designed so
that both ends can initiate and negotiate separate connections at the same time.
3-Way Handshake Description
Below is a (very) simplified description of the TCP 3-way handshake process (the accompanying diagram is
not reproduced here):
Host A sends a TCP Synchronize packet to Host B
Host B receives A's SYN
Host B sends a Synchronize Acknowledgement
Host A receives B's SYN-ACK
Host A sends Acknowledge
Host B receives ACK. TCP connection is ESTABLISHED.
Synchronize and acknowledge messages are indicated by a bit inside the TCP header of the segment. TCP
knows whether the network connection is opening, synchronizing or established by using the Synchronize
and Acknowledge messages when establishing a network connection. When the communication between
two computers ends, another 3-way communication is performed to tear down the TCP connection. This
setup and teardown of a TCP connection is part of what qualifies TCP a reliable protocol. Note that UDP
does not perform this 3-way handshake and for this reason, it is referred to as an unreliable protocol.
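The event list above can be traced in code. The initial sequence numbers 100 and 300 are made-up illustrative values; real TCP stacks choose them unpredictably:

```python
# TCP flag names (simplified to strings for this sketch)
SYN, ACK = "SYN", "ACK"

def three_way_handshake():
    """Trace the SYN, SYN-ACK, ACK exchange with initial sequence
    numbers; each ACK acknowledges the peer's sequence number + 1."""
    isn_a, isn_b = 100, 300  # made-up initial sequence numbers
    trace = []
    trace.append(("A->B", {SYN}, isn_a, None))            # A: SYN, seq=100
    trace.append(("B->A", {SYN, ACK}, isn_b, isn_a + 1))  # B: SYN-ACK
    trace.append(("A->B", {ACK}, isn_a + 1, isn_b + 1))   # A: ACK, established
    return trace

for direction, flags, seq, ack in three_way_handshake():
    print(direction, sorted(flags), "seq=", seq, "ack=", ack)
```

The consumed sequence number on each side (the "+ 1") is what lets both ends confirm that the other actually received its SYN, rather than replaying an old packet.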
iv) What is the different between a port and a socket?
Ans: A port is a software address on a computer on the network; for instance, the News server is a piece of
software that is normally addressed through port 119, the POP server through port 110, the SMTP server
through port 25, and so on. A socket is a communication path to a port. When you want your program to
communicate over the network, you give it a way of addressing the port, and this is done by creating a
socket and attaching it to the port. Basically, a socket = IP address + port: sockets provide access to the
combination of an IP address and a port.
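A minimal sketch using Python's standard socket module. Binding to port 0 asks the OS for any free port, so the example has no fixed-port side effects:

```python
import socket

# Create a socket (a communication endpoint) and attach it to a port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # socket = IP address + port
ip, port = server.getsockname()        # the OS-chosen ephemeral port
print(f"socket bound to {ip}:{port}")
server.close()
```

A well-known service would bind to its fixed port instead (119 for news, 110 for POP, 25 for SMTP, as listed above), which is how clients know where to find it.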

Question 5:
i) Explain the difference between distance vector and link state routing protocol through examples.
DISTANCE VECTOR
Distance is the cost of reaching a destination, usually based on the number of hosts the path passes through,
or the total of all the administrative metrics assigned to the links in the path.
Vector
From the standpoint of routing protocols, the vector is the interface that traffic will be forwarded out of in
order to reach a given destination network along a route or path selected by the routing protocol as the best
path to the destination
network. Distance vector protocols use a distance calculation plus an outgoing network interface (a vector)
to choose the best path to a destination network. The network protocol (IPX, SPX, IP, AppleTalk,
DECnet, etc.) will forward data using the best paths selected.
Common distance vector routing protocols include:
• AppleTalk RTMP
• IPX RIP
• IP RIP
• IGRP
Advantages of Distance Vector Protocols
Well supported: protocols such as RIP have been around a long time and most, if not all, devices that
perform routing will understand RIP.
LINK STATE
Link State protocols track the status and connection type of each link and produce a calculated metric based
on these and other factors, including some set by the network administrator. Link state protocols know
whether a link is up or down and how fast it is and calculate a cost to 'get there'. Since routers run routing
protocols to figure out how to get to a destination, you can think of the 'link states' as being the status of the
interfaces on the router. Link State protocols will take a path which has more hops, but that uses a faster
medium over a path using a slower medium with fewer hops. Because of their awareness of media types and
other factors, link state protocols require more processing power (more circuit logic in the case of ASICs)
and memory. Distance vector algorithms being simpler require simpler hardware.
A Comparison: Link State vs. Distance Vector
See Fig. 1-1 (not reproduced here). If all routers were running a Distance Vector protocol, the path or 'route'
chosen would be from A to B directly over the ISDN serial link, even though that link is about 10 times slower
than the indirect route A-C-D-B. A Link State protocol would choose the A-C-D-B path because it uses a
faster medium (100 Mb Ethernet). In this example, it would be better to run a Link State routing protocol,
but if all the links in the network are the same speed, then a Distance Vector protocol is better.
ii) Why is the security of the web very important today? Also outline the design goals and features of SSL 3.0.
The great importance of Web security
However, while using the Internet, along with the convenience and speed of access to information come new
risks. Among them are the risks that valuable information will be lost, stolen, corrupted, or misused and that
the computer systems will be corrupted. If information is recorded electronically and is available on
networked computers, it is more vulnerable than if the same information is printed on paper and locked in a
file cabinet. Intruders do not need to enter an office or home, and may not even be in the same country. They
can steal or tamper with information without touching a piece of paper or a photocopier. They can create
new electronic files, run their own programs, and even hide all evidence of their unauthorized activity.
Basic Web security concepts
The three basic security concepts important to information on the Internet are:
1. Confidentiality.
2. Integrity.
3. Availability.
This document introduces the Secure Sockets Layer (SSL) protocol. Originally developed by Netscape, SSL
has been universally accepted on the World Wide Web for authenticated and encrypted communication
between clients and servers. The new Internet Engineering Task Force (IETF) standard called Transport
Layer Security (TLS) is based on SSL. This was recently published as an IETF Internet-Draft, The TLS
Protocol Version 1.0. Netscape products will fully support TLS. This document is primarily intended for
administrators of Netscape server products, but the information it contains may also be useful for developers
of applications that support SSL. The document assumes that you are familiar with the basic concepts of
public-key cryptography, as summarized in the companion document Introduction to Public-Key
Cryptography.
The SSL Protocol
The Transmission Control Protocol/Internet Protocol (TCP/IP) governs the transport and routing of data
over the Internet. Other protocols, such as the Hypertext Transport Protocol (HTTP), Lightweight Directory
Access Protocol (LDAP), or Internet Messaging Access Protocol (IMAP), run "on top of" TCP/IP in the
sense that they all use TCP/IP to support typical application tasks such as displaying web pages or running
email servers.
Figure 1: SSL runs above TCP/IP and below high-level application protocols.
The SSL
protocol runs above TCP/IP and below higher-level protocols such as HTTP or IMAP. It uses TCP/IP on
behalf of the higher-level protocols, and in the process allows an SSL-enabled server to authenticate itself to
an SSL-enabled client, allows the client to authenticate itself to the server, and allows both machines to
establish an encrypted connection. These capabilities address fundamental concerns about communication
over the Internet and other TCP/IP networks:
• SSL server authentication allows a user to confirm a server's identity. SSL-enabled client software can
use standard techniques of public-key cryptography to check that a server's certificate and public ID are
valid and have been issued by a certificate authority (CA) listed in the client's list of trusted CAs. This
confirmation might be important if the user, for example, is sending a credit card number over the network
and wants to check the receiving server's identity.
• SSL client authentication allows a server to confirm a user's identity. Using the same techniques as those
used for server authentication, SSL-enabled server software can check that a client's certificate and public ID
are valid and have been issued by a certificate authority (CA) listed in the server's list of trusted CAs. This
confirmation might be important if the server, for example, is a bank sending confidential financial
information to a customer and wants to check the recipient's identity.
• An encrypted SSL connection requires all information sent between a client and a server to be encrypted
by the sending software and decrypted by the receiving software, thus providing a high degree of
confidentiality. Confidentiality is important for both parties to any private transaction. In addition, all data
sent over an encrypted SSL connection is protected with a mechanism for detecting tampering--that is, for
automatically determining whether the data has been altered in transit. The SSL protocol includes two sub-
protocols: the SSL record protocol and the SSL handshake protocol. The SSL record protocol defines the
format used to transmit data. The SSL handshake protocol involves using the SSL record protocol to
exchange a series of messages between an SSL-enabled server and an SSL-enabled client when they first
establish an SSL connection. This exchange of messages is designed to facilitate the following actions:
• Authenticate the server to the client.
• Allow the client and server to select the cryptographic algorithms, or ciphers, that they both support.
• Optionally authenticate the client to the server.
• Use public-key encryption techniques to generate shared secrets.
• Establish an encrypted SSL connection.
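In practice the handshake steps above are driven by a library. A sketch using Python's standard ssl module (SSL's modern descendant TLS): the host name is purely illustrative, and the call needs network access, so it is left commented out:

```python
import socket
import ssl

def tls_handshake_info(host: str, port: int = 443):
    """Open a TLS connection, letting the library perform the handshake
    described above, and report the negotiated version and cipher."""
    context = ssl.create_default_context()  # verifies the server against trusted CAs
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

# Requires network access; "example.com" is just an illustrative host:
# version, cipher = tls_handshake_info("example.com")
# print(version, cipher)
```

The default context performs the server-authentication step from the bullet list (checking the certificate against the client's list of trusted CAs); client authentication and cipher selection happen inside the same wrap_socket handshake.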
