Table of contents
1.1. Internet architectures (client-server, peer-to-peer)
1.2. Internet protocols (IPv4, IPv6, TCP, UDP)
1.2.1. IPv4
1.2.2. IPv6
1.2.3. TCP
1.2.4. UDP
1.3. Internet routing and network interconnection
1.4. Fundamental Internet technologies (DNS, DHCP/DHCPv6)
1.4.1. DNS
1.4.2. DHCP/DHCPv6
1.5. World Wide Web (WWW)
1.6. Important Internet services (E-mail, FTP, BitTorrent, Skype, Youtube, social networking)
1.7. Internet regulation and network neutrality
Abbreviations
References
But over time the formerly simple and clear Internet architecture became a
patchwork of new multimedia application demands, balconies, detours,
wormholes, workarounds and bypasses. Moreover, the current Internet has
many limitations, such as: processing and handling limitations, storage
limitations, IPv4 address limitations, transmission limitations, and control
and operation limitations. The Internet and its architecture have grown in
evolutionary fashion from modest beginnings, rather than from a Grand Plan.
While this process of evolution is one of the main reasons for the technology's
success, it nevertheless seems useful to record an overview of the current
principles of the Internet architecture.
The fact is that in the near future, the high volume of content together with
new emerging and mission-critical applications is expected to stress the Internet
to such a degree that it will possibly not be able to respond adequately to its new
role. This challenge has motivated many groups and research initiatives
worldwide to search for structural modifications to the Internet architecture in
order to be able to meet the new requirements.
But first of all, let us see what the Internet IP architecture consists of.
The architecture for IP networks may consist of three parts: the Application
(or Service) Model, the System Model and the Technology Model. The relationships
among the three parts of the IP network architecture are shown in Figure 1.2.
Performance parameters for the system and its components should also
be defined in this model. The system model for IP network architecture can be
described with its functions divided into two planes (or directions): the entities
plane (horizontal direction) and the logical plane (vertical direction).
Functions on the entities plane of the system model for IP network architecture
can be divided into three sections: core network, access network and customer
network. Each of them can be further divided in detail; for example, functions of a
core network can be divided into two layers: IP layer function and
telecommunication layer function. Further information regarding more detailed
functionality distribution can be found in ITU-T Y.1231 - IP Access Network
Architecture. The architecture details for the (Telecommunications) Access
Network Transport Function can be found in ITU-T G.902 - Framework
Recommendation on functional Access.
The technology model for IP network architecture should consist of a
series of technical standards or recommendations, describing configuration,
interrelation and interaction of various components in an IP network as shown
abstractly in Figure 1.4. The technology model comprises a diversified set of
Figure 1.5. Illustration of the Hourglass Protocol Stack, where at the network layer there is
only one protocol: IP.
Generally, the communication between the client and server can use TCP,
UDP or another protocol, but both sides need to use the same type of protocol and
appropriate socket interfaces (a stream socket for TCP, and a datagram
socket for UDP).
In the case of connectionless communication (based on UDP/IP), no
explicit identification of the server is required. When sending a datagram via
UDP/IP, the sending application needs to specify its own IP address and port
number on the local machine (i.e., to open a datagram socket) through which
the datagram will be sent. When a machine expects incoming datagrams (via
UDP/IP), the receiving application must declare the IP address and port on the
local machine (i.e., open a datagram socket) through which it expects to
receive datagrams from other machines (i.e., hosts).
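As an illustrative sketch only (in Python, running sender and receiver on the same machine, with the operating system picking a free port), the datagram-socket pattern just described might look like this:

```python
import socket

# A minimal sketch of connectionless (UDP) communication on one machine.

# The receiver declares the local IP address and port on which it expects
# datagrams, i.e. it opens (binds) a datagram socket.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
recv_addr = receiver.getsockname()

# The sender opens its own datagram socket; no connection is established
# and no explicit identification of a "server" is required.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", recv_addr)

data, from_addr = receiver.recvfrom(1024)
print(data)                            # b'hello'

sender.close()
receiver.close()
```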
In the case of connection-oriented communication (TCP) the approach is
different:
- The client must connect to the server before receiving or sending data
from/to it.
- The server listens on a specific port and IP address (on a given interface),
and must accept the communication with the client before sending or
receiving data.
- The server can accept a client when it receives a request to connect from
the client.
Well-known Internet services (i.e., applications) that use client-server
communication are Electronic mail (E-mail), File Transfer Protocol (FTP),
Hypertext Transfer Protocol (HTTP), etc., which are covered in chapter 5
(World Wide Web (WWW)) and chapter 6 (Important Internet services (E-mail,
FTP, BitTorrent, Skype, Youtube, social networking)) of this module. Also,
fundamental Internet technologies, such as DHCP and DNS (included in chapter
4 of this module), are based on the client-server model.
Due to this approach, P2P networks have high scalability and robustness
because they do not rely on a single network host, such as the server in
client-server network architectures. As shown in Figure 1.7, in P2P networks each
participant who is connected to the network is a node with equal access to
network resources and to all other users. The owner of each node (e.g.,
computer) on a P2P network is supposed to set up certain resources (e.g.,
processing power, access data rate to Internet, memory on the hard disk, etc.)
which are shared with other nodes in the P2P network. In such a way, a P2P
network is a distributed application architecture that partitions certain tasks
among several peers. Hence, P2P networking is based on establishing a temporary
logical architecture of peers (nodes in the P2P network), as an overlay network
in the Internet, where peers act as clients or servers to other nodes in the P2P
network allowing shared access to different resources such as files, streams
(e.g., video streams), devices (e.g., sensors), etc.
1.1.3 General design issues for Internet architecture
Furthermore, let us summarise the general design issues for Internet
architecture:
Heterogeneity is inevitable and must be supported by design: Multiple
types of hardware must be allowed for, e.g. transmission speeds differing
Figure 1.8. Four objectives and twelve design goals of future networks.
Figure 1.8 above shows the relationships between the four objectives
described in clause 7 of [10] and the twelve design goals described in this
clause. It should be noted that some design goals, such as network
management, mobility, identification, and reliability and security, may relate to
multiple objectives. Figure 1.8 shows only the relationships between a design
goal and its most relevant objective.
Figure 1.9. General form of an IP datagram, the TCP/IP analogy to a network frame.
Informally called Type Of Service (TOS), the 8-bit SERVICE TYPE field
specifies how the datagram should be handled. The field was originally divided
into five subfields as shown in Figure 1.11.
Figure 1.11. The original five subfields that comprise the 8-bit SERVICE TYPE field.
Figure 1.12. The differentiated services (DS) interpretation of the SERVICE TYPE field
in an IP datagram.
Under the differentiated services interpretation, the first six bits comprise a
codepoint, which is sometimes abbreviated DSCP, and the last two bits are left
unused. A codepoint value maps to an underlying service definition, typically
through an array of pointers. Although it is possible to define 64 separate
services, the designers suggest that a given router will only have a few services,
As Table 1.1 indicates, half of the values (i.e., the 32 values in pool 1)
must be assigned interpretations by the IETF. Currently, all values in pools 2 and
3 are available for experimental or local use. However, if the standards bodies
exhaust all values in pool 1, they may also choose to assign values in pool 3.
The division into pools may seem unusual because it relies on the low-order
bits of the value to distinguish pools. Thus, rather than a contiguous set of
values, pool 1 contains every other codepoint value (i.e., the even numbers
between 2 and 64). The division was chosen to keep the eight codepoints
corresponding to values xxx000 in the same pool.
Whether the original ToS interpretation or the revised differentiated
services interpretation is used, it is important to realize that routing software must
choose from among the underlying physical network technologies at hand and
must adhere to local policies. Thus, specifying a level of service in a datagram
does not guarantee that routers along the path will agree to honor the request.
To summarize this part of the Section: we regard the service type
specification as a hint to the routing algorithm that helps it choose among various
paths to a destination based on local policies and its knowledge of the hardware
technologies available on those paths. An internet does not guarantee to provide
any particular type of service.
Furthermore, the three fields in the datagram header, IDENTIFICATION,
FLAGS, and FRAGMENT OFFSET, control fragmentation and reassembly of
datagrams. Field IDENTIFICATION contains a unique integer that identifies the
datagram. Recall that when a router fragments a datagram, it copies most of the
fields in the datagram header into each fragment. Thus, the IDENTIFICATION
field must be copied. Its primary purpose is to allow the destination to know
which arriving fragments belong to which datagrams. As a fragment arrives, the
destination uses the IDENTIFICATION field along with the datagram source
address to identify the datagram.
Computers sending IP datagrams must generate a unique value for the
IDENTIFICATION field for each datagram. One technique used by IP software
keeps a global counter in memory, increments it each time a new datagram is
created, and assigns the result as the datagram's IDENTIFICATION field.
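The counter technique described above can be sketched as follows (a simplified illustration; real IP stacks keep this counter in kernel memory and must also cope with wraparound of the 16-bit field):

```python
import itertools

# Sketch of the counter technique: IP software keeps a global counter and
# assigns its next value as each new datagram's IDENTIFICATION field.
# The field is 16 bits wide, so values wrap modulo 65536.
_ident_counter = itertools.count(1)

def next_identification():
    return next(_ident_counter) % 65536

ids = [next_identification() for _ in range(3)]
print(ids)  # [1, 2, 3]
```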
Recall that each fragment has exactly the same format as a complete
datagram. For a fragment, field FRAGMENT OFFSET specifies the offset in the
original datagram of the data being carried in the fragment, measured in units of
8 octets, starting at offset zero. To reassemble the datagram, the destination
must obtain all fragments starting with the fragment that has offset 0 through the
fragment with highest offset. Fragments do not necessarily arrive in order, and
there is no communication between the router that fragmented the datagram and
the destination trying to reassemble it. The low-order two bits of the 3-bit FLAGS
field control fragmentation. Usually, application software using TCP/IP does not
care about fragmentation because both fragmentation and reassembly are
automatic procedures that occur at a low level in the operating system, invisible
to end users. However, to test internet software or debug operational problems, it
may be important to test sizes of datagrams for which fragmentation occurs. The
first control bit aids in such testing by specifying whether the datagram may be
fragmented. It is called the do not fragment bit because setting it to 1 specifies
that the datagram should not be fragmented. An application may choose to
disallow fragmentation when only the entire datagram is useful.
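Because the FRAGMENT OFFSET is measured in units of 8 octets, every fragment except the last must carry a multiple of 8 data octets. A hedged sketch of how a router might compute fragment offsets (the function name and return format are illustrative, not from any real IP stack):

```python
def fragment_offsets(payload_len, mtu_payload):
    """Split a datagram payload into (offset_in_8_octet_units, size,
    more_fragments) tuples, as described in the text."""
    per_frag = (mtu_payload // 8) * 8      # round down to an 8-octet multiple
    frags = []
    off = 0
    while off < payload_len:
        size = min(per_frag, payload_len - off)
        more = (off + size) < payload_len  # "more fragments" flag
        frags.append((off // 8, size, more))
        off += size
    return frags

# 4000 octets of data over a link that can carry 1480 payload octets:
print(fragment_offsets(4000, 1480))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```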
Moreover, the field TIME TO LIVE specifies how long, in seconds, the
datagram is allowed to remain in the internet system. The idea is both simple and
important: whenever a computer injects a datagram into the internet, it sets a
maximum time that the datagram should survive. Routers and hosts that process
datagrams must decrement the TIME TO LIVE (TTL) field as time passes and
remove the datagram from the internet when its time expires. Estimating exact
times is difficult because routers do not usually know the transit time for physical
networks. A few rules simplify processing and make it easy to handle datagrams
without synchronized clocks. First, each router along the path from source to
destination is required to decrement the TTL field by 1 when it processes the
datagram header. Furthermore, to handle cases of overloaded routers that
introduce long delays, each router records the local time when the datagram
arrives and decrements the TTL by the number of seconds the datagram
remained inside the router waiting for service.
Whenever a TTL field reaches zero, the router discards the datagram and
sends an error message back to the source. The idea of keeping a timer for
datagrams is interesting because it guarantees that datagram cannot travel
around an internet forever, even if routing tables become corrupt and routers
route datagrams in a circle. Although once important, the notion of a router
delaying a datagram for many seconds is now outdated; current routers and
networks are designed to forward each datagram within a reasonable time. If the
delay becomes excessive, the router simply discards the datagram. Thus, in
practice, the TTL acts as a "hop limit" rather than an estimate of delay, so each
router only decrements the value by one.
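The modern "hop limit" behaviour can be sketched in a few lines (the function is a didactic model, not router code; it returns None to stand for the discard-and-report-error case):

```python
# Each router decrements TTL by one and discards the datagram
# (sending an error back to the source) when TTL reaches zero.
def forward(ttl, hops):
    """Return the remaining TTL on arrival, or None if discarded en route."""
    for _ in range(hops):
        ttl -= 1
        if ttl == 0:
            return None        # router discards, sends error to source
    return ttl

print(forward(64, 10))  # 54
print(forward(3, 5))    # None: discarded at the third router
```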
Field PROTOCOL is analogous to the type field in a network frame; the
value specifies which high-level protocol was used to create the message carried
in the DATA area of the datagram. In essence, the value of PROTOCOL
specifies the format of the DATA area. The mapping between a high-level protocol
and the integer value used in the PROTOCOL field must be administered by a
central authority to guarantee agreement across the entire Internet.
Field HEADER CHECKSUM ensures integrity of header values. The IP
checksum is formed by treating the header as a sequence of 16-bit integers (in
network byte order), adding them together using one's complement arithmetic,
and then taking the one's complement of the result. For purposes of computing
the checksum, field HEADER CHECKSUM is assumed to contain zero.
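The checksum procedure just described can be sketched directly (the sample header bytes are a standard illustrative IPv4 header, not taken from this module):

```python
def ip_header_checksum(header):
    """One's-complement checksum over the header taken as 16-bit words
    (network byte order); the HEADER CHECKSUM field is assumed zero."""
    # Zero the checksum field (octets 10-11) before computing.
    h = header[:10] + b"\x00\x00" + header[12:]
    total = 0
    for i in range(0, len(h), 2):
        total += (h[i] << 8) | h[i + 1]
    # Fold carries back in (one's-complement addition), then invert.
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A 20-octet sample header whose correct checksum is 0xB861:
hdr = bytes.fromhex("450000730000400040 11 b861 c0a80001 c0a800c7".replace(" ", ""))
print(hex(ip_header_checksum(hdr)))  # 0xb861
```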
It is important to note that the checksum only applies to values in the IP
header and not to the data. Separating the checksum for headers and data has
advantages and disadvantages. Because the header usually occupies fewer
octets than the data, having a separate checksum reduces processing time at
routers which only need to compute header checksums. The separation also
allows higher level protocols to choose their own checksum scheme for the data.
The chief disadvantage is that higher level protocols are forced to add their own
checksum or risk having corrupted data go undetected.
Fields SOURCE IP ADDRESS and DESTINATION IP ADDRESS contain
the 32-bit IP addresses (IPv4 addresses) of the datagram's sender and intended
recipient. Although the datagram may be routed through many intermediate
routers, the source and destination fields never change; they specify the IP
addresses of the original source and ultimate destination.
Every host on the Internet has its own IP address, which consists of two
parts: the network ID (network part of the IP address) and the host ID (host part
of the IP address), and has a total length of 32 bits for IP version 4. The network
ID defines the network, and if the network is to be part of the Internet it is
assigned by a global authority, the Internet Corporation for Assigned Names and
Numbers (ICANN), usually through its regional organizations. For each new
network that requires access to the Internet, ICANN assigns a network ID. The
host ID uniquely identifies a given host in the network. An IP address uniquely
identifies a specific network interface on a given Internet host (e.g., a computer,
mobile device, etc.) in a given IP network. There are two types of IP addressing:
Classful addressing;
Classless addressing (Classless Inter-Domain Routing - CIDR).
Classful addressing
Classful addressing is defined with five different classes of IP addresses,
shown in Figure 1.13. A class-A IP address allows the existence of 126 different
networks with 16 million hosts per network; class B includes 16,382 networks
with 65,534 hosts per network; class C includes 2 million networks with 254 hosts
per network.
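In classful addressing, the class is determined by the leading bits of the address, i.e. by the value of the first octet. A small sketch (the function name is illustrative):

```python
def address_class(dotted):
    """Classify an IPv4 address by its leading bits (classful addressing)."""
    first = int(dotted.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0    (networks 1-126 usable)
    if first < 192:
        return "B"   # leading bits 10
    if first < 224:
        return "C"   # leading bits 110
    if first < 240:
        return "D"   # leading bits 1110 (multicast)
    return "E"       # leading bits 1111 (reserved)

print(address_class("10.1.2.3"))     # A
print(address_class("172.16.0.1"))   # B
print(address_class("192.168.0.1"))  # C
```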
There are key features of IPv6 which may significantly impact Broadband
Internet in various ways, such as addressing schemes, QoS, security and
mobility:
- Simplified packet format: IPv6 headers are simplified from IPv4 headers.
Some IPv4 header fields have been dropped or made optional to limit their
bandwidth cost. They also have a constant size to reduce the common
processing cost of packet handling.
- Expanded addressing scheme: IPv6 addressing schemes have a large
addressing space due to an increased size of the IP address fields to support
more levels of addressing hierarchy, a much greater number of addressable
nodes and interfaces, and a simpler autoconfiguration of addresses. The
scalability of multicast routing is improved by adding a "scope" field to multicast
addresses. In addition, a new type of address called an "anycast address" is
defined and is used to send a packet to any of a group of nodes.
- QoS: Flow label and traffic class fields in the IPv6 header are added to
enable the labeling of packets belonging to particular traffic "flows" for which the
sender requests special handling, such as non-default quality of service or
"real-time" service. In addition, the IPv6 hop-by-hop header with the router-alert
option indicates the contents of IPv6 packets to support selective processing by
intermediate nodes.
- Security support: IPv6 supports built-in IPsec services such as
authentication, data integrity and data confidentiality using authentication header
(AH) and encapsulating security payload (ESP) extension headers. These enable
end-to-end security services via global IP addresses even though intermediate
nodes do not understand the IPsec headers.
- Mobility support: IPv6 capabilities such as neighbour discovery, address
resolution and reachability detection support the mobility services using
destination option, routing and mobility extension headers.
Furthermore, the format of IPv6 header is shown in Figure 1.14.
have to use smaller packets. In such a case, higher header redundancy leads to
less efficient utilization of the available links and network capacity than with IPv4.
IPv6 addressing architecture
IPv6 addressing differs from IPv4 addressing. Each IPv6 address has a
length of 128 bits (i.e., 16 bytes) and is divided into three parts, unlike IPv4
addresses, which have only two parts (network ID and host ID). Due to the
larger address, IPv6 addresses are written in colon hexadecimal notation, in
which the 128 bits are divided into 8 sections, each of 16 bits (which equals
4 hexadecimal digits). The preferred form is x:x:x:x:x:x:x:x, where each
"x" is 1 to 4 hexadecimal digits. It is fewer than 4 when a section has leading
zeros, as shown in the example below.
Example of IPv6 address:
2001:0000:0000:0000:0008:0800:200C:417A, which may be written also
as:
2001:0:0:0:8:800:200C:417A, and in compressed mode it will be:
2001::8:800:200C:417A (the use of "::" replaces one or more groups of
consecutive zeros in the IPv6 address, and can be used only once).
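The same compression rules are applied by Python's standard library, which can be used to check the notation in the example above:

```python
import ipaddress

# The standard library applies the rules described above: drop leading zeros
# in each 16-bit group and collapse one run of zero groups with "::".
addr = ipaddress.IPv6Address("2001:0000:0000:0000:0008:0800:200C:417A")
print(addr.compressed)  # 2001::8:800:200c:417a
print(addr.exploded)    # 2001:0000:0000:0000:0008:0800:200c:417a
```

(Note that the library prints hexadecimal digits in lowercase.)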
There are three types of IPv6 addresses:
- Unicast: This is an identifier for a single interface in the network.
- Anycast: This type of address is used when an identifier is given to a set of
network interfaces, which may belong to different nodes. When a packet
is sent to an anycast destination address it should be delivered (by means
of routing) to the nearest node in the set (according to the routing
metrics). This type of addressing appears with IPv6 (IPv4 defines
unicast and multicast addresses, but also local broadcast addresses,
which are not present in IPv6).
- Multicast: This is an identifier for a set of network interfaces. A packet
addressed to a multicast address will be delivered to all interfaces in the
set.
IPv6 also allows CIDR notation (as it exists for IPv4), which is performed
by using IPv6 address and a binary prefix mask, as given in IPv6 notation in
Table 1.3.
Binary prefix              IPv6 notation
00...00 (128 zeros)        ::/128
00...01 (128 bits)         ::1/128
11111111 (8 ones)          FF00::/8
1111111010 (10 bits)       FE80::/10
IPv6 addressing architecture has one hierarchy layer more than IPv4. The
general format for global unicast IPv6 addresses has three parts: global routing
prefix, subnet ID, and interface ID (Figure 1.15). All global unicast IPv6
addresses (other than those with leading zeros, which in fact have embedded
IPv4 addresses in the lowest 32 bits) have a 64-bit interface ID.
Link-local IPv6 addresses (Table 1.1) are used for so-called stateless
address autoconfiguration, where the 64 bits of the interface ID are obtained
from the interface's link address (e.g., using 16 zeros concatenated with the
48-bit Ethernet address of the given interface). IPv6 stateful address
autoconfiguration is provided with DHCP in the same manner as in IPv4.
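The simplified formation rule just described (16 zero bits concatenated with the 48-bit Ethernet address, under the FE80::/10 link-local prefix) can be sketched as follows; note this follows the text's example, whereas modern EUI-64 formation additionally inserts FF:FE in the middle of the MAC and flips the universal/local bit:

```python
import ipaddress

def link_local_from_mac(mac):
    """Sketch of stateless autoconfiguration as described above:
    interface ID = 16 zero bits followed by the 48-bit Ethernet address,
    appended to the link-local prefix FE80::/64."""
    mac_bits = int(mac.replace(":", ""), 16)   # 48-bit MAC as an integer
    prefix = 0xFE80 << 112                     # FE80:: with the rest zero
    return ipaddress.IPv6Address(prefix | mac_bits)

print(link_local_from_mac("00:1A:2B:3C:4D:5E"))  # fe80::1a:2b3c:4d5e
```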
We can conclude that IPv6 is a well-defined protocol to support current and
future Internet functions. In the following, we outline the impact of using IPv6
on the Internet from various viewpoints.
Enhanced service capabilities: IPv6 enables congestion/flow control using
additional QoS information such as the flow label. The flow label field of
the IPv6 header enables IPv6 flow identification independently of transport
layer protocols. This means that new enhanced service capabilities can be
introduced more easily. IPv6 supports better mobility by removing the triangle
routing problem. IPv6 supports secure networking using embedded IPv6
security solutions such as ESP and AH.
Any-to-any IP connectivity: IP connectivity will be one of the vital features
in order to cope with the increasing number of end users/devices. Using
globally routable IPv4 addresses to network millions of devices, such as
sensors, is not feasible. On the other hand, IPv6 offers the advantages of
localizing traffic with unique local addresses, while making some devices
globally reachable by assigning addresses which are scoped globally.
Therefore, the greatest potential of IPv6 will be realized in object-to-object
communications. IPv6 can satisfy this end-to-end principle of the Internet.
Self-organization and service discovery using autoconfiguration: IPv6
provides an autoconfiguration capability using the neighbour discovery
protocol, etc. By linking the IP layer with the lower layers,
autoconfiguration readily enables self-organization and service discovery
for network management and reduces management requirements.
Multi-homing using IPv6 addressing: IPv6 can handle multiple
heterogeneous access interfaces and/or multiple IPv6 addresses through
TCP uses the window principle as its main operating mechanism, but
with a few differences:
Since TCP provides a byte-stream connection, sequence numbers are
assigned to each byte in the stream. TCP divides this contiguous byte stream
into TCP segments to transmit them. The window principle is used at the byte
level, that is, the segments sent and ACKs received will carry byte-sequence
numbers and the window size is expressed as a number of bytes, rather than a
number of packets.
The window size is determined by the receiver when the connection is
established and is variable during the data transfer. Each ACK message will
include the window size that the receiver is ready to deal with at that particular
time.
The sender's data stream can now be seen as in Figure 1.17:
Where:
A: Bytes that are transmitted and have been acknowledged.
B: Bytes that are sent but not yet acknowledged.
C: Bytes that can be sent without waiting for any acknowledgment.
D: Bytes that cannot be sent yet.
Remember that TCP will block bytes into segments, and a TCP segment
only carries the sequence number of the first byte in the segment.
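The four regions A-D above can be expressed as byte ranges of the stream. A small sketch (function name, argument names, and the dictionary keys are illustrative, not TCP terminology):

```python
def window_categories(acked, sent, window, total):
    """Partition a byte stream (byte numbers 0..total-1) into the four
    regions of the TCP send window described in the text."""
    edge = min(acked + window, total)      # right edge of the send window
    return {
        "A_acknowledged": (0, acked),      # transmitted and acknowledged
        "B_sent_unacked": (acked, sent),   # sent, not yet acknowledged
        "C_sendable":     (sent, edge),    # may be sent without waiting
        "D_not_yet":      (edge, total),   # cannot be sent yet
    }

# 1000 bytes acked, 1500 sent, a 2000-byte window over a 5000-byte stream:
print(window_categories(1000, 1500, 2000, 5000)["C_sendable"])  # (1500, 3000)
```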
Furthermore, the TCP segment format is shown in Figure 1.18.
Where:
Source Port: The 16-bit source port number, used by the receiver to
reply.
Destination Port: The 16-bit destination port number.
Sequence Number: The sequence number of the first data byte in this
segment. If the SYN control bit is set, the sequence number is the initial
sequence number (n) and the first data byte is n+1.
Acknowledgment Number: If the ACK control bit is set, this field contains
the value of the next sequence number that the receiver is expecting to receive.
Data Offset: The number of 32-bit words in the TCP header. It indicates
where the data begins.
Reserved: Six bits reserved for future use; must be zero.
URG: Indicates that the urgent pointer field is significant in this segment.
ACK: Indicates that the acknowledgment field is significant in this
segment.
PSH: Push function.
RST: Resets the connection.
SYN: Synchronizes the sequence numbers.
FIN: No more data from sender.
Window: Used in ACK segments. It specifies the number of data bytes,
beginning with the one indicated in the acknowledgment number field, that the
receiver (= the sender of this segment) is willing to accept.
Checksum: The 16-bit one's complement of the one's complement sum
of all 16-bit words in a pseudo-header, the TCP header, and the TCP data. While
computing the checksum, the checksum field itself is considered zero.
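The fixed 20-octet header layout described above can be unpacked with a short sketch (field names in the returned dictionary are illustrative; the hand-built SYN segment exists only for this example):

```python
import struct

def parse_tcp_header(segment):
    """Unpack the fixed 20-octet TCP header fields described in the text."""
    (src, dst, seq, ack, off_flags, window, checksum, urgent) = \
        struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "source_port": src,
        "dest_port": dst,
        "sequence": seq,
        "acknowledgment": ack,
        "data_offset": off_flags >> 12,   # number of 32-bit words in header
        "flags": off_flags & 0x3F,        # URG ACK PSH RST SYN FIN bits
        "window": window,
        "checksum": checksum,
    }

# A hand-built SYN segment: port 1234 -> 80, seq 1000, offset 5, SYN bit set.
syn = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 8192, 0, 0)
print(parse_tcp_header(syn)["flags"])  # 2 (only the SYN bit)
```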
TCP sends data in variable length segments. Sequence numbers are
based on a byte count. Acknowledgments specify the sequence number of the
next byte that the receiver expects to receive. Consider that a segment gets lost
or corrupted. In this case, the receiver will acknowledge all further well-received
segments with an acknowledgment referring to the first byte of the missing
packet. The sender will stop transmitting when it has sent all the bytes in the
window. Eventually, a timeout will occur and the missing segment will be
retransmitted. Figure 1.19 illustrates an example where a window size of
1500 bytes and segments of 500 bytes are used.
Before any data can be transferred, a TCP connection has to be
established between the two processes. One of the processes (usually the
server) issues a passive OPEN call, the other an active OPEN call. The passive
OPEN call remains dormant until another process tries to connect to it by an
active OPEN.
On the network, three TCP segments are exchanged, which is why this
process is also known as the three-way handshake. Note that the exchanged TCP
segments include the initial sequence numbers from both sides, to be used on
subsequent data transfers.
Closing the connection is done implicitly by sending a TCP segment with
the FIN bit (no more data) set. Since the connection is full-duplex (that is, there
are two independent data streams, one in each direction), the FIN segment only
closes the data transfer in one direction. The other process will now send the
remaining data it still has to transmit and also ends with a TCP segment where
the FIN bit is set. The connection is deleted (status information on both sides)
once the data stream is closed in both directions.
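The passive/active OPEN pattern maps directly onto the socket API; a minimal sketch (running both sides in one process with a helper thread, which is an artifact of the example, not of TCP) might look like this, with the kernel performing the three-way handshake underneath `connect` and `accept`:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)                   # passive OPEN: wait for connections
addr = server.getsockname()

def serve():
    conn, peer = server.accept()   # completes when the handshake finishes
    conn.sendall(b"hello")
    conn.close()                   # sends FIN: closes this direction

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(addr)               # active OPEN: client must connect first
received = client.recv(1024)
print(received)                    # b'hello'
client.close()                     # FIN in the other direction
t.join()
server.close()
```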
One big difference between TCP and UDP is the congestion control
algorithm. The TCP congestion algorithm prevents a sender from overrunning the
capacity of the network (for example, slower WAN links). TCP can adapt the
sender's rate to network capacity and attempt to avoid potential congestion
situations. In order to understand the difference between TCP and UDP,
understanding basic TCP congestion control algorithms is very helpful.
1.2.4. UDP
The User Datagram Protocol (UDP) is a standard protocol with STD
number 6. UDP is described by RFC 768 User Datagram Protocol. Its status is
recommended, but in practice every TCP/IP implementation that is not used
exclusively for routing will include UDP. UDP is basically an application
interface to IP.
It adds no reliability, flow-control, or error recovery to IP. It simply serves
as a multiplexer/demultiplexer for sending and receiving datagrams, using ports
to direct the datagrams, as shown in Figure 1.22.
The Internet has become much more than just a network used to access
information. In the past decades, new important applications have emerged, such
as electronic commerce, voice over IP (VoIP), IPTV, WWW (or Web services),
social networking, and many more. In addition, many applications such as online
banking or online trading are business critical and time sensitive. As a result,
users and businesses that rely on the Internet infrastructure require a high
degree of reliability from the operators of the network. Reliability encompasses
the ability to offer the network users high-bandwidth and low-latency service in
the presence of accidental hardware failures or planned maintenance, and the
ability to deliver data securely even in the presence of malicious attacks on the Internet
infrastructure.
Network routing, the selection of paths to destinations, is perhaps one of
the most important features of the Internet that determines the performance,
security, and reliability of the network.
The routing algorithm defines which network path, or paths, are allowed
for each packet. Ideally, the routing algorithm supplies shortest paths to all
packets such that traffic load is evenly distributed across network links to
minimize contention.
In an IP packet-based network, two successive packets of the same user
pair may travel along different routes, and a routing decision is necessary for
each individual packet (see Figure 1.24). In a virtual circuit network, a routing
decision is made when each virtual circuit is set up. The routing algorithm is used
to choose the communication path for the virtual circuit. All packets of the virtual
circuit subsequently use this path up to the time that the virtual circuit is either
terminated or rerouted for some reason (see Figure 1.25).
However, some paths provided by the network topology may not be
allowed in order to guarantee that all packets can be delivered, no matter what
the traffic behavior. Paths that have an unbounded number of allowed
nonminimal hops from packet sources, for instance, may result in packets never
reaching their destinations. This situation is referred to as livelock. Likewise,
paths that cause a set of packets to block in the network forever waiting only for
network resources (i.e., links or associated buffers) held by other packets in the
set also prevent packets from reaching their destinations. This situation is
referred to as deadlock. As deadlock arises due to the finiteness of network
resources, the probability of its occurrence increases with increased network
traffic and decreased availability of network resources. For the network to
function properly, the routing algorithm must guard against this anomaly which
can occur in various forms, for example, routing deadlock, request-reply
(protocol) deadlock, and fault-induced (reconfiguration) deadlock. At the same
time, for the network to provide the highest possible performance, the routing
algorithm must be efficient, allowing as many routing options to packets as there
are paths provided by the topology, in the best case. The routing in a network
typically involves a rather complex collection of algorithms that work more or less
independently and yet support each other by exchanging services or information.
The complexity is due to a number of reasons. First, routing requires coordination
between all the nodes of the subnet rather than just a pair of modules as, for
example, in data link and transport layer protocols. Second, the routing system
must cope with link and node failures, requiring redirection of traffic and an
update of the databases maintained by the system. Third, to achieve high
performance, the routing algorithm may need to modify its routes when some
areas within the network become congested.
The two main functions performed by a routing algorithm are the selection
of routes for various origin-destination pairs and the delivery of messages to their
correct destination once the routes are selected. The second function is
conceptually straightforward using a variety of protocols and data structures
(known as routing tables). The focus will be on the first function (selection of
routes) and how it affects network performance.
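The second (delivery) function can be illustrated with a toy longest-prefix-match table lookup; the prefixes and next-hop names are invented:

```python
import ipaddress

# Minimal longest-prefix-match lookup, the data-structure side of the
# delivery function. The prefixes and next hops are made up.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "r1"),
    (ipaddress.ip_network("10.1.0.0/16"), "r2"),
    (ipaddress.ip_network("0.0.0.0/0"), "r3"),   # default route
]

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert lookup("10.1.2.3") == "r2"   # /16 beats /8
assert lookup("10.2.0.1") == "r1"
assert lookup("192.0.2.1") == "r3"  # falls through to the default route
```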
There are two main performance measures that are substantially affected
by the routing algorithm: throughput (quantity of service) and average packet
delay (quality of service). Routing interacts with flow control in determining these
performance measures by means of a feedback mechanism shown in Figure
1.26 (As good routing keeps delay low, flow control allows more traffic into the
network).
When the traffic load offered by the external sites to the subnet is
relatively low, it will be fully accepted into the network, that is,
throughput = offered load
When the offered load is excessive, a portion will be rejected by the flow
control algorithm and
throughput = offered load - rejected load
The traffic accepted into the network will experience an average delay per
packet that will depend on the routes chosen by the routing algorithm. However,
throughput will also be greatly affected (if only indirectly) by the routing algorithm
because typical flow control schemes operate on the basis of striking a balance
between throughput and delay (i.e., they start rejecting offered load when delay
starts getting excessive). Therefore, as the routing algorithm is more successful
in keeping delay low, the flow control algorithm allows more traffic into the
network. While the precise balance between delay and throughput will be
determined by flow control, the effect of good routing under high offered load
conditions is to realize a more favorable delay-throughput curve along which flow
control operates, as shown in Figure 1.27.
There are a number of ways to classify routing algorithms. One way is to
divide them into centralized and distributed. In centralized algorithms, all route
choices are made at a central node, while in distributed algorithms the
computation of routes is shared among the network nodes with information
exchanged between them as necessary.
Figure 1.27 Delay-throughput operating curves for good and bad routing.
Figure 1.28 The usage of a BGP (Border Gateway Protocol) routing protocol.
In customer-provider relationships, the customer has to pay the provider for all
traffic that traverses the link between the ASes, no matter what the direction of
the traffic. In peer-peer relationships,
the peers forward traffic for each other free of charge. The nature of business
relationships determines which routes are preferred by ASes. For example,
given the choice between a customer, peer and provider route, the AS will prefer
the customer route which is the most profitable. Business relationships also play
a role even after a BGP speaker selects the single route to the destination that it
prefers - a BGP speaker will not announce a provider route to another provider
as it would have to pay both providers for the transit traffic. For this reason ASes need the flexibility to choose among multiple paths, and the option to
announce the selected path to an arbitrary subset of their neighbors. BGP allows
such flexibility - if an AS learns about multiple routes from its neighbors, it can
apply an arbitrary policy to choose the preferred path, and decide which
neighbors to announce the path to.
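This policy logic can be sketched in a few lines; the preference ranking and the export rule follow the commonly described Gao-Rexford conditions, and the AS paths below are made up:

```python
# Sketch of business-relationship route preference: customer routes are
# preferred over peer routes, peer over provider; export rules keep a
# provider- or peer-learned route from being announced except to customers.
PREFERENCE = {"customer": 3, "peer": 2, "provider": 1}

def best_route(routes):
    # routes: list of (relationship_of_neighbor, as_path);
    # prefer the relationship first, then the shorter AS path
    return max(routes, key=lambda r: (PREFERENCE[r[0]], -len(r[1])))

def may_announce(learned_from, announce_to):
    # Routes learned from customers go to everyone; routes learned from
    # peers or providers go only to customers (no free transit).
    return learned_from == "customer" or announce_to == "customer"

routes = [("provider", [3, 7]), ("peer", [5, 7]), ("customer", [9, 4, 7])]
assert best_route(routes)[0] == "customer"       # most profitable route wins
assert may_announce("provider", "provider") is False  # would pay both providers
assert may_announce("provider", "customer") is True
assert may_announce("customer", "peer") is True
```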
BGP is a protocol based on trust. When a route announcement is
received, autonomous systems cannot verify whether a path announced by a
neighboring BGP speaker corresponds to an existing physical path, and whether
that path is available to the neighbor. For this reason, BGP is extremely
vulnerable to malicious attacks where an attacker compromises a router to make
false routing announcements, and to misconfigurations where a speaker
mistakenly announces an incorrect route.
Network operators need intradomain routing protocols that ensure network
connectivity even as the network topology changes due to link additions,
hardware failures, or during planned equipment maintenance. In addition,
network operators desire to balance the load in their networks to avoid
congestion. One protocol satisfying these goals is Open Shortest Path First
(OSPF) [25]. OSPF is a link state routing protocol, i.e., a protocol that collects
information from routers about their connectivity (the state of their links). Then,
the routers construct a graph representing the network, and traffic is sent on the
shortest path according to link weights that were pre-assigned to each link. If a
router finds multiple shortest paths, traffic is split evenly on the outgoing links.
The link state information is maintained by each router and if it changes, it
is flooded in the network. The benefits of using OSPF include the ability to react
to link failures - when a link fails the information is immediately flooded in the
network and all of the routers can compute new shortest paths that avoid the
failed link. Furthermore, proper link weight assignment allows load balancing.
However, OSPF only allows traffic to be split across paths of the same minimal cost.
This approach does not allow much flexibility, and if the same link weights are
used before and after a failure, the performance may be suboptimal. Moreover,
finding appropriate link weights is computationally hard.
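The shortest-path computation at the heart of OSPF can be sketched with Dijkstra's algorithm over pre-assigned link weights; the four-router topology below is invented:

```python
import heapq

# Dijkstra's algorithm over pre-assigned link weights, as each OSPF
# router runs on the graph it builds from flooded link state.
def shortest_paths(graph, src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist = shortest_paths(graph, "A")
assert dist["D"] == 4          # A-B-C-D (1+2+1), cheaper than A-B-D (1+5)
```

When several paths tie at the minimal cost, OSPF splits traffic evenly across them, which is exactly the equal-cost restriction criticized above.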
Multiprotocol Label Switching (MPLS) [26] is a routing protocol that can be
used to provide control over which flows traverse which paths. MPLS attaches
labels to data packets, and forwarding decisions are made based purely on the
content of the label. When a packet is received by a router, a label swap
operation is performed. The old label is popped and another label is pushed on
top of the label stack, and the packet is forwarded to the appropriate neighbor.
An advantage of MPLS is that it can be applied to all data packets, such as ATM,
SONET or Ethernet packets, irrespective of the lower-layer details of the
corresponding protocols and technologies. MPLS can be used in conjunction with
any standard IP routing algorithm to determine the routes that should be used.
MPLS is often used in conjunction with OSPF and RSVP [27]. OSPF is used to
calculate the desired set of routes, as described above, and the Resource
Reservation Protocol (RSVP) is then used to configure the routers on the end-to-end paths. When a link fails, several mechanisms can be used to recover from
the failure. Local path protection mechanisms are used to redirect traffic from a
failed link onto an alternate path that connects the two link end points. An
example of local path protection is MPLS Fast Reroute.
The router that manages the backup path is called the Point of Local
Repair (PLR), and the router where the backup path merges with the original
path is called the Merge Point (MP). The primary benefit of Fast Reroute is its
speed because the PLR can start forwarding packets on the pre-calculated
backup path immediately after the failure is detected. Unfortunately, Fast Reroute
often does not provide adequate performance because it can cause congestion
in the neighborhood of the failed link. A more flexible mechanism that allows
some end-to-end path restructuring is needed to balance the load more evenly.
For this reason, network operators are often forced to perform end-to-end route
re-optimization after a failure event.
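A toy model of the label-swap forwarding described above, extended with a pre-computed Fast Reroute backup entry; the labels, router names, and failure state are all illustrative:

```python
# Label-swap forwarding with a pre-calculated Fast Reroute backup entry.
# Labels, neighbors, and the failure set are invented for illustration.
lfib = {
    # in_label: (primary_out_label, primary_next_hop,
    #            backup_out_label,  backup_next_hop)
    20: (30, "r2", 40, "r5"),
}
failed_links = set()

def forward(in_label, node="r1"):
    out, nh, b_out, b_nh = lfib[in_label]
    if (node, nh) in failed_links:
        # PLR behavior: switch to the pre-calculated backup immediately,
        # with no route recomputation needed
        return b_out, b_nh
    return out, nh                      # normal swap: pop old label, push new

assert forward(20) == (30, "r2")
failed_links.add(("r1", "r2"))          # link r1-r2 goes down
assert forward(20) == (40, "r5")        # traffic shifts to the backup path
```

The speed advantage is visible in the sketch: the backup entry already exists, so the switch happens on the very next packet, but all rerouted traffic converges on the same backup neighbor, which is why congestion near the failure can follow.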
always higher in the name hierarchy. So, in the given example the top-level
domain is the domain "com".
The domain name of any node in the tree is the sequence of node labels
leading from that node all the way up to the root domain.
Furthermore, each node in the tree has one or more resource records
(RRs), which hold information about the domain name (for instance, the IP
address of www.incognito.com).
RRs can store a large variety of information about a domain: IP address,
name server, mail exchanger, alias, hostname, geo-location, service discovery,
certificates, and arbitrary text.
RRs contain information such as:
Start-of-Authority (SOA) Record
When a zone file indicates to a querying server that this is the authoritative
record for this domain, it says to the query, "You Have Arrived." The SOA
contains the following data fields:
Serial Number: indicates the number of changes to the zone file; the
number increases as the file is updated.
Refresh: tells the name server how often to check for updated data.
Retry: tells the server when to retry if it is unable to refresh the data.
Expire: tells how long the data can sit before it is too old to be valid.
Time to Live: tells other servers how long to cache the data they have
downloaded.
Name Server (NS) Record
An NS record indicates which computer is to be used to
retrieve information about the domain name space for a particular domain name.
A Host Name Server contains information about your computer and supplies IP
addresses that are associated with it.
Mail eXchange (MX) Record
MX records specify the mail server address for the domain name. This
record allows email addressed to a specific domain to be delivered to the mail
server that is responsible for it. The mail server is a host address. There can be a
number of mail servers associated with an MX record. Each server has a priority
set for mail receipt.
Address (A) Record
This record tells the name server the correct IP address for the domain.
The name server that is authoritative for the domain contains all the information
necessary to resolve this name.
Canonical (C-NAME) Record
CNAME records provide name-to-name mapping for domain name
aliasing. The difference between CNAME and A records is that a
CNAME resolves to another domain name, which then resolves to an IP
address.
Furthermore, the Name Servers (NSs) generally store complete
information about a zone. There are two types of name servers: primary and
secondary. Every zone MUST have its data stored on both a primary and a
secondary name server.
The Primary name servers hold authoritative information about a set of
domains, as well as cached data about domains previously requested from other
servers. Each name server stores a portion of the overall name space (a zone
file), and can contact other name servers to lookup names outside its name
space. The name server listens for DNS queries, and if the queried name is in
the local zone data or cache, responds immediately with an answer. If the name
isn't in the local database or cache, the server uses its resolver to forward the
query to other authoritative name servers.
If domain data changes, the primary name server is responsible for
incrementing the Serial Number field in the SOA record in order to signal the
change to secondary name servers.
On the other hand, the Secondary name servers can download a copy of
zone information from a primary name server using a process called a zone
transfer. Zone transfers allow secondary name servers to download complete
copies of zones. Secondary name servers perform zone transfers according to
the Refresh parameter in the SOA record.
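The secondary's transfer decision can be sketched as a serial-number comparison plus a periodic check timer; the field names follow the SOA record and all values below are invented:

```python
# Sketch of a secondary name server's zone-transfer decision: poll the
# primary's SOA periodically and transfer only if the serial number grew.
# The serial numbers and interval values are invented examples.
def needs_transfer(local_serial, primary_serial):
    # the primary increments the serial on every zone change
    return primary_serial > local_serial

def next_check(last_check, refresh):
    # schedule the next SOA poll one refresh interval later
    return last_check + refresh

assert needs_transfer(2024010101, 2024010102) is True   # primary was updated
assert needs_transfer(2024010102, 2024010102) is False  # serials match, skip
assert next_check(1000, 3600) == 4600
```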
In order to resolve the IP address of a domain name, a name server works
on the domain name segment by segment, from highest-level domain appearing
on the right, to lowest-level domain on the left. The resolver usually has to query
several servers (in recursive or iterative way) that are authoritative for various
portions of the domain name to find all the necessary information.
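The right-to-left walk over the name segments can be sketched as follows; the name is hypothetical and no real queries are made:

```python
# The order in which a resolver walks a domain name, right to left:
# root zone first, then "com", then "example.com", then the full name.
def query_order(name):
    labels = name.rstrip(".").split(".")
    zones = ["."]                                # start at the root
    for i in range(len(labels) - 1, -1, -1):
        zones.append(".".join(labels[i:]))       # add one more label each step
    return zones

assert query_order("www.example.com") == [
    ".", "com", "example.com", "www.example.com"
]
```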
One of the inherent features of DNS is the ability to store recently retrieved
domain names, a process called caching. This process is useful for speeding
up the resolution process. Each time a name server learns the authoritative
name servers for a zone and the addresses of those servers, it can cache this
information to help speed-up subsequent queries. Thus, the next time a resolver
queries for the same domain name, the name server is able to respond
immediately because the answer is stored in its cache.
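A toy caching resolver illustrating this mechanism; the upstream function stands in for a real DNS query, and the address comes from the documentation range:

```python
import time

# A toy caching wrapper: answer from the cache while the TTL has not
# expired, otherwise ask the (stubbed) upstream resolver again.
cache = {}

def resolve(name, upstream, ttl=300, now=None):
    now = time.time() if now is None else now
    if name in cache and cache[name][1] > now:
        return cache[name][0]                    # cache hit, no query sent
    addr = upstream(name)                        # cache miss: query upstream
    cache[name] = (addr, now + ttl)
    return addr

calls = []
def fake_upstream(name):                         # stands in for a real query
    calls.append(name)
    return "192.0.2.10"

assert resolve("www.example.com", fake_upstream, now=0) == "192.0.2.10"
assert resolve("www.example.com", fake_upstream, now=100) == "192.0.2.10"
assert calls == ["www.example.com"]              # second lookup came from cache
assert resolve("www.example.com", fake_upstream, now=400) == "192.0.2.10"
assert len(calls) == 2                           # TTL expired, upstream asked again
```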
Finally, the DNS system is a fundamental piece of the Internet framework.
The hierarchical structure of the DNS name space, worldwide network of name
servers, and efficient local caches allow broadband operators to provide high-speed, user-friendly Internet communications.
1.4.2. DHCP/DHCPv6
The Dynamic Host Configuration Protocol (DHCP) is a standardized
network protocol used on the Internet for dynamically distributing network
configuration parameters, such as IP addresses (IPv4 and IPv6) for interfaces
and services. With DHCP, computers automatically request IP addresses and
networking parameters from a DHCP server, reducing the need for a network
administrator or a user to configure these settings manually.
The purpose of DHCP is to provide the automatic (dynamic) allocation of
IP client configurations for a specific time period (called a lease period) and to
eliminate the work necessary to administer a large IP network.
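A minimal sketch of lease-based allocation, assuming an invented address pool and client identifiers:

```python
# Toy DHCP lease pool: hand out addresses for a fixed lease period and
# reclaim them when the lease expires. Addresses and times are invented.
class LeasePool:
    def __init__(self, addresses, lease_period):
        self.free = list(addresses)
        self.leases = {}                 # client_id -> (address, expiry)
        self.lease_period = lease_period

    def allocate(self, client_id, now):
        # reclaim any expired leases first
        for cid, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(addr)
        if client_id in self.leases:     # a renewing client keeps its address
            addr, _ = self.leases[client_id]
        else:
            addr = self.free.pop(0)
        self.leases[client_id] = (addr, now + self.lease_period)
        return addr

pool = LeasePool(["10.0.0.10", "10.0.0.11"], lease_period=60)
assert pool.allocate("aa:bb", now=0) == "10.0.0.10"
assert pool.allocate("cc:dd", now=10) == "10.0.0.11"
assert pool.allocate("aa:bb", now=30) == "10.0.0.10"   # renewal, same address
assert pool.allocate("ee:ff", now=200) in ("10.0.0.10", "10.0.0.11")  # expired leases reclaimed
```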
DHCP was created by the Dynamic Host Configuration Working Group of
the Internet Engineering Task Force (IETF: a volunteer organization which
defines protocols for use on the Internet).
When connected to a network, every computer must be assigned a unique
address. However, when adding a machine to a network, the assignment and
configuration of network (IP) addresses has required human action. The
computer user had to request an address, and then the administrator would
manually configure the machine. Mistakes in the configuration process are easy
for novices to make, and can cause difficulties for both the administrator making
the error as well as neighbors on the network. Also, when mobile computer users
travel between sites, they have had to repeat this process at each different site
from which they connected to a network. In order to simplify the process of
each other on the same link only (same side of the router). This is the default
behavior of most Linux IPv6 clients: they self-assign addresses and, unless
configured to do so after the fact, do not utilize DHCPv6 at all. This does help
keep network chatter to a minimum, but may leave clients unable to talk to
everywhere they want to go as they could be missing important pieces, such as
the DNS servers that may reside on a different network.
DHCPv6 works a lot like DHCPv4 did, especially from a fingerprinting
perspective. The basic concept is to find and request an IPv6 address and then
have the ability to ask for other pieces of information you may need.
The main difference between the DHCPv6 and DHCPv4 Message types is
given in Table 1.6.
Table 1.6 DHCPv6 vs DHCPv4 Message types.
All IPv6 systems support multicasting. All DHCPv6 servers register that
they want to receive DHCPv6 multicast packets. This means the network knows
where to send them. In IPv4, clients broadcast their requests, and networks do
not know how far to send them.
One exchange configures all interfaces. A single DHCPv6 request may
include all interfaces on a client. This allows the server to offer addresses to all
interfaces in a single exchange. Each interface may also have different options.
Defines address allocation types.
DHCPv6 allows normal address allocation, as well as temporary address
allocation. In a sense, all addresses are "temporary", but in this case the term
refers to IPv6 privacy addresses. DHCPv6 does not have as many options
defined as DHCP for IPv4, but there are quite a few.
You can find these by searching the IETF RFCs, and they include:
IPv6 address, IPv6 prefix
Rapid commit
Vendor-specific options extension
SIP servers
DNS servers & search options
NIS configuration
SNTP servers
Finally, DHCPv6 has a place in IPv6 networks. It is significantly
improved over DHCP in IPv4, and is useful either instead of or in addition to
stateless autoconfiguration. The software is there today, and getting better and
better.
Hypertext alone is not practical when dealing with large sets of structured
information such as those contained in databases: adding search to the
hypertext model gives W3 its full power (Figure 1.36). Indexes are special
documents which, rather than being read, may be searched. To search an index,
a reader gives keywords (or other search criteria). The result of a search is
another document containing links to the documents found.
The architecture of WWW (Figure 1.39) is one of browsers (clients) which
know how to present data but not what its origin is, and servers which know how
to extract data but are ignorant of how they will be presented. Servers and clients
are unaware of the details of each other's operating system quirks and exotic
data formats.
All the data in the Web is presented with a uniform human interface
(Figure 1.40). The documents are stored (or generated by algorithms) throughout
the internet by computers with different operating systems and data formats.
Following a link from the SLAC home page (the entry into the Web of a SLAC
user) to the NIKHEF telephone book is as easy and quick as following the link to
a SLAC Working Note.
All communication in the Web between clients and servers is based on the
Hypertext Transfer Protocol (HTTP). HTTP is a relatively simple client-server
protocol; a client sends a request message to a server and waits for a response
message. An important property of HTTP is that it is stateless. In other words, it
does not have any concept of open connection and does not require a server to
maintain information on its clients.
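The request/response exchange can be illustrated by constructing the message text itself; the host name is a placeholder and nothing is sent on the network:

```python
# The text of a minimal HTTP/1.1 request and a parser for the response
# status line. Every request is self-contained, which is what
# "stateless" means in practice.
def build_request(host, path):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")                              # blank line ends the headers

def parse_status(response_first_line):
    version, code, *reason = response_first_line.split(" ")
    return version, int(code), " ".join(reason)

req = build_request("www.example.com", "/index.html")
assert req.startswith("GET /index.html HTTP/1.1\r\n")
assert "Host: www.example.com" in req
assert parse_status("HTTP/1.1 200 OK") == ("HTTP/1.1", 200, "OK")
```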
Figure 1.41 (a) Using nonpersistent connections. (b) Using persistent connections.
As more and more of the Web is becoming remixable, the entire Internet
system is turning into both a platform and a database. Yet, such
transformations are never smooth. For one, scalability is a big issue. And of
course legal aspects are never simple. But it is not a question of if web sites
become web services, but when and how. APIs are a more controlled, cleaner
and altogether preferred way of becoming a web service. However, when APIs
are not available or sufficient, scraping is bound to continue and expand. At the
same time, all possibilities and ideas for future Web services are open, and as
always time will be the best judge.
E-mail service
The e-mail service - interactions between e-mail servers and clients are
governed by e-mail protocols. The three most common e-mail protocols are POP,
IMAP and MAPI. One example of using the Post Office Protocol 3 (POP3) service
together with the Simple Mail Transfer Protocol (SMTP) service, which sends
outgoing e-mail, is given in Figure 1.44. Most e-mail software operates under one
of these protocols (and many products support more than one). Because the
correct protocol must be selected, and correctly configured, for an e-mail account
to work, it is worth knowing the basics of these protocols. The Post Office
Protocol (currently in version 3, hence POP3) allows e-mail client software to
retrieve e-mail from a remote server.
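Using Python's standard email module, the message an SMTP client would hand to the server can be sketched as follows; the addresses are placeholders and nothing is actually sent (smtplib would be used for real delivery):

```python
from email.message import EmailMessage

# What an SMTP client hands to the server: a message with headers and a
# body. smtplib.SMTP(...).send_message(msg) would transmit it; the
# addresses here are placeholders and nothing is sent.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP, to be retrieved later via POP3 or IMAP.")

text = msg.as_string()
assert "Subject: Hello" in text
assert "bob@example.com" in text
```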
FTP service
FTP (File Transfer Protocol) - this was one of the first Internet services
developed and it allows users to move files from one computer to another. Using
the FTP program, a user can log on to a remote computer, browse through its
files, and either download or upload files (if the remote computer allows). These
can be any type of file, but the user is only allowed to see the file name; no
description of the file content is included. You might encounter the FTP protocol if
you try to download any software applications from the World Wide Web. Many
sites that offer downloadable applications use the FTP protocol. An example of
an FTP protocol window is given in Figure 1.46.
BitTorrent
BitTorrent is a network protocol that facilitates decentralized (or
distributed) file sharing over the Internet (see Figure 1.47). In this way it is similar
to the functionality provided by traditional peer-to-peer (P2P) applications like
Napster and Kazaa in the late 1990s and early 2000s. However, BitTorrent differs fundamentally from
these older P2P sharing applications because it introduces components such as
BitTorrent websites, torrents, trackers, seeders, and leeches. BitTorrent is also
unique in how it efficiently uses bandwidth to achieve high data transfer rates. If
the file you want is available from multiple hosts, BitTorrent establishes
connections with them and downloads chunks of the file simultaneously.
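The chunked, multi-source download idea can be sketched with simulated peers holding different pieces of one file:

```python
# Toy version of BitTorrent's key idea: fetch different chunks of one
# file from different peers and reassemble them by index. The peers are
# simulated as dictionaries mapping chunk index to bytes.
def download(peers, num_chunks):
    chunks = {}
    for i in range(num_chunks):
        # pick any peer that has chunk i (a real client also balances
        # load and requests rare chunks first)
        for peer in peers:
            if i in peer:
                chunks[i] = peer[i]
                break
    return b"".join(chunks[i] for i in range(num_chunks))

peer_a = {0: b"BitTor", 2: b"rotoco"}
peer_b = {1: b"rent p", 3: b"l"}
assert download([peer_a, peer_b], 4) == b"BitTorrent protocol"
```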
Skype service
One of the most important emerging trends in telecommunications, whose
development represents a major change in the emerging information and
communication technologies, is undoubtedly Voice over IP (VoIP), the
transmission of voice over packet-switched IP networks. VoIP has developed
considerably in recent years and is gaining widespread public recognition and
adoption through consumer solutions such as Skype and BT's strategy of moving
to an IP-based network.
But let us begin with the basic essence of VoIP. VoIP uses the IP
protocols, originally designed for the Internet, to break voice calls up into digital
packets. In order for a call to take place, the separate packets travel over an IP
network and are reassembled at the far end. The breakthrough was in being able
to transmit voice calls, which are much more sensitive to any time delays or
problems on the network, in the same way as data. High-availability solutions for
VoIP networks address the need for users to be able to place and receive calls
under peak-load call rates or during device maintenance or failure. In addition to
lost productivity, voice-network downtime often results in lost revenue, customer
dissatisfaction, and even a weakened market position. Various situations can
take devices off line, ranging from planned downtime for maintenance to
catastrophic failure. There are two key elements that contribute to availability in a
VoIP network: capacity and redundancy. These concepts will not be explored
further here; we will just mention that in VoIP
communications the most frequently used protocol is the Session Initiation Protocol
(SIP). SIP is a signalling communications protocol, widely used for controlling
multimedia communication sessions such as VoIP and video calls over IP
networks. The protocol defines the messages that are sent between peers which
govern establishment, termination and other essential elements of a call. SIP can
be used for creating, modifying and terminating two-party (unicast) or multiparty
(multicast) sessions consisting of one or several media streams. Other SIP
applications include video conferencing, streaming multimedia distribution,
instant messaging, presence information, file transfer, fax over IP and online
games. Originally designed by Henning Schulzrinne and Mark Handley in 1996,
SIP has been developed and standardized in RFC 3261 under the auspices of
the Internet Engineering Task Force (IETF).
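Since SIP is text-based like HTTP, the first lines of an INVITE defined by RFC 3261 can be sketched by building the message directly; the URIs and Call-ID below are placeholders:

```python
# The first lines of a SIP INVITE: a request line plus headers, in the
# same text-based style as HTTP. URIs and Call-ID are placeholders.
def build_invite(caller, callee, call_id):
    return (f"INVITE sip:{callee} SIP/2.0\r\n"
            f"From: <sip:{caller}>\r\n"
            f"To: <sip:{callee}>\r\n"
            f"Call-ID: {call_id}\r\n"
            "CSeq: 1 INVITE\r\n"
            "\r\n")

invite = build_invite("alice@example.com", "bob@example.com", "a84b4c76")
assert invite.startswith("INVITE sip:bob@example.com SIP/2.0")
assert "CSeq: 1 INVITE" in invite
```

A real INVITE also carries Via headers and an SDP body describing the media streams; the sketch shows only the signalling skeleton.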
For example, Skype is a peer-to-peer VoIP client developed by the
creators of KaZaA that allows its users to place voice calls and send text
messages to other users of Skype clients (see Figure 1.48). In essence, it is very
similar to the MSN and Yahoo IM applications, as it has capabilities for voice
calls, instant messaging, audio conferencing, and buddy lists.
Figure 1.48 Illustration of the skype network architecture. There are three main entities:
supernodes, ordinary nodes, and the login server.
First released in August 2003, Skype was created by Dane Janus Friis
and Swede Niklas Zennström in cooperation with Estonians Ahti Heinla, Priit
Kasesalu, and Jaan Tallinn, who developed the backend, which was also used in
music-sharing application Kazaa. Registered users of Skype are identified by a
unique Skype Name and may be listed in the Skype directory. Skype allows
these registered users to communicate through both instant messaging and
voice chat. Voice chat allows telephone calls between pairs of users and
conference calling and uses a proprietary audio codec. Skype's text chat client
allows group chats, emoticons, storing chat history, and editing of previous
messages. Offline messages were implemented in a beta of version 5 but
removed after a few weeks without notification. The usual features familiar to
instant messaging users (user profiles, online status indicators, and so on) are
also included.
However, the underlying protocols and techniques it employs are quite
different.
Furthermore, the main factors that have promoted Skype (VoIP), and its
main barriers, are presented below. The main factors that have been
promoting VoIP include:
Low cost/no cost software (softphone and configuration tools) for PCs and
PDAs;
Wide availability of analogue telephone adapters;
Growing availability of broadband, wireless hot spots and other forms of
broadband access;
Packetised voice enables much more efficient use of the network (bandwidth
is only used when something is actually being transmitted);
The VoIP network can handle connections from many applications and many
users at the same time (unlike the dedicated circuit-switched approach);
Relatively high cost of PSTN calls.
On the other hand, the main barriers opposing Skype and other VoIP services
include:
High quality and reliability of the PSTN;
VoIP quality of service can be variable;
Lack of intrinsic QoS in many of IP networks around the world;
Many challenges in wireless VoIP users;
Some VoIP feature, service, and service-provider
interconnection limitations;
Relative difficulty in setup and use;
End-2-end integrity of the signalling and bearer path problems;
Introduction of call plans and flat rates charges by traditional PSTN
operators.
Figure 1.49 Worldmap of super nodes to which Skype establishes a TCP connection at
login.
Overall, Skype is a selfish application that tries to obtain the best
available network and CPU resources for its execution. It changes its application
priority to high in Windows while a call is established. It evades
blocking by routing its login messages over Super Nodes (shortly SNs; the super
nodes worldmap is presented in Figure 1.49). This also implies that Skype is
relying on SNs, who can misbehave, to route login messages to the login server.
Skype does not allow a user to prevent its machine from becoming an SN,
although it is possible to prevent Skype from becoming an SN by putting a
bandwidth limiter on the Skype application when no call is in progress.
Theoretically speaking, if all Skype users decided to put a bandwidth limiter on
their application, the Skype network could possibly collapse, since the SNs hosted
by Skype may not have enough bandwidth to relay all calls.
Youtube
YouTube is a video-sharing website headquartered in San Bruno,
California, United States. The service was created by three former PayPal
employees in February 2005. In November 2006, it was bought by Google for
US$1.65 billion. The site allows users to upload, view, and share videos, and it
makes use of WebM, H.264, and Adobe Flash Video technology to display a
wide variety of user-generated and corporate media video. Available content
includes video clips, TV clips, music videos, and other content such as video
blogging, short original videos, and educational videos. The YouTube video
download mechanisms are illustrated with one example of a possible session
when accessing youtube.com from a PC (top) and m.youtube.com from a
smartphone (bottom) in Figure 1.50.
exploit positive opportunities and benefits of new and emerging services, but also
the first to have to negotiate appropriate behaviours within new communities, and
to have to identify and manage risk.
The most popular dedicated social network sites worldwide are
Facebook, MySpace, Twitter, LinkedIn, Instagram, Google+, Bebo, Vkontakte,
Odnoklassniki, etc. Also, the most popular social networking sites by country
are given in Figure 1.51. These types of social networking services are profile-focused: activity centres on web pages that contain information about the
activities, interests and likes (and dislikes) of each member. While the number of
visitors to social networking sites is increasing, so too are the numbers of new
services being launched, along with the number of longstanding (within the
relatively brief lifespan of the internet) websites that are adding, developing or
refining social network service features or tools. The ways in which we connect
to social networking services are expanding too. Games-based and mobile
phone-based social networking services that interact with existing web-based
platforms, or with new mobile-focused communities, are rapidly developing
areas.
Figure 1.51 Illustration of the most popular social networking sites by country.
Figure 1.52 Illustration of the a) Open neutral access model; b) Non-neutral access
model [62].
There are many reasons why Net Neutrality is not respected, among the
most frequent ones are:
Access providers violate Net Neutrality to optimise profits: Some
Internet access providers demand the right to block or slow down Internet traffic
for their own commercial benefit. Internet access providers are not only in control
of Internet connections, they also increasingly start to provide content, services
and applications. They are increasingly looking for the power to become the
gatekeepers of the Internet.
Access providers violate Net Neutrality to comply with the law:
Governments are increasingly asking access and service providers to restrict
certain types of traffic, to filter and monitor the Internet to enforce the law. A
decade ago, there were only four countries filtering and censoring the Internet
worldwide; today, there are over forty. In Europe, website blocking has been
introduced for instance in Belgium, France, Italy, the UK and Ireland. This is done
for reasons as varied as protecting national gambling monopolies and
implementing demonstrably ineffective efforts to protect copyright. Some
politicians call for Net Neutrality and demand filtering or blocking for law
enforcement purposes at the same time. However, it is a paradox to create legal
incentives for operators to invest in monitoring and filtering or blocking
technology, while at the same time demanding that they do not use this
technology for their own business purposes.
Access providers violate Net Neutrality for privatised censorship: In
the UK, blocking measures by access providers have frequently been misused to
block unwanted content.
Despite all these violations of Net Neutrality, we give below the 10
reasons for network neutrality:
1) No discrimination: Net Neutrality is the principle that all types of
content and all senders and recipients of information are treated
equally.
2) Free Expression: The history of the Internet shows very clearly that
Net Neutrality encourages creative expression. The ability to publish
content and to express opinions online does not depend on financial or
social status and is not restricted to an elite. There is a huge trend
towards people sharing information and experiences online,
sometimes referred to as web 2.0.
3) Privacy: Measures to undermine Net Neutrality can have a direct
impact on our privacy. In a non-neutral Internet, providers would be
able to monitor our communications in order to differentiate between
messaging, streaming, P2P, e-mails and so on.
4) Access to Information: Net Neutrality is also the catalyst for the
creation of diverse and abundant online content. Non-profit projects
like Wikipedia, blogs and user-generated content in general have the
same conditions to access and publish information as large,
commercial Internet players. Without Net Neutrality, we would have a
two-tier Internet where only those who can pay would be able to
access information or get content delivered faster than other users.
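The privacy concern in reason 3 rests on a technical fact: to treat messaging, streaming, P2P and e-mail differently, a provider must first classify each flow. The sketch below illustrates the crudest form of such classification, using only transport-layer metadata; the port-to-class map and the throttling policy are illustrative assumptions (real systems also use deep packet inspection, which is precisely what raises the privacy issue).

```python
# Sketch: how a non-neutral provider could sort traffic into classes from
# destination ports alone. Ports below are common well-known defaults,
# used here purely for illustration.

PORT_CLASSES = {
    25: "e-mail (SMTP)",
    143: "e-mail (IMAP)",
    443: "web/streaming (HTTPS)",
    1935: "streaming (RTMP)",
    5222: "messaging (XMPP)",
    6881: "p2p (BitTorrent)",
}

# A hypothetical per-class policy: throttle P2P, forward everything else.
POLICY = {"p2p (BitTorrent)": "throttle"}

def classify(dst_port: int) -> str:
    """Return a coarse traffic class for a destination port."""
    return PORT_CLASSES.get(dst_port, "unclassified")

def action_for(dst_port: int) -> str:
    """Look up the policy action for the flow's traffic class."""
    return POLICY.get(classify(dst_port), "forward")

if __name__ == "__main__":
    for port in (25, 443, 6881, 9999):
        print(port, classify(port), action_for(port))
```

Even this toy version shows why differentiation and monitoring are inseparable: the policy cannot be applied without first inspecting every flow's headers.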
Internet regulation can be divided into three spheres: direct regulation of the
Internet infrastructure itself; regulation of activities that can be conducted only
over the Internet; and regulation of activities which can be, but need not be,
conducted over the Internet.
The first sphere: Direct regulation of the internet infrastructure itself,
including:
a. the standards of communication,
b. the equipment used to provide and access Internet
communication,
c. intermediaries engaged in the provision of Internet
communications, e.g. Internet Service Providers (ISPs)
The second sphere: Regulation of activities that can be conducted
only over the internet and which have no significant off-line
analogues. An example is the regulation of anonymous online
communication via anonymizing re-mailers.
The third sphere: Finally, there is the regulation of the enormous
category of activities which may or may not be conducted over the
internet, e.g. e-commerce in both tangible and intangible goods. In
many cases the Internet version of an activity often will simply be
swept up in the general regulation of the type of conduct.
(a) In some cases, however, the Internet version may be subject to
special or additional regulation because the use of the Internet is seen as
somehow aggravating an underlying problem or offense. An example of this is
US attempts to regulate the provision of obscene or "indecent" content to minors
via the Internet.
(b) In other cases, there may be attempts to craft special regulations
for the Internet version of an activity because of fears that its international
character (and concomitant regulatory arbitrage), the ease of anonymization, or
the elimination of formerly prohibitive transactions costs changes the danger,
incidence, or character of the activity -- or, most commonly, makes the
enforcement of the pre-existing rules difficult or impossible. Examples of this
include attempts to regulate peer-to-peer sharing of material copyrighted by
others and regulation (or in some cases discouragement) of e-cash.
These spheres of regulation are obviously related in many ways. What
matters most for current purposes, however, is that this schema underlines why
approaches to the first sphere of regulation, direct regulation of the infrastructure,
have two sometimes radically different sets of motives even though the
regulatory techniques and tools often may overlap or even interfere with one
another.
On the one hand, some regulatory (or de-regulatory) strategies pursue
goals that are primarily internal to the first sphere. For example, as described
below, the current Internet architecture depends on the unique assignment of
Internet Protocol numbers; the regulation of the mechanisms that control
assignment of these potentially valuable resources -- and which determine when
and how the underlying standards might be modified -- is a matter of critical
importance to the Internet, one that is (currently) internal to the first sphere.
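The dependence on unique assignment of Internet Protocol numbers, noted above, can be stated as a simple invariant: delegated address blocks must never overlap, or routing becomes ambiguous. A minimal sketch of checking that invariant with Python's `ipaddress` module; the prefixes and registry names are illustrative documentation addresses, not real allocations.

```python
# Sketch: the uniqueness invariant behind IP number assignment.
# IANA delegates blocks to registries; a single overlap between two
# delegations would make routes for the shared range ambiguous.
import ipaddress

# Hypothetical, disjoint delegations (documentation prefixes only).
allocations = {
    "registry-A": ipaddress.ip_network("203.0.113.0/25"),
    "registry-B": ipaddress.ip_network("203.0.113.128/25"),
    "registry-C": ipaddress.ip_network("198.51.100.0/24"),
}

def find_overlaps(allocs):
    """Return pairs of holders whose address blocks overlap."""
    items = list(allocs.items())
    clashes = []
    for i, (name1, net1) in enumerate(items):
        for name2, net2 in items[i + 1:]:
            if net1.overlaps(net2):
                clashes.append((name1, name2))
    return clashes

if __name__ == "__main__":
    print(find_overlaps(allocations))  # [] -> delegations are disjoint
```

Keeping this invariant globally is exactly the coordination task that institutions such as IANA and ICANN perform.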
institutions that might take on the jobs ICANN handles, and perhaps other, more
global, Internet regulation tasks as well. One self-nominated candidate is the ITU, which is
currently sponsoring the World Summit on the Information Society. A third
approach uses the traditional apparatus of bilateral and multilateral treaties to
address particular issues arising from the Internet that are thought to require
trans-national regulation.
One fact remains: the Internet's evolution is dynamic and complex. The
availability and design of a suitable regulatory response must reflect this
dynamism, and also the responsiveness of regulators and market players to each
other. Therefore, national legislation should be future-proof and avoid being
overly prescriptive, to avoid a premature response to the emerging environment.
The European legal basis for regulatory intervention, in Directives 2009/136/EC
and 2009/140/EC, is an enabling framework to prevent competition abuses and
prevent discrimination, under which national regulators need the skills and
evidence base to investigate unjustified discrimination. Regulators expecting a
smoking gun to present itself should be advised against such a reactive
approach. A more proactive approach to monitoring and researching non-neutral
behaviours will make network operators much more cognisant of their duties and
obligations. The pace of change in the relation between architecture and content
on the Internet requires continuous improvement in regulators' research and
technological training. This is in part a reflection of the complexity of the issue
set, including security and Internet peering issues, as well as more traditional
telecoms and content issues.
Regulators can monitor both commercial transactions and traffic shaping
by ISPs to detect potentially abusive discrimination. No matter what theoretical
powers may exist, their usage in practice and the issue of forensic gathering of
evidence may ultimately be more important. An ex ante requirement to
demonstrate internal network metrics to content provider customers and
consumers may be a practical solution. Should packet discrimination be
introduced, the types of harmful discrimination that can result may be
undetectable by consumers and regulators. Blocking is relatively easy to spot,
but throttling or choking bandwidth may be more difficult. A solution may be to
require network operators to provide their Service Level Agreements both to
content providers and, more transparently, to the end-user via a regulatory or co-regulatory reporting requirement. Strong arguments remain for ensuring that
ISPs inform consumers when they reach a monthly download limit, ensuring no
return to the rationed per-minute or per-byte Internet use that Europe
experienced in the 1990s with dial-up. As the law and practice stand today, it
seems that most customers do not know when they have been targeted as over-strenuous users of the Internet, only that their connection has slowed. Once
targeted, customers generally cannot prove their innocence; they have to
accept the Terms of Use of the ISP without appeal (except theoretically via
courts for breach of contract, or regulator for infringement of their consumer
rights). The number of alternative ISPs is shrinking: not only is the ISP business
expensive, leading to concentration in the industry, but the costs of renting
backhaul from dominant operators are sufficiently high that no ISP would want to
offer service to a suspected bandwidth hog. We may expect to see more protest
behaviour by netizens who do not agree with these policies, especially where
ISPs are seen to have failed to inform end-users fully about the implications of
policy changes. Regulators and politicians are challenged publicly by such
problems, particularly given the ubiquity of email, Facebook, Twitter and social
media protests against censorship; two Pirate Party MEPs were elected
to the European Parliament for the 2009–14 term (the Pirate Party is originally a Swedish
political group dedicated to open and interchangeable digital information, notably
a reduction in copyright enforcement). Regulators will need to ensure that the
network operators report more fully and publicly the levels of connectivity that
they provide between themselves as well as to end-users. Internet architecture
experts have explained that discrimination is most likely to occur at this level as it
is close to undetectable by those not in the two networks concerned in the
handover of content. A reporting requirement will need to be imposed if voluntary
agreement is not possible. As this information is routinely collected by the
network operators for internal purposes, it should not impose a substantial
burden. Regulators should be wary of imposing costs on ISPs that are
disproportionate. Co-regulation and self-regulation with very high entry barriers can curb
market entry. Onerous regulation (including self-regulation) leads towards closed
and concentrated structures, for three reasons [56]:
1. larger companies are better able to bear compliance costs;
2. larger companies have the lobbying power to seek to influence
regulation;
3. dominant and entrenched market actors in regulated bottlenecks play
games with regulators in order to increase the sunk costs of market entry for
other actors, and can pass through costs to consumers and innovators in noncompetitive markets.
Therefore any solution needs to take note of the potential for larger
companies to game a co-regulatory scheme and create additional compliance
costs for smaller companies (whether content or network operators, and the
combination of sectors makes this a particularly complex regulatory game). The
need for greater research towards understanding the nature of congestion
problems on the Internet and their effect on content and innovation is clear.
Finally, to summarise this section: there are incentives for network
providers to police the traffic by type, if not by content. It enables the network
providers, many of whom also operate their own proprietary applications, to
charge a different price to non-affiliated content owners than to affiliated owners.
This differential pricing could make the profitable operation of non-affiliated
providers more difficult. On that basis, a walled garden of ISP services and
those of its preferred content partners might become the more successful
business model. That model makes regulation much easier to enforce, but also
prevents some of the interoperability and open access for users that is held to
lead to much Web 2.0 innovation for businesses. The answer must be
contingent on political, market and technical developments. The issue of
uncontrolled Internet flows versus engineered solutions is central to the question
of a free versus regulated Internet.
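As observed earlier in this section, blocking is easy to spot but throttling is harder, because it appears only as a relative throughput gap between traffic classes on the same path. The sketch below shows the comparison at the heart of such detection; the sample figures are hypothetical placeholders, and a real measurement tool (in the spirit of systems such as Glasnost) would generate the test flows itself rather than use canned numbers.

```python
# Sketch: flagging suspected throttling from paired throughput samples.
# The idea: run otherwise identical transfers dressed up as two traffic
# classes, then compare the mean throughput each class achieves.
from statistics import mean

def throughput_ratio(baseline_mbps, suspect_mbps):
    """Ratio of suspect-class to baseline-class mean throughput."""
    return mean(suspect_mbps) / mean(baseline_mbps)

def looks_throttled(baseline_mbps, suspect_mbps, threshold=0.5):
    """Flag the suspect class if it achieves less than `threshold`
    of the baseline throughput on otherwise identical transfers."""
    return throughput_ratio(baseline_mbps, suspect_mbps) < threshold

if __name__ == "__main__":
    # Hypothetical samples: HTTPS-like transfers vs BitTorrent-like ones.
    https_mbps = [48.2, 51.0, 49.5]
    p2p_mbps = [9.8, 11.2, 10.5]
    print(looks_throttled(https_mbps, p2p_mbps))
```

The design point is that neither measurement is meaningful alone; only the paired comparison over the same path at the same time isolates class-based shaping from ordinary congestion, which is why consumers cannot easily detect throttling themselves.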
Abbreviations
3G Third Generation
4G Fourth Generation
5G Fifth Generation
AAA Authentication, Authorization, Accounting
AP Access Point
APDU Application Protocol Data Unit
API Application Programming Interface
ARM Advanced RISC Machine
ATM Asynchronous Transfer Mode
BTS Base Transceiver Station
CaaS Communications as a Service
CC Cloud Computing
CDN Content Delivery Network
CPU Central Processing Unit
CRM Customer Relationship Management
CSC Cloud Service Customer
CSN Cloud Service Partner
CSP Cloud Service Provider
CSU Cloud Service User
DaaS Desktop as a Service
DFS Distributed File System
DHT Distributed Hash Table
DNS Domain Name System
EC2 Elastic Compute Cloud
ET Emergency Telecommunications
ETS Emergency Telecommunications Service
FI Functional Interface
GPS Global Positioning System
HA Home Agent
I/O Input/Output
IA Integrated Authenticated
IaaS Infrastructure as a Service
IAM Identity and Access Management
IANA Internet Assigned Numbers Authority
ICANN Internet Corporation for Assigned Names and Numbers
ICT Information and Communication Technology
ID Identifier
IMERA French acronym for Mobile Interaction in Augmented Reality Environment
IP Internet Protocol
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
IRNA Intelligent Radio Network Access
iSCSI Internet Small Computer System Interface
ISP Internet Service Provider
IT Information Technology
JME Java ME, a Java platform
LAN Local Area Network
LBS Location-Based Service
LTE Long Term Evolution
LTS Location Trusted Server
MAUI Memory Arithmetic Unit and Interface
MC Mobile Computing
MCC Mobile Cloud Computing
MDP Markov Decision Process
MPLS Multi-Protocol Label Switching
MSC Mobile Service Cloud
NaaS Network as a Service
NAS Network Attached Storage
NFS Network File System
NTP Network Time Protocol
OS Operating System
P2P Peer-to-Peer
PaaS Platform as a Service
PHP Hypertext Preprocessor
PII Personally Identifiable Information
PKI Public Key Infrastructure
QoE Quality of Experience
QoS Quality of Service
REST Representational State Transfer
RFS Random File System
S3 Simple Storage Service
SaaS Software as a Service
SAN Storage Area Network
SES Software Enabled Services
SIM Subscriber Identity Module
SLA Service Level Agreement
SMI Service Management Interface
TCC Trusted Crypto Coprocessor
URI Uniform Resource Identifier
vCPU virtual CPU
VI Virtual Infrastructure
VLAN Virtual Local Area Network
VM Virtual Machine
VoIP Voice over IP
VPN Virtual Private Network
WAN Wide Area Network
WLAN Wireless Local Area Network
WiFi Wireless Fidelity
References
[1] Toni Janevski, "NGN Architectures, Protocols and Services", John Wiley & Sons, UK,
April 2014.
[2] IEEE Communications Magazine, pp. 24–62, July 2011.
[3] Internet architecture (2000): http://www.livinginternet.com/i/iw_arch.htm
[4] RFC 1958; B. Carpenter, et al.; Architectural Principles of the Internet; Jun 1996,
link: http://www.rfc-editor.org/rfc/rfc1958.txt
[5] Barath Raghavan, Teemu Koponen, Ali Ghodsi, Martín Casado, Sylvia Ratnasamy,
and Scott Shenker, "Software-Defined Internet Architecture: Decoupling Architecture
from Infrastructure", HotNets-XI, Seattle, WA, USA, October 29–30, 2012.
[6] RFC 3426; S. Floyd; General Architectural and Policy Considerations; Nov 2002, link:
http://www.rfc-editor.org/rfc/rfc3426.txt
[7] RFC 3439; R. Bush, D. Meyer; Some Internet Architectural Guidelines and
Philosophy; Dec 2002, link: http://www.rfc-editor.org/rfc/rfc3439.txt
[8] RFC 3819; P. Karn, Ed.; Advice for Internet Subnetwork Designers; July 2004, link:
http://www.rfc-editor.org/rfc/rfc3819.txt
[9] ITU-T Rec. Y.1001 (11/2000): IP framework – A framework for convergence of
telecommunications network and IP network technologies.
[10] ITU-T Rec. Y.3001 (05/11): Future networks: Objectives and design goals.
[11] TCP/IP tutorial and technical overview, chapter 5 : Transport layer protocols, link:
http://www.cs.virginia.edu/~cs458/material/Redbook-ibm-tcpip-Chp5.pdf, last accessed:
05.05.2015
[12] Microsoft Developer Network: Internet Protocol version 4 Address Classes,
http://msdn.microsoft.com/en-us/library/aa918342.aspx, last accessed: 08.05.2015
[13] Cisco Systems, Inc., IP Addressing: IPv4 Addressing Configuration Guide, Cisco
IOS XE Release 3S,
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_ipv4/configuration/xe-3s/ipv4-xe-3s-book.pdf, last accessed: 09.05.2015.
[14] ITU-T Rec. Y.2051 (02/2008): General overview of IPv6-based NGN.
[15] ITU-T Rec. Y.2053 (02/2008): Functional requirements for IPv6 migration in NGN.
[16] Requirements for Internet Hosts -- Communication Layers:
http://tools.ietf.org/html/rfc1122
[17] Internet Protocol, Version 6 (IPv6) Specification: https://tools.ietf.org/html/rfc2460
[18] A TCP/IP Tutorial: https://tools.ietf.org/html/rfc1180
[19] User Datagram Protocol (UDP): https://www.ietf.org/rfc/rfc768.txt
[20] The Lightweight User Datagram Protocol (UDP-Lite):
https://tools.ietf.org/html/rfc3828
http://www.cse.wustl.edu/~jain/cis678-97/ftp/f32_dhc.pdf
[43] DHCP, Vicomsoft Learning Centre; http://www.vicomsoft.com/learningcenter/dhcp/
[44] World-Wide Web, Tim Berners-Lee, Robert Cailliau, C.E.R.N.
http://www.freehep.org/chep92www.pdf
[45] T.J. Berners-Lee, R. Cailliau, J-F Groff, B. Pollermann, CERN, "World-Wide Web:
The Information Universe", published in Electronic Networking: Research, Applications
and Policy, Vol. 2 No 1, Spring 1992, Meckler Publishing, Westport, CT, USA.
[46] T.J. Berners-Lee, R. Cailliau, J-F Groff, B. Pollermann, CERN, "World-Wide Web:
An Information Infrastructure for High-Energy Physics", Presented at "Artificial
Intelligence and Software Engineering for High Energy Physics" in La Londe, France,
January 1992. Proceedings published by World Scientific, Singapore, ed. D Perret-Gallix
[47] Distributed Document-Based Systems, Chap. 11
http://www.cs.vu.nl/~ast/books/ds1/11.pdf
[48] Salman A. Baset and Henning G. Schulzrinne, "An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol", link:
http://www1.cs.columbia.edu/~salman/publications/skype1_4.pdf
[49] Skype. http://www.skype.com
[50] Kazaa. http://www.kazaa.com
[51] SkypeOut. http://www.skype.com/products/skypeout/
[52] SkypeIn. http://www.skype.com/products/skypein/
[53] Alessandro Finamore et al., "YouTube Everywhere: Impact of Device and
Infrastructure Synergies on User Experience", IMC '11, November 2–4, 2011, Berlin,
Germany. Link: http://conferences.sigcomm.org/imc/2011/docs/p345.pdf
[54] https://net.educause.edu/ir/library/pdf/ELI7018.pdf
[55] http://www.digizen.org/downloads/social-networking-overview.pdf
[56] Christopher T. Marsden, "Network Neutrality and Internet Service Provider Liability
Regulation: Are the Wise Monkeys of Cyberspace Becoming Stupid?", Global Policy,
Volume 2, Issue 1, January 2011.
[57] Christopher S. Yoo, "Network Neutrality or Internet Innovation?"
http://object.cato.org/sites/cato.org/files/serials/files/regulation/2010/2/regv33n1-6.pdf
[58] Kathleen Ann Ruane, Legislative Attorney, "Net Neutrality: The FCC's Authority to
Regulate Broadband Internet Traffic Management",
https://www.fas.org/sgp/crs/misc/R40234.pdf, March 26, 2014.
[59] Antonio Segura-Serrano, "Internet Regulation and the Role of International Law",
Max Planck Yearbook of United Nations Law, Volume 10, 2006, pp. 191–272. Link:
http://www.mpil.de/files/pdf3/06_antoniov1.pdf
[60] A. Michael Froomkin, International and National Regulation of the Internet, link:
http://law.tm/docs/International-regulation.pdf
[61] http://www.cyber-rights.org/documents/clsr17_5_01.pdf
[62] The EDRi papers, Net Neutrality, link:
https://edri.org/files/paper08_netneutrality.pdf
[63] Cheng, H. Kenneth, Bandyopadhyay, Subhajyoti and Guo, Hong, "The Debate on
Net Neutrality: A Policy Perspective", 25 Jun 2008. Information Systems Research,
Forthcoming. Available at: http://net.educause.edu/ir/library/pdf/CSD4854.pdf
[64] Hahn, Robert W. and Wallsten, Scott, "The Economics of Net Neutrality", The
Economists' Voice, Vol. 3, Iss. 6, Article 8, The Berkeley Electronic Press, 2006.
Accessed 20 Nov. 2011. http://www.bepress.com/ev/vol3/iss6/art8/