
ITU Centres of Excellence for Europe

Next Generation Broadband Internet Access


Module 1:
Internet technology fundamentals

Table of contents
1.1. Internet architectures (client-server, peer-to-peer).........................................2
1.2. Internet protocols (IPv4, IPv6, TCP, UDP)...................................................12
1.2.1. IPv4...........................................................................................................12
1.2.2. IPv6...........................................................................................................19
1.2.3. TCP...........................................................................................................25
1.2.4. UDP ..........................................................................................................30
1.3. Internet routing and network interconnection ...............................................33
1.4. Fundamental Internet technologies (DNS, DHCP/DHCPv6).......................40
1.4.1. DNS ..........................................................................................................40
1.4.2. DHCP/DHCPv6.........................................................................................44
1.5. World Wide Web (WWW).............................................................................50
1.6. Important Internet services (E-mail, FTP, BitTorrent, Skype, Youtube, social
networking) .........................................................................................................57
1.7. Internet regulation and network neutrality ....................................................66
Abbreviations ......................................................................................................73
References .........................................................................................................75

1.1. Internet architectures (client-server, peer-to-peer)

The Internet architecture is by definition a meta-network: a constantly evolving
and changing collection of a huge number of individual networks
intercommunicating with a common protocol, the IP (Internet Protocol).
Moreover, the Internet's architecture is described in its name, a short form of the
compound word inter-networking. So the entire architecture is based on the very
specification of the standard TCP/IP protocol stack (see Figure 1.1), designed to
connect any two networks (or any two individual hosts), which may be very
different in internal hardware, software, and technical design. Once two networks
are interconnected, communication with TCP/IP is enabled end-to-end, so that
any node on the Internet has the near magical ability to communicate with any
other, no matter where they are. This openness of design has enabled the
Internet architecture to grow to a global, planetary scale: the Internet is now a
planet-wide communication medium.
While some requirements for networks do not change, a number of
requirements are evolving and changing and new requirements arise, causing
networks and their architecture to evolve. The basic architecture of large-scale
public networks, such as telecommunication networks, is difficult to change due
to the enormous amount of resources needed to build, operate, and maintain
them. Their architecture is therefore carefully designed to be flexible enough to
satisfy continually changing requirements. For instance, Internet Protocol (IP)
absorbs and hides the different protocols and implementations of underlying
layers and, with its simple addressing and other features, it has succeeded in
adapting to the enormous changes in scalability, as well as factors such as
quality of service (QoS) and security.

Figure 1.1. Internet TCP/IP Protocol Stack.

But over time the formerly simple and clear Internet architecture became a
patchwork of new multimedia application demands, balconies, detours,
wormholes, workarounds and bypasses. Moreover, there are many limitations in
the current Internet, such as processing and handling limitations, storage
limitations, IPv4 address limitations, transmission limitations, and control and
operation limitations. The Internet and its architecture have grown in an
evolutionary fashion from modest beginnings, rather than from a Grand Plan.
While this process of evolution is one of the main reasons for the technology's
success, it nevertheless seems useful to record an overview of the current
principles of the Internet architecture.
The fact is that in the near future, the high volume of content together with
new emerging and mission critical applications is expected to stress the Internet
to such a degree that it will possibly not be able to respond adequately to its new
role. This challenge has motivated many groups and research initiatives
worldwide to search for structural modifications to the Internet architecture in
order to be able to face the new requirements.
But first of all, let us see what the Internet IP architecture consists of. The
architecture for IP networks may consist of three parts: the Application (or
Service) Model, the System Model and the Technology Model. The relationships
among the three parts of the IP Networks Architecture are shown in Figure 1.2.

Figure 1.2. Internet IP model relationships.

The application architecture for IP networks should reflect the relationship
between customers and the IP networks which provide services for those
customers. It defines the application's role that an IP network should support,
and describes the attributes of application services an IP network can provide for
its users, such as media representation for various application services, Quality
of Service (QoS) and requirements of traffic types. An application architecture
model for IP Networks is shown in Figure 1.3.
System model for IP networks architecture should reflect the capabilities
and construction of an IP network. In this case, system function components,
interconnecting entities and relationships among them for supporting various
application requirements by the IP network are described, such as nodes, links,
terminals and their physical connection, location and label.

Figure 1.3. Application/service model.

Performance parameters for the system and its components should also
be defined in this model. The system model for IP networks architecture can be
described from two planes (or directions), with functions divided into an entities
plane (horizontal direction) and a logical plane (vertical direction).
Functions on entities plane for System model of IP networks architecture
can be divided into three sections: core network, access network and customers
network. Each of them can be further divided in detail, for example, functions of a
core network can be divided into two layers: IP layer function and
telecommunication layer function. Further information regarding more detailed
functionality distribution can be found in ITU-T Y.1231 - IP Access Network
Architecture. The architecture details for the (Telecommunications) Access
Network Transport Function can be found in ITU-T G.902 - Framework
Recommendation on functional Access.
The technology model for IP network architecture should consist of a
series of technical standards or recommendations, describing configuration,
interrelation and interaction of various components in an IP network as shown
abstractly in Figure 1.4. The technology model comprises a diversified set of
referenced standards or recommendations, for services, interfaces, equipment
and interrelationships.
In practice, the Internet technical architecture looks a bit like a multidimensional river system, with small tributaries feeding medium-sized streams
feeding large rivers. For example, an individual's access to the Internet is often
from home over a modem to a local Internet service provider who connects to a
regional network connected to a national network. At the office, a desktop
computer might be connected to a local area network with a company connection
to a corporate Intranet connected to several national Internet service providers.
In general, small local Internet service providers connect to medium-sized
regional networks which connect to large national networks, which then connect
to very large bandwidth networks on the Internet backbone. Most Internet service
providers have several redundant network cross-connections to other providers
in order to ensure continuous availability.
Furthermore, it is generally felt that in an ideal situation there should be
one, and only one, protocol at the Internet network level (see the Figure 1.5).

Figure 1.4. Technology and Standards Model.

Figure 1.5. Illustration of the Hourglass Protocol Stack, where on the network layer there
is only one protocol - IP.

This allows for uniform and relatively seamless operations in a
competitive, multi-vendor, multi-provider public network. There can of course be
multiple protocols to satisfy different requirements at other levels, and there are
many successful examples of large private networks with multiple network layer
protocols in use. In practice, there are at least two reasons why more than one
network layer protocol might be in use on the public Internet. Firstly, there can be
a need for gradual transition from one version of IP to another. Secondly,
fundamentally new requirements might lead to a fundamentally new protocol.
The Internet level protocol must be independent of the hardware medium
and hardware addressing. This approach allows the Internet to exploit any new
digital transmission technology of any kind, and to decouple its addressing
mechanisms from the hardware. It allows the Internet to be the easy way to interconnect fundamentally different transmission media, and to offer a single
platform for a wide variety of Information Infrastructure applications and services.
1.1.1 Client-server networking
A commonly used networking model in the Internet is client-server. This means
that some of the machines (i.e., hosts) connected to the network are ready to
accept a request from another machine (i.e., host) for a particular service (Figure
1.6). The client-server model is asymmetric, as follows:
 The server provides services through well-defined interfaces (it listens for
requests from clients with open sockets on predefined or well-known
ports).
 The client requests a certain service through the given interface on the
server.
 The server responds to the client's requests.

Figure 1.6. Client-server model

Generally, the communication between the client and server can use TCP,
UDP or another protocol, but both sides need to use the same type of protocol
and appropriate socket interfaces (a stream socket for TCP, and a datagram
socket for UDP).
In the case of connectionless communication (based on UDP/IP),
explicit identification of which side is the server is not required. When
sending a datagram via UDP/IP, the sending application needs to specify its own
IP address and port number on the local machine (i.e., to open a datagram
socket) through which the datagram will be sent. When a machine expects
incoming datagrams (via UDP/IP), the receiving application must declare the IP
address and port on the local machine (i.e., to open a datagram socket) through
which it expects to receive datagrams from other machines (i.e., hosts), as
sketched below.
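The following is a minimal sketch in Python (using the standard socket module, with an assumed example port 9999 and loopback addresses) of this connectionless exchange: the receiver opens and binds a datagram socket, and the sender simply transmits a datagram to it without any prior connection setup.

import socket

# Receiver side: declare the local IP address and port on which datagrams
# are expected, by binding a datagram (UDP) socket.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("0.0.0.0", 9999))

# Sender side: open a datagram socket and send, with no connection setup.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))

# The receiver gets the payload together with the sender's (IP address, port).
data, addr = receiver.recvfrom(1024)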
In the case of connection-oriented communication (TCP), the approach is
different (see the sketch after this list):
 The client must connect to the server before receiving or sending data
from/to it.
 The server listens on a specific port and IP address (on a given interface),
and must accept the communication with the client before sending or
receiving data.
 The server can accept the client when it receives a connection request
from the client.
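A minimal sketch in Python of this connection-oriented case follows (the port 8080 and the loopback address are assumed example values): the server listens with a stream socket, the client connects, and only after the server accepts the connection can data flow in both directions.

import socket

# Server side: listen for connection requests on a well-known port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # stream socket for TCP
server.bind(("127.0.0.1", 8080))
server.listen(1)

# Client side: the client must connect before sending or receiving data.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))

# Server side: accept the client's connection request, then exchange data.
conn, addr = server.accept()
client.sendall(b"request")
data = conn.recv(1024)          # server reads the request ...
conn.sendall(b"response")       # ... and responds to it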
Well-known Internet services (i.e., applications) which use client-server
communication are electronic mail (E-mail), the File Transfer Protocol (FTP), the
Hypertext Transfer Protocol (HTTP), etc., which are covered in chapter 5
(World Wide Web (WWW)) and chapter 6 (Important Internet services (E-mail,
FTP, BitTorrent, Skype, Youtube, social networking)) of this module. Also,
fundamental Internet technologies, such as DHCP and DNS (included in chapter
4 of this module), are based on the client-server model.

1.1.2 Peer-to-peer networking


Peer-to-peer (P2P for short) networks are self-organizing
networks, generally without a centralized server.

Figure 1.7. Comparison of peer-to-peer and client-server network architectures

Due to such an approach, P2P networks have high scalability and robustness
because they do not rely on a single network host such as the server in client-server
network architectures. As shown in Figure 1.7, in P2P networks each
participant who is connected to the network is a node with equal access to
network resources and to all other users. The owner of each node (e.g.,
computer) on a P2P network is supposed to set up certain resources (e.g.,
processing power, access data rate to the Internet, memory on the hard disk, etc.)
which are shared with other nodes in the P2P network. In such a way, a P2P network
is a distributed application architecture that partitions certain tasks among
several peers. Hence, P2P networking is based on establishing a temporary
logical architecture of peers (nodes in the P2P network), as an overlay network
in the Internet, where peers act as clients or servers to other nodes in the P2P
network, allowing shared access to different resources such as files, streams
(e.g., video streams), devices (e.g., sensors), etc.
1.1.3 General design issues for Internet architecture
Furthermore, let us summarise the general design issues for Internet
architecture:
 Heterogeneity is inevitable and must be supported by design: Multiple
types of hardware must be allowed for, e.g. transmission speeds differing
by at least 7 orders of magnitude, various computer word lengths, and
hosts ranging from memory-starved microprocessors up to massively
parallel supercomputers.
 Multiple types of application protocol must be allowed for, ranging from the
simplest such as remote login up to the most complex such as distributed
databases.
 If there are several ways of doing the same thing, choose one. If a
previous design, in the Internet context or elsewhere, has successfully
solved the same problem, choose the same solution unless there is a
good technical reason not to.
 Duplication of the same protocol functionality should be avoided as far as
possible, without of course using this argument to reject improvements.
 All designs must scale readily to very many nodes per site and to many
millions of sites.
 Performance and cost must be considered as well as functionality.
 Keep it simple. When in doubt during design, choose the simplest solution.
 Modularity is good. If you can keep things separate, do so.
 In many cases it is better to adopt an almost complete solution now, rather
than to wait until a perfect solution can be found.
 Avoid options and parameters whenever possible.
 Any options and parameters should be configured or negotiated
dynamically rather than manually.
 Be strict when sending and tolerant when receiving. Implementations must
follow specifications precisely when sending to the network, and tolerate
faulty input from the network. When in doubt, discard faulty input silently,
without returning an error message unless this is required by the
specification.
 Be parsimonious with unsolicited packets, especially multicasts and
broadcasts.
 Circular dependencies must be avoided. For example, routing must not
depend on look-ups in the Domain Name System (DNS), since the
updating of DNS servers depends on successful routing.
 Objects should be self-describing (include type and size), within reasonable
limits. Only type codes and other magic numbers assigned by the Internet
Assigned Numbers Authority (IANA) may be used.
 All specifications should use the same terminology and notation, and the
same bit- and byte-order convention.
 And perhaps most important: Nothing gets standardised until there are
multiple instances of running code.

The network topology provides information and resources on the real-time
construction of the Internet network, including graphs and statistics. The following
references provide additional and deeper information about the Internet
architecture [2-10].
At the end of this first section, let us give the advantages of the Internet in a
few words:
 Communication tool: Data, voice and video can be instantly
exchanged, thanks to high-speed networks.
 Teaching aid: Many complex concepts are easily explained using
graphics. A wealth of knowledge.
 Business tool: E-commerce has totally changed the way we were
doing business.
 Information tool: An ocean full of information on any particular
topic, thanks to the powerful search engines.
 E-governance: Many government organizations have already started
using it for collection of revenue, tax, etc.; earlier, even paying an
electricity bill could sometimes take a whole day!
 Travel: The Internet is useful for advance reservation, tour planning, etc.
 Medical & health: A specialist doctor's services can be availed over the
net.
Overall, the possibilities with the Internet are endless, especially with the
coming of Future Internet and services. However, it is not known if current
networks can continue to fulfil changing requirements in the future. Nor is it
known whether the growing market of new application areas will have the
potential to finance the enormous investment required to change the networks, if
the new architecture is to be sufficiently attentive to backward compatibility and
migration costs. Research communities have been working on various
architectures and supporting technologies, such as network virtualization
[b-Anderson], [b-ITU-T FG-FN NWvirt], energy saving of networks [b-ITU-T FG-FN
Energy], and content-centric networks [b-Jacobson]. It is, therefore, reasonable
to expect that some requirements can be realized by the new network
architectures and supporting technologies described by recent research
activities, and that these could be the foundation of networks of the future, whose
trial services and phased deployment are estimated to fall approximately between
2015 and 2020. In that Recommendation [10], networks based on such a new
architecture are named "Future Networks" (FNs). The Recommendation [10]
describes in more detail the objectives that may differentiate FNs from existing
networks, design goals that FNs should satisfy, target dates and migration
issues, and technologies for achieving the design goals. Since the design goals
are high-level capabilities and characteristics that are recommended to be
supported by FNs, it should be noted that some of these design goals may be
extremely difficult to support in a particular FN, and that each design goal will not
be implemented in all FNs. Whether the support of each of these design goals in
a specific FN will be required, recommended, or optional, is a topic for further
study.


Figure 1.8. Four objectives and twelve design goals of future networks.

Figure 1.8 above shows the relationships between the four objectives
described in clause 7 of [10] and the twelve design goals described in this
clause. It should be noted that some design goals, such as network
management, mobility, identification, and reliability and security, may relate to
multiple objectives. Figure 1.8 shows only the relationships between a design
goal and its most relevant objective.


1.2. Internet protocols (IPv4, IPv6, TCP, UDP)


The protocol that defines the unreliable, connectionless delivery
mechanism at the network layer is called the Internet Protocol (IP). IP provides
three important definitions. First, the IP protocol defines the basic unit of data
transfer used throughout an IP internet. Thus, it specifies the exact format of all
data as it passes across the internet. Second, IP software performs the routing
function, choosing a path over which data will be sent (which will be covered in the
next chapter). Third, in addition to the precise, formal specification of data
formats and routing, IP includes a set of rules that embody the idea of unreliable
packet delivery. The rules characterize how hosts and routers should process
packets, how and when error messages should be generated, and the conditions
under which packets can be discarded. IP is such a fundamental part of the
design that a TCP/IP internet is sometimes called an IP-based technology.
Moreover, there are two versions of IP, IP version 4 (IPv4) and IP version 6 (IPv6),
overviewed in the following. This chapter also provides an overview of the
most important and common protocols of the transport layer, the User Datagram
Protocol (UDP) and the Transmission Control Protocol (TCP).
In telecommunication networking, a transport layer provides end-to-end or
host-to-host communication services for applications within a layered architecture
of network components and protocols. The transport layer provides services such
as connection-oriented data stream support, reliability, flow control, and
multiplexing. By building on the functionality provided by the Internet Protocol
(IP), the transport protocols deliver data to applications executing in the IP host.
1.2.1. IPv4
The analogy between a physical network and a TCP/IP internet is strong.
On a physical network, the unit of transfer is a frame that contains a header and
data, where the header gives information such as the (physical) source and
destination addresses. The internet calls its basic transfer unit an Internet
datagram, sometimes referred to as an IP datagram or merely a datagram. Like a
typical physical network frame, a datagram is divided into header and data areas.
Also like a frame, the datagram header contains the source and destination
addresses and a type field that identifies the contents of the datagram. The
difference, of course, is that the datagram header contains IP addresses
whereas the frame header contains physical addresses. Figure 1.9 shows the
general form of a datagram. IP specifies the header format including the source
and destination IP addresses. IP does not specify the format of the data area; it
can be used to transport arbitrary data.

Figure 1.9. General form of an IP datagram, the TCP/IP analogy to a network frame.


Now that we have described the general layout of an IP datagram, we can
look in more detail at the contents of the IP version 4 (IPv4) headers and
datagrams. Figure 1.10 shows the arrangement of fields in an IPv4
datagram. Because datagram processing occurs in software, the contents and
format are not constrained by any hardware. For example, the first 4-bit field in a
datagram (VERS) contains the version of the IP protocol that was used to create
the datagram. It is used to verify that the sender, receiver, and any routers in
between them agree on the format of the datagram. All IP software is required to
check the version field before processing a datagram to ensure it matches the
format the software expects.

Figure 1.10. Format of an IPv4 datagram, the basic unit of transfer in a TCP/IP internet.

If standards change, machines will reject datagrams with protocol versions
that differ from theirs, preventing them from misinterpreting datagram contents
according to an outdated format. The IP protocol version shown in Figure 1.10 is
4. Consequently, the term IPv4 is used to denote the current protocol throughout
this section; the IPv6 header and IPv6 fundamentals are covered in more detail in
the next section.
The header length field (HLEN), also 4 bits, gives the datagram header
length measured in 32-bit words. As we will see, all fields in the header have
fixed length except for the IP OPTIONS and corresponding PADDING fields. The
most common header, which contains no options and no padding, measures 20
octets and has a header length field equal to 5.
The TOTAL LENGTH field gives the length of the IP datagram measured
in octets, including octets in the header and data. The size of the data area can
be computed by subtracting the length of the header (HLEN) from the TOTAL
LENGTH. Because the TOTAL LENGTH field is 16 bits long, the maximum
possible size of an IPv4 datagram is 2^16 - 1, or 65,535 octets. In most applications
this is not a severe limitation. It may become more important in the future if
higher speed networks can carry data packets larger than 65,535 octets.
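As a short illustration, the following sketch in Python (using the standard struct module; the 20-byte header shown is a fabricated example, not a captured packet) extracts the VERS, HLEN and TOTAL LENGTH fields from the first bytes of an IPv4 header and derives the size of the data area.

import struct

# A fabricated 20-octet IPv4 header (hexadecimal), assumed for illustration only.
header = bytes.fromhex("45000054000040004001b955c0a80001c0a80002")

vers_hlen, tos, total_length = struct.unpack("!BBH", header[:4])
version = vers_hlen >> 4                   # VERS: upper 4 bits, equals 4 for IPv4
hlen_words = vers_hlen & 0x0F              # HLEN: header length in 32-bit words
header_octets = hlen_words * 4             # 5 words -> the common 20-octet header
data_octets = total_length - header_octets # size of the data area

print(version, header_octets, total_length, data_octets)   # 4 20 84 64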


Informally called Type Of Service (TOS), the 8-bit SERVICE TYPE field
specifies how the datagram should be handled. The field was originally divided
into five subfields as shown in Figure 1.11.

Figure 1.11. The original five subfields that comprise the 8-bit SERVICE TYPE field.

Three PRECEDENCE bits specify datagram precedence, with values
ranging from 0 (normal precedence) through 7 (network control), allowing
senders to indicate the importance of each datagram. Although some routers
ignore type of service, it is an important concept because it provides a
mechanism that can allow control information to have precedence over data. For
example, many routers use a precedence value of 6 or 7 for routing traffic to
make it possible for the routers to exchange routing information even when
networks are congested.
Bits D, T, and R specify the type of transport desired for the datagram.
When set, the D bit requests low delay, the T bit requests high throughput, and
the R bit requests high reliability. Of course, it may not be possible for an internet
to guarantee the type of transport requested (i.e., it could be that no path to the
destination has the requested property). Thus, we think of the transport request
as a hint to the routing algorithms, not as a demand. If a router does know more
than one possible route to a given destination, it can use the type of transport
field to select one with characteristics closest to those desired. For example,
suppose a router can select between a low capacity leased line or a high
bandwidth (but high delay) satellite connection. Datagrams carrying keystrokes
from a user to a remote computer could have the D bit set requesting that they
be delivered as quickly as possible, while datagrams carrying a bulk file transfer
could have the T bit set requesting that they travel across the high capacity
satellite path.
In the late 1990s, the IETF redefined the meaning of the 8-bit SERVICE
TYPE field to accommodate a set of differentiated services (DS). Figure 1.12
illustrates the resulting definition.

Figure 1.12. The differentiated services (DS) interpretation of the SERVICE TYPE field
in an IP datagram.

Under the differentiated services interpretation, the first six bits comprise a
codepoint, which is sometimes abbreviated DSCP and the last two bits are left
unused. A codepoint value maps to an underlying service definition, typically
through an array of pointers. Although it is possible to define 64 separate
services, the designers suggest that a given router will only have a few services,
and multiple codepoints will map to each service. Moreover, to maintain
backward compatibility with the original definition, the standard distinguishes
between the first three bits of the codepoint (the bits that were formerly used for
precedence) and the last three bits. When the last three bits contain zero, the
precedence bits define eight broad classes of service that adhere to the same
guidelines as the original definition: datagrams with a higher number in their
precedence field are given preferential treatment over datagrams with a lower
number. That is, the eight ordered classes are defined by codepoint values of the
form XXX000, where X denotes either a zero or a one.
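A tiny sketch in Python of this backward-compatibility rule (the helper name ds_class is just an illustrative choice): a codepoint of the form xxx000 maps back to one of the eight precedence classes, while other codepoints do not.

def ds_class(dscp):
    # Return the precedence class if the codepoint has the form xxx000, else None.
    if dscp & 0b000111 == 0:     # last three bits zero -> class-selector codepoint
        return dscp >> 3         # first three bits carry the original precedence value
    return None

print(ds_class(0b110000))   # 6 -> treated as high-priority (routing traffic) class
print(ds_class(0b101110))   # None -> not of the form xxx000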
The differentiated services design also accommodates another existing
practice - the widespread use of precedence 6 or 7 for routing traffic. The
standard includes a special case to handle these precedence values. A router is
required to implement at least two priority schemes: one for normal traffic and
one for high-priority traffic. When the last three bits of the CODEPOINT field are
zero, the router must map a codepoint with precedence 6 or 7 into the higher
priority class and other codepoint values into the lower priority class. Thus, if a
datagram arrives that was sent using the original TOS scheme, a router using the
differentiated services scheme will honor precedence 6 and 7 as the datagram
sender expects. The 64 codepoint values are divided into three administrative
groups as Table 1.1 illustrates.
Table 1.1. The three administrative pools of codepoint values.

As Table 1.1 indicates, half of the values (i.e., the 32 values in pool 1)
must be assigned interpretations by the IETF. Currently, all values in pools 2 and
3 are available for experimental or local use. However, if the standards bodies
exhaust all values in pool 1, they may also choose to assign values in pool 3.
The division into pools may seem unusual because it relies on the low-order
bits of the value to distinguish pools. Thus, rather than a contiguous set of
values, pool 1 contains every other codepoint value (i.e., the even numbers
between 2 and 64). The division was chosen to keep the eight codepoints
corresponding to values xxx000 in the same pool.
Whether the original ToS interpretation or the revised differentiated
services interpretation is used, it is important to realize that routing software must
choose from among the underlying physical network technologies at hand and
must adhere to local policies. Thus, specifying a level of service in a datagram
does not guarantee that routers along the path will agree to honor the request.
To summarize this part of the Section: we regard the service type
specification as a hint to the routing algorithm that helps it choose among various
paths to a destination based on local policies and its knowledge of the hardware
technologies available on those paths. An internet does not guarantee to provide
any particular type of service.
Furthermore, the three fields in the datagram header, IDENTIFICATION,
FLAGS, and FRAGMENT OFFSET, control fragmentation and reassembly of
datagrams. Field IDENTIFICATION contains a unique integer that identifies the
datagram. Recall that when a router fragments a datagram, it copies most of the
fields in the datagram header into each fragment. Thus, the IDENTIFICATION
field must be copied. Its primary purpose is to allow the destination to know
which arriving fragments belong to which datagrams. As a fragment arrives, the
destination uses the IDENTIFICATION field along with the datagram source
address to identify the datagram.
Computers sending IP datagrams must generate a unique value for the
IDENTIFICATION field for each datagram. One technique used by IP software
keeps a global counter in memory, increments it each time a new datagram is
created, and assigns the result as the datagram's IDENTIFICATION field.
Recall that each fragment has exactly the same format as a complete
datagram. For a fragment, field FRAGMENT OFFSET specifies the offset in the
original datagram of the data being carried in the fragment, measured in units of
8 octets, starting at offset zero. To reassemble the datagram, the destination
must obtain all fragments starting with the fragment that has offset 0 through the
fragment with highest offset. Fragments do not necessarily arrive in order, and
there is no communication between the router that fragmented the datagram and
the destination trying to reassemble it. The low-order two bits of the 3-bit FLAGS
field control fragmentation. Usually, application software using TCP/IP does not
care about fragmentation because both fragmentation and reassembly are
automatic procedures that occur at a low level in the operating system, invisible
to end users. However, to test internet software or debug operational problems, it
may be important to test sizes of datagrams for which fragmentation occurs. The
first control bit aids in such testing by specifying whether the datagram may be
fragmented. It is called the do not fragment bit because setting it to 1 specifies
that the datagram should not be fragmented. An application may choose to
disallow fragmentation when only the entire datagram is useful.
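As a small worked example of this offset arithmetic (a Python sketch with made-up numbers, not taken from the text): a datagram with 4000 data octets sent over links with a 1500-octet MTU is split so that each fragment carries a multiple of 8 data octets, and the FRAGMENT OFFSET field holds the offset divided by 8.

header_len = 20                                  # assumed 20-octet header, no options
mtu = 1500                                       # assumed link MTU in octets
payload = 4000                                   # data octets in the original datagram

per_fragment = ((mtu - header_len) // 8) * 8     # data per fragment, multiple of 8 -> 1480
offsets = list(range(0, payload, per_fragment))  # data offsets in octets: [0, 1480, 2960]
print([off // 8 for off in offsets])             # FRAGMENT OFFSET values: [0, 185, 370]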
Moreover, the field TIME TO LIVE specifies how long, in seconds, the
datagram is allowed to remain in the internet system. The idea is both simple and
important: whenever a computer injects a datagram into the internet, it sets a
maximum time that the datagram should survive. Routers and hosts that process
datagrams must decrement the TIME TO LIVE (TTL) field as time passes and
remove the datagram from the internet when its time expires. Estimating exact
times is difficult because routers do not usually know the transit time for physical
networks. A few rules simplify processing and make it easy to handle datagrams
without synchronized clocks. First, each router along the path from source to
destination is required to decrement the TTL field by 1 when it processes the
datagram header. Furthermore, to handle cases of overloaded routers that
introduce long delays, each router records the local time when the datagram
arrives and decrements the TTL by the number of seconds the datagram
remained inside the router waiting for service.
Whenever a TTL field reaches zero, the router discards the datagram and
sends an error message back to the source. The idea of keeping a timer for
datagrams is interesting because it guarantees that a datagram cannot travel
around an internet forever, even if routing tables become corrupt and routers
route datagrams in a circle. Although once important, the notion of a router
delaying a datagram for many seconds is now outdated - current routers and
networks are designed to forward each datagram within a reasonable time. If the
delay becomes excessive, the router simply discards the datagram. Thus, in
practice, the TTL acts as a "hop limit" rather than an estimate of delay. So, each
router only decrements the value by one.
Field PROTOCOL is analogous to the type field in a network frame; the
value specifies which high-level protocol was used to create the message carried
in the DATA area of the datagram. In essence, the value of PROTOCOL
specifies the format of the DATA area. The mapping between a high-level protocol
and the integer value used in the PROTOCOL field must be administered by a
central authority to guarantee agreement across the entire Internet.
Field HEADER CHECKSUM ensures integrity of header values. The IP
checksum is formed by treating the header as a sequence of 16-bit integers (in
network byte order), adding them together using one's complement arithmetic,
and then taking the one's complement of the result. For purposes of computing
the checksum, field HEADER CHECKSUM is assumed to contain zero.
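A short sketch in Python of this computation follows (the 20-octet header is the same fabricated example used earlier, with the checksum field zeroed before computing, as the text describes): 16-bit words are added with one's complement (end-around carry) arithmetic and the result is complemented.

def ipv4_checksum(header: bytes) -> int:
    total = 0
    for i in range(0, len(header), 2):
        word = (header[i] << 8) | header[i + 1]   # 16-bit word in network byte order
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry (one's complement sum)
    return ~total & 0xFFFF                        # one's complement of the result

# Fabricated example header with the HEADER CHECKSUM field (octets 10-11) set to zero.
hdr = bytes.fromhex("450000540000400040010000c0a80001c0a80002")
print(hex(ipv4_checksum(hdr)))                    # 0xb955 for this example header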
It is important to note that the checksum only applies to values in the IP
header and not to the data. Separating the checksum for headers and data has
advantages and disadvantages. Because the header usually occupies fewer
octets than the data, having a separate checksum reduces processing time at
routers which only need to compute header checksums. The separation also
allows higher level protocols to choose their own checksum scheme for the data.
The chief disadvantage is that higher level protocols are forced to add their own
checksum or risk having corrupted data go undetected.
Fields SOURCE IP ADDRESS and DESTINATION IP ADDRESS contain
the 32-bit IP addresses (IPv4 addresses) of the datagram's sender and intended
recipient. Although the datagram may be routed through many intermediate
routers, the source and destination fields never change; they specify the IP
addresses of the original source and ultimate destination.
Every host on the Internet has its own IP address, which consists of two
parts: network ID (network part of the IP address), and host ID (host part of the
IP address), and has a total length of 32 bits for IP version 4. Network ID
defines the network, and if the network should be part of the Internet it is given by
a global authority, the Internet Corporation for Assigned Names and Numbers
(ICANN), usually through its regional organizations. For each new network that
requires access to the Internet, ICANN assigns network ID. Host ID identifies
uniquely a given host in the network. An IP address uniquely identifies a
specific network interface on a given Internet host (e.g., a computer, mobile
device, etc.) in a given IP network. There are two types of IP addressing:


 Classful addressing;
 Classless addressing (Classless Inter-Domain Routing - CIDR).

Classful addressing
Classful addressing is defined with five different classes of IP addresses,
shown in Figure 1.13. A class-A IP address allows the existence of 126 different
networks with 16 million hosts per network; class B includes 16 382 networks
with 65 534 hosts per network; class C includes 2 million networks with 254 hosts
per network.

Figure 1.13. IPv4 addressing classes.

Class D is for multicast addresses. Besides the unicast IP address, a


given host can have one or more multicast addresses of the class D. Each
datagram that contains a multicast destination address is simultaneously
delivered to all hosts that have been assigned the given multicast address.
Class E addresses are reserved for future use.
IPv4 addresses are 4 bytes in length, so they are canonically
represented in decimal-dot notation, consisting of four decimal numbers (each
number ranging from 0 to 255), separated by dots (e.g., 192.168.1.1). In classful
addressing an organization is granted a block in one of the three classes A, B, or C.
Then, the network ID of the IP address is used by routers for routing a packet to
its destination.
Classless addressing
With the expansion of the Internet and the number of hosts in the Internet,
classful addressing could not solve the problem of a larger IP address space. The
solution was to change the distribution of IP addresses from classes to a more
flexible approach; hence, classless addressing has been proposed by the IETF to
replace the classful network design. The aim was to slow down the exhaustion of
IP addresses as well as the growth of the routing tables. Hence, the method was
called Classless Inter-Domain Routing (CIDR). Again, the IP addresses are
consisted of two parts (similar to classful addressing): a network prefix (in the
same role as network ID), and a host suffix (in the same role as host ID). While in
classful addressing the network ID can be 8, 16, or 24 bits, in classless
addressing it may have any value from 1 to 32. So, if the network prefix has a length of
n bits, then the host suffix has a length of (32 - n) bits.
In IP classless addressing (CIDR), the IP address is represented with a 64-bit
value composed of two parts:
 an IP address of 32 bits; and
 a mask of 32 bits.
There are two commonly used approaches to denote networks addressed
using classless addressing (both illustrated in the sketch after this list), and they are:
1) a.b.c.d/255.255.255.0 (in this case the mask is in a decimal-dot notation,
the same as IP address);
2) a.b.c.d/24 (in this case the mask is a decimal number that defines the
number of leftmost bits in the 32-bit mask which are set to "1"; for
example, mask "/24" corresponds to a decimal-dot notation
255.255.255.0)
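A short sketch in Python (using the standard ipaddress module, with 192.168.1.0/24 as an assumed example network) showing that the two notations above describe the same prefix/suffix split:

import ipaddress

net  = ipaddress.ip_network("192.168.1.0/24")             # prefix-length notation
same = ipaddress.ip_network("192.168.1.0/255.255.255.0")  # dotted-mask notation

print(net == same)        # True: "/24" and "255.255.255.0" are the same mask
print(net.prefixlen)      # 24   -> network prefix of n = 24 bits
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256  -> 2^(32 - n) host-suffix combinations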
Allocation of IP addresses
How can a host obtain an IP address? Each host must have an IP address for
each interface through which it communicates. There are two ways for the
allocation of an IP address:
 a static IP address (hard-coded), set by the system administrator;
 a dynamically assigned IP address, via the Dynamic Host Configuration
Protocol (DHCP), which is more flexible for the network and more convenient
for ordinary Internet users.
How does a network obtain the network part of the IP address? It is
assigned from the IP address space of the ISP (Internet Service Provider) for the
given network.
How does an ISP obtain a block of IP addresses? It is done through ICANN
(Internet Corporation for Assigned Names and Numbers), www.icann.org.
However, not all ISPs contact ICANN directly. Namely, smaller ISPs receive
blocks of IP addresses from the major ISPs. Only the largest ISPs (which are
often international organizations) communicate directly with ICANN (for assigning
IP addresses) through its five Regional Internet Registries (RIRs).
1.2.2. IPv6
The Internet Protocol version 6 (IPv6) is emerging to form the basis of
current and future Broadband Internet services. It is expected that IPv6-based
networks will replace IPv4-based networks in order to overcome the ultimate
limitations of Internet Protocol version 4 (IPv4). Nowadays and in the Future
Internet we expect many transition scenarios to IPv6. An IPv6 transition
mechanism means a mechanism that supports transition between IPv4 and IPv6.
There are many kinds of IPv6 transition mechanisms available, and the user can
choose one that fits into a deployment environment.
Generally, IP-based networks are classified into IPv4 and IPv6 according
to the IP protocol types used. Their behaviors differ according to the features of
each protocol version. Therefore, clarifying the differences between IPv4 and
IPv6 is very useful for identifying the operations of IPv4 and IPv6, which influence
network design and service operations. Key features of IPv4 and IPv6 are
summarized in Table 1.2.
Table 1.2. Key features of IPv4 and IPv6.

There are key features of IPv6 which may significantly impact Broadband
Internet in various ways, such as addressing schemes, QoS, security and
mobility:
- Simplified packet format: IPv6 headers are simplified from IPv4 headers.
Some IPv4 header fields have been dropped or made optional to limit their
bandwidth cost. They also have a constant size to reduce the common
processing cost of packet handling.
- Expanded addressing scheme: IPv6 addressing schemes have a large
addressing space due to an increased size of the IP address fields to support
more levels of addressing hierarchy, a much greater number of addressable
nodes and interfaces, and a simpler autoconfiguration of addresses. The
scalability of multicast routing is improved by adding a "scope" field to multicast
addresses. In addition, a new type of address called an "anycast address" is
defined and is used to send a packet to any of a group of nodes.
- QoS: A flow label and traffic class fields in IPv6 header are added to
enable the labeling of packets belonging to particular traffic "flows" for which the
sender requests special handling, such as non-default quality of service or "real-time"
service. In addition, the IPv6 hop-by-hop header with router-alert option will
indicate the contents of IPv6 packets to support the selective processing of the
intermediate nodes.
- Security support: IPv6 supports built-in IPsec services such as
authentication, data integrity and data confidentiality using authentication header
(AH) and encapsulating security payload (ESP) extension headers. These enable
end-to-end security services via global IP addresses even though intermediate
nodes do not understand the IPsec headers.
- Mobility support: IPv6 capabilities such as neighbour discovery, address
resolution and reachability detection support the mobility services using
destination option, routing and mobility extension headers.
Furthermore, the format of IPv6 header is shown in Figure 1.14.

Figure 1.14. IPv6 header format.

As a novelty compared to IPv4, IPv6 supports Quality of Service (QoS)
per flow on the network layer. A flow is a sequence of related packets sent from a
source to a destination. This means that the flow-based QoS (which is generally
determined by losses, packet delay, and bandwidth given in bits/s) will be easier
to implement in the Internet, which is especially needed for transition of
traditional services, such as telephony or television, towards the all-IP
environment. A packet is classified to a certain flow by the triplet consisting of
Flow Label, Source Address, and Destination Address. Flow labeling with the
Flow Label field enables classification of packets belonging to a specific flow.
Without the flow label the classifier must use transport header and port numbers
which belong to transport layer (for port numbers refer to TCP and UDP sections
in this chapter). Flow state should be established in a subset or all of the IPv6
nodes on the path, which should keep track of all triplets of all flows in use.
However, IPv6 does not guarantee the actual end-to-end QoS as there is no
reservation of network resources (this should be provided by other mechanisms
and is part of next generation networks, as discussed in the following chapters).
Another important novelty of IPv6 is the Next Header field, which identifies
the type of header immediately following the IPv6 header. A single IPv6 packet
may carry zero, one or more next headers, placed between the IPv6 header and
the upper protocol header (e.g., TCP header, UDP header). These so-called
extension headers may carry routing information, authentication, authorization
and accounting information, etc., which provides better network layer
functionalities in all network environments.
In order to improve routing, the IPv6 header has a fixed format that allows
hardware processing for faster routing. Significant changes are made regarding
the fragmentation of data in IPv6, which is done at the source host contrary to
IPv4 where it is performed in routers.
The header checksum is omitted in the IPv6 header in order to reduce the
processing of IP headers in routers (as a reminder, in IPv4 each router, for each
packet, has to calculate a new Header Checksum, due to changes in the TTL
field). The error control in IP header is redundant because it is provided in lower
protocol layers (e.g. MAC layer) and upper protocol layers (e.g., TCP, UDP).
Hence, checksum is omitted in IPv6.
Hop limit in IPv6 header is an 8-bit value that provides the same functions
as the TTL field in IPv4.
On the other hand, IPv6 is a newer IP version that is not significantly
different from the previous, IPv4. Networks still are assigned network address
blocks or prefixes, IPv6 routers route packets hop by hop, providing
connectionless delivery, and network interfaces still must have valid IPv6
addresses. However, IPv6 should be seen as a simpler, more scalable and more
efficient version of the Internet Protocol.
The minimum length of the IPv6 header is 40 bytes. Hence, the IPv6
packet header is at least two times larger than IPv4 header (which has minimum
header length of 20 bytes), thus introducing redundancy per packet due to longer
IPv6 addresses. Namely, in real-time communications (e.g., Voice over IP) we
have to use smaller packets. In such a case, the higher header redundancy leads to
less efficient utilization of the available links and network capacity than IPv4.
IPv6 addressing architecture
IPv6 addressing differs from the IPv4 addressing. Each IPv6 address has
length of 128 bits (i.e., 16 bytes), and is divided into three parts, which is different
than IPv4 addresses which have only two parts (network ID and host ID). Due to
larger address, IPv6 addresses are written in colon hexadecimal notation in
which 128 bits are divided into 8 sections, each section with 16 bits (which
equals to 4 hexadecimal digits). The preferred form is x:x:x:x:x:x:x:x, where an
"x" can be 1 to 4 hexadecimal digits. It is less than 4 in cases when there are
consecutive series of zeros in the address as shown in the example below.
Example of IPv6 address:
2001:0000:0000:0000:0008:0800:200C:417A, which may be written also
as:
2001:0:0:0:8:800:200C:417A, and in compressed mode it will be:
2001::8:800:200C:417A (the use of "::" replaces one or more groups of
consecutive zeros in the IPv6 address, and can be used only once).
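A small sketch in Python (standard ipaddress module) confirming that these notations refer to the same 128-bit address:

import ipaddress

addr = ipaddress.ip_address("2001:0000:0000:0000:0008:0800:200C:417A")

print(addr.exploded)     # 2001:0000:0000:0000:0008:0800:200c:417a (full form)
print(addr.compressed)   # 2001::8:800:200c:417a ("::" compression applied once)
print(addr == ipaddress.ip_address("2001::8:800:200C:417A"))   # True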
There are three types of IPv6 addresses:
 Unicast: This is an identifier of a single interface in the network.
 Anycast: This type of address is used when an identifier is given for a set of
network interfaces, which may belong to different nodes. When a packet
is sent to a destination anycast address it should be delivered (by means
of routing) to the nearest node in the set (according to the routing
metrics). This type of addressing appears with IPv6 (in IPv4, unicast and
multicast addresses are defined, as well as local broadcast addresses,
which are not present in IPv6).
 Multicast: This is an identifier of a set of network interfaces. A packet
addressed to a multicast address will be delivered to all addresses in the
set.
IPv6 also allows CIDR notation (as it exists for IPv4), which is performed
by using IPv6 address and a binary prefix mask, as given in IPv6 notation in
Table 1.3.

Table 1.3. Types of IPv6 addresses

Address type                             Binary prefix            IPv6 notation
Unspecified                              00...00 (128 zeros)      ::/128
Loopback                                 00...01 (128 bits)       ::1/128
Multicast                                11111111 (8 ones)        FF00::/8
Link-local unicast                       1111111010 (10 bits)     FE80::/10
Global unicast (includes all anycast)    all other IPv6 addresses


IPv6 addressing architecture has one hierarchy layer more than IPv4. The
general format for global unicast IPv6 addresses has three parts: global routing
prefix, subnet ID, and interface ID (Figure 1.15). All global unicast IPv6
addresses (other than those with leading zeros, which in fact have embedded
IPv4 addresses in the lowest 32 bits) have 64-bit interface ID.

Figure 1.15 IPv6 global unicast address format

Link-local IPv6 addresses (Table 1.3) are used for so-called stateless
address autoconfiguration, where the 64 bits of the interface ID are obtained from
the interface's link address (e.g., using 16 zeroes concatenated with the 48-bit
Ethernet address of the given interface). IPv6 stateful address autoconfiguration
is provided with DHCP in the same manner as in IPv4.
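A small sketch in Python of this stateless example (the ipaddress module is standard; the MAC address below is a made-up example, and the 16-zero-bits-plus-MAC construction simply follows the example given in the text, not the modified EUI-64 method used in practice):

import ipaddress

mac = "00:1A:2B:3C:4D:5E"                   # assumed example Ethernet (MAC) address
mac_bits = int(mac.replace(":", ""), 16)    # the 48-bit link address as an integer
interface_id = mac_bits                     # 16 zero bits + 48 MAC bits = 64-bit interface ID
addr = (0xFE80 << 112) | interface_id       # FE80::/10 link-local prefix in the top bits
print(ipaddress.IPv6Address(addr))          # fe80::1a:2b3c:4d5e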
We can conclude that IPv6 is a well-defined protocol able to support
current and Future Internet functions. In the following, we give the impact of
using IPv6 on the Internet from various viewpoints.
 Enhanced service capabilities: IPv6 enables congestion/flow control using
additional QoS information such as the flow label, etc. The flow label field of the
IPv6 header enables IPv6 flow identification independently of transport
layer protocols. This means that new enhanced service capabilities can be
introduced more easily. IPv6 supports better mobility by removing the triangle
routing problem. IPv6 supports secure networking using embedded IPv6
security solutions such as ESP and AH.
 Any-to-any IP connectivity: IP connectivity will be one of the vital features
in order to cope with the increasing number of end users/devices. Using
globally routable IPv4 addresses to network millions of devices, such as
sensors, is not feasible. On the other hand, IPv6 offers the advantages of
localizing traffic with unique local addresses, while making some devices
globally reachable by assigning addresses which are scoped globally.
Therefore, the greatest potential of IPv6 will be realized in objects-to-objects
communications. IPv6 can satisfy this end-to-end principle of the Internet.
 Self-organization and service discovery using autoconfiguration: IPv6 can
provide autoconfiguration capability using the neighbour discovery protocol,
etc. Through linking together the IP layer and lower layers,
autoconfiguration easily enables self-organization and service discovery,
and reduces network management requirements.
 Multi-homing using IPv6 addressing: IPv6 can handle multiple
heterogeneous access interfaces and/or multiple IPv6 addresses through
single or multiple access interfaces. Multi-homing can provide redundancy and
fault tolerance.
1.2.3. TCP
The Transmission Control Protocol (TCP) is a standard protocol with STD
number 7. TCP is described by RFC 793 Transmission Control Protocol. Its
status is recommended, but in practice, every TCP/IP implementation that is not
used exclusively for routing will include TCP. TCP provides considerably more
facilities for applications than UDP, notably error recovery, flow control, and
reliability. TCP is a connection-oriented protocol, unlike UDP, which is
connectionless. Most of the user application protocols, such as Telnet and FTP,
use TCP. The two processes communicate with each other over a TCP
connection (InterProcess Communication - IPC), as shown in Figure 1.16.

Figure 1.16 TCP - Connection between processes (Processes 1 and 2 communicate
over a TCP connection carried by IP packets).

As noted above, the primary purpose of TCP is to provide a reliable logical
circuit or connection service between pairs of processes. It does not assume
reliability from the lower-level protocols (such as IP), so TCP must guarantee this
itself.
Moreover, the TCP can be characterized by the following facilities it
provides for the applications using it:
 Stream Data Transfer: From the application's viewpoint, TCP transfers a
contiguous stream of bytes through the network. The application does not
have to bother with chopping the data into basic blocks or datagrams.
TCP does this by grouping the bytes in TCP segments, which are passed
to IP for transmission to the destination. Also, TCP itself decides how to
segment the data and it can forward the data at its own convenience.
Sometimes, an application needs to be sure that all the data passed to
TCP has actually been transmitted to the destination. For that reason, a
push function is defined. It will push all remaining TCP segments still in
storage to the destination host. The normal close connection function also
pushes the data to the destination.
 Reliability: TCP assigns a sequence number to each byte transmitted and
expects a positive acknowledgment (ACK) from the receiving TCP. If the
ACK is not received within a timeout interval, the data is retransmitted.
Since the data is transmitted in blocks (TCP segments), only the
sequence number of the first data byte in the segment is sent to the
destination host. The receiving TCP uses the sequence numbers to
rearrange the segments when they arrive out of order, and to eliminate
duplicate segments.
 Flow Control: The receiving TCP, when sending an ACK back to the
sender, also indicates to the sender the number of bytes it can receive
beyond the last received TCP segment, without causing overrun and
overflow in its internal buffers. This is sent in the ACK in the form of the
highest sequence number it can receive without problems. This
mechanism is also referred to as a window-mechanism, and we discuss it
in more detail later in this chapter.
 Multiplexing: Achieved through the use of ports, just as with UDP.
 Logical Connections: The reliability and flow control mechanisms
described above require that TCP initializes and maintains certain status
information for each data stream. The combination of this status, including
sockets, sequence numbers and window sizes, is called a logical
connection. Each connection is uniquely identified by the pair of sockets
used by the sending and receiving processes.
 Full Duplex: TCP provides for concurrent data streams in both directions.

As its main mechanism for functioning, TCP uses the window principle, but
with a few differences:
Since TCP provides a byte-stream connection, sequence numbers are
assigned to each byte in the stream. TCP divides this contiguous byte stream
into TCP segments to transmit them. The window principle is used at the byte
level, that is, the segments sent and ACKs received will carry byte-sequence
numbers and the window size is expressed as a number of bytes, rather than a
number of packets.
The window size is determined by the receiver when the connection is
established and is variable during the data transfer. Each ACK message will
include the window size that the receiver is ready to deal with at that particular
time.
The sender's data stream can now be seen as in Figure 1.17:


Figure 1.17 Window principle applied to TCP.

Where:
A: Bytes that are transmitted and have been acknowledged.
B: Bytes that are sent but not yet acknowledged.
C: Bytes that can be sent without waiting for any acknowledgment.
D: Bytes that cannot be sent yet.
Remember that TCP will block bytes into segments, and a TCP segment
only carries the sequence number of the first byte in the segment.
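A tiny sketch in Python of the byte-level bookkeeping behind Figure 1.17, with made-up example numbers (the variable names snd_una, snd_nxt and window are illustrative, not taken from the text): sequence numbers count bytes, and the advertised window bounds how far beyond the last acknowledged byte the sender may go.

snd_una = 1000        # oldest byte sent but not yet acknowledged (start of region B)
snd_nxt = 2500        # next byte to be sent (start of region C)
window  = 4000        # window size advertised by the receiver in its last ACK

in_flight  = snd_nxt - snd_una            # region B: sent, awaiting ACK -> 1500 bytes
usable     = snd_una + window - snd_nxt   # region C: may be sent without waiting -> 2500 bytes
blocked_at = snd_una + window             # first byte of region D: cannot be sent yet -> 5000

print(in_flight, usable, blocked_at)      # 1500 2500 5000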
Furthermore, the TCP segment format is shown in Figure 1.18.

Figure 1.18 TCP - Segment format.

Where:
Source Port: The 16-bit source port number, used by the receiver to
reply.
Destination Port: The 16-bit destination port number.


Sequence Number: The sequence number of the first data byte in this
segment. If the SYN control bit is set, the sequence number is the initial
sequence number (n) and the first data byte is n+1.
Acknowledgment Number: If the ACK control bit is set, this field contains
the value of the next sequence number that the receiver is expecting to receive.
Data Offset: The number of 32-bit words in the TCP header. It indicates
where the data begins.
Reserved: Six bits reserved for future use; must be zero.
URG: Indicates that the urgent pointer field is significant in this segment.
ACK: Indicates that the acknowledgment field is significant in this
segment.
PSH: Push function.
RST: Resets the connection.
SYN: Synchronizes the sequence numbers.
FIN: No more data from sender.
Window: Used in ACK segments. It specifies the number of data bytes,
beginning with the one indicated in the acknowledgment number field that the
receiver (= the sender of this segment) is willing to accept.
Checksum: The 16-bit one's complement of the one's complement sum
of all 16-bit words in a pseudo-header, the TCP header, and the TCP data. While
computing the checksum, the checksum field itself is considered zero.
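As an illustration of the checksum computation, the sketch below implements the generic 16-bit one's complement Internet checksum in Python; the data passed in would be the pseudo-header plus the TCP header (with the checksum field zeroed) plus the payload. This is the standard algorithm, not code taken from any particular TCP implementation, and the sample input is arbitrary.

# One's complement checksum used by TCP (and, optionally, UDP).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # 16-bit big-endian words
    while total > 0xFFFF:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                       # one's complement of the sum

print(hex(internet_checksum(b"\x45\x00\x00\x3c")))   # arbitrary sample bytes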
TCP sends data in variable length segments. Sequence numbers are
based on a byte count. Acknowledgments specify the sequence number of the
next byte that the receiver expects to receive. Consider that a segment gets lost
or corrupted. In this case, the receiver will acknowledge all further well-received
segments with an acknowledgment referring to the first byte of the missing
packet. The sender will stop transmitting when it has sent all the bytes in the
window. Eventually, a timeout will occur and the missing segment will be
retransmitted. Figure 1.19 illustrates an example where a window size of
1500 bytes and segments of 500 bytes are used.
Before any data can be transferred, a TCP connection has to be
established between the two processes. One of the processes (usually the
server) issues a passive OPEN call, the other an active OPEN call. The passive
OPEN call remains dormant until another process tries to connect to it by an
active OPEN.
On the network, three TCP segments are exchanged, which is why this
process is also known as the three-way handshake. Note that the exchanged TCP
segments include the initial sequence numbers from both sides, to be used on
subsequent data transfers.
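The passive and active OPEN roles map directly onto the standard sockets API, as the hedged sketch below shows; the loopback address and port are arbitrary choices, and the three-way handshake itself is carried out by the operating system's TCP stack inside listen()/accept() and connect().

# Minimal sketch of passive vs. active OPEN using Python sockets.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5050))      # illustrative address and port
srv.listen(1)                       # passive OPEN: willing to accept connections

def accept_one():
    conn, peer = srv.accept()       # returns once the handshake has completed
    print("server: connection from", peer)
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5050))    # active OPEN: kernel sends SYN, gets SYN-ACK, replies ACK
print("client: connected")
cli.close()                         # FIN closes the data stream in one direction
t.join()
srv.close()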
Closing the connection is done implicitly by sending a TCP segment with
the FIN bit (no more data) set. Since the connection is full-duplex (that is, there
are two independent data streams, one in each direction), the FIN segment only
closes the data transfer in one direction. The other process will now send the
remaining data it still has to transmit and also ends with a TCP segment where
the FIN bit is set. The connection is deleted (status information on both sides)
once the data stream is closed in both directions.


Figure 1.19 TCP - Acknowledgment and retransmission process.

Figure 1.20 TCP three way handshake process.

One big difference between TCP and UDP is the congestion control
algorithm. The TCP congestion algorithm prevents a sender from overrunning the
capacity of the network (for example, slower WAN links). TCP can adapt the
sender's rate to network capacity and attempt to avoid potential congestion
situations. In order to understand the difference between TCP and UDP,
understanding basic TCP congestion control algorithms is very helpful.

Several congestion control enhancements have been added to TCP over the
years. This is still an active and ongoing research area, but modern
implementations of TCP contain four intertwined algorithms as basic Internet
standards (a simplified sketch of the window evolution is given after the list below):
Slow start
Congestion avoidance
Fast retransmit
Fast recovery
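The following sketch counts the congestion window in segments rather than bytes and collapses many details; it only illustrates the general shape of slow start and congestion avoidance, with a halved threshold when a loss is detected, and is not a faithful model of any particular TCP variant.

# Toy model of congestion-window evolution (one call per round-trip time).
def next_cwnd(cwnd, ssthresh, loss_detected):
    if loss_detected:                 # e.g. triple duplicate ACK (fast retransmit)
        ssthresh = max(cwnd // 2, 2)  # halve the threshold ...
        cwnd = ssthresh               # ... and continue in congestion avoidance
    elif cwnd < ssthresh:
        cwnd *= 2                     # slow start: exponential growth per RTT
    else:
        cwnd += 1                     # congestion avoidance: linear growth per RTT
    return cwnd, ssthresh

cwnd, ssthresh = 1, 16
for rtt in range(12):
    loss = (rtt == 7)                 # pretend a loss is detected in round 7
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    print(f"RTT {rtt}: cwnd={cwnd}, ssthresh={ssthresh}")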
For more details on the above TCP algorithms, see the references for TCP
[11], [18] and [21]. Finally, Figure 1.21 illustrates the TCP finite state
machine for a better understanding of the TCP mechanisms and states.

Figure 1.21 TCP Finite State Machine.

1.2.4. UDP
The User Datagram Protocol (UDP) is a standard protocol with STD
number 6. UDP is described by RFC 768 User Datagram Protocol. Its status is
recommended, but in practice every TCP/IP implementation that is not used
exclusively for routing will include UDP. The UDP is basically an application
interface to IP.
It adds no reliability, flow-control, or error recovery to IP. It simply serves
as a multiplexer/demultiplexer for sending and receiving datagrams, using ports
to direct the datagrams, as shown in Figure 1.22.

Figure 1.22. UDP - Demultiplexing based on ports.

UDP provides a mechanism for one application to send a datagram to
another. The UDP layer can be regarded as being extremely thin and
consequently has low overheads, but it requires the application to take
responsibility for error recovery and so on.
The applications sending datagrams to a host need to identify a target that
is more specific than the IP address, since datagrams are normally directed to
certain processes and not to the system as a whole. UDP provides this by using
ports on transport layer.
Each UDP datagram is sent within a single IP datagram. Although, the IP
datagram may be fragmented during transmission, the receiving IP
implementation will reassemble it before presenting it to the UDP layer. All IP
implementations are required to accept datagrams of 576 bytes, which means
that, allowing for maximum-size IP header of 60 bytes, a UDP datagram of 516
bytes is acceptable to all implementations. Many implementations will accept
larger datagrams, but this is not guaranteed. The UDP datagram has an 8-byte
header that is described in Figure 1.23.
Where:
Source Port: Indicates the port of the sending process. It is the port to
which replies should be addressed.
Destination Port: Specifies the port of the destination process on the
destination host.
Length: The length (in bytes) of this user datagram, including the header.


Checksum: An optional 16-bit one's complement of the one's complement
sum of a pseudo-IP header, the UDP header, and the UDP data.
The pseudo-IP header contains the source and destination IP addresses,
the protocol, and the UDP length.

Figure 1.23. UDP Datagram format.

The application interface offered by UDP is described in RFC 768. It
provides for:
The creation of new receive ports.
The receive operation that returns the data bytes and an indication of
source port and source IP address.
The send operation that has, as parameters, the data, source, and
destination ports and addresses.
The way this interface should be implemented is left to the discretion of
each vendor. It must be emphasized that UDP and IP do not provide guaranteed
delivery, flow control, or error recovery, so these must be provided by the
application.
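A minimal sketch of this interface using the standard Python socket API is shown below; the loopback address and port are illustrative. Note that nothing in the code guarantees that the datagram arrives: any reliability would have to be added by the application, as stated above.

# Minimal UDP sender/receiver sketch on the loopback interface.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 9999))               # creates a receive port

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", 9999))   # one datagram in one IP packet

data, (src_ip, src_port) = recv.recvfrom(2048)   # returns data plus source address/port
print(f"received {data!r} from {src_ip}:{src_port}")

send.close(); recv.close()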
Standard applications using UDP include:
Trivial File Transfer Protocol (TFTP)
Domain Name System (DNS) name server
Remote Procedure Call (RPC), used by the Network File System (NFS)
Simple Network Management Protocol (SNMP)
Lightweight Directory Access Protocol (LDAP)
At the end of this chapter, Table 1.4 gives a comparison between
the two dominant transport protocols in today's networks, i.e. UDP and TCP.
Table 1.4 TCP vs UDP performances.


1.3. Internet routing and network interconnection

The Internet has become much more than just a network used to access
information. In the past decades, new important applications have emerged, such
as electronic commerce, voice over IP (VoIP), IPTV, WWW (or Web services),
social networking, and many more. In addition, many applications such as online
banking or online trading are business critical and time sensitive. As a result,
users and businesses that rely on the Internet infrastructure require a high
degree of reliability from the operators of the network. Reliability encompasses
the ability to offer the network users high-bandwidth and low-latency service in
the presence of accidental hardware failures or planned maintenance, and ability
to deliver data securely even in the presence of malicious attacks on the Internet
infrastructure.
Network routing, the selection of paths to destinations, is perhaps one of
the most important features of the Internet that determines the performance,
security, and reliability of the network.
The routing algorithm defines which network path, or paths, are allowed
for each packet. Ideally, the routing algorithm supplies shortest paths to all
packets such that traffic load is evenly distributed across network links to
minimize contention.
In an IP packet-based network, two successive packets of the same user
pair may travel along different routes, and a routing decision is necessary for
each individual packet (see Figure 1.24). In a virtual circuit network, a routing
decision is made when each virtual circuit is set up. The routing algorithm is used
to choose the communication path for the virtual circuit. All packets of the virtual
circuit subsequently use this path up to the time that the virtual circuit is either
terminated or rerouted for some reason (see Figure 1.25).
However, some paths provided by the network topology may not be
allowed in order to guarantee that all packets can be delivered, no matter what
the traffic behavior. Paths that have an unbounded number of allowed
nonminimal hops from packet sources, for instance, may result in packets never
reaching their destinations. This situation is referred to as livelock. Likewise,
paths that cause a set of packets to block in the network forever waiting only for
network resources (i.e., links or associated buffers) held by other packets in the
set also prevent packets from reaching their destinations. This situation is
referred to as deadlock. As deadlock arises due to the finiteness of network
resources, the probability of its occurrence increases with increased network
traffic and decreased availability of network resources. For the network to
function properly, the routing algorithm must guard against this anomaly, which
can occur in various forms, for example routing deadlock, request-reply
(protocol) deadlock, and fault-induced (reconfiguration) deadlock. At the same
time, for the network to provide the highest possible performance, the routing
algorithm must be efficient, allowing as many routing options for packets as there
are paths provided by the topology, in the best case. The routing in a network
typically involves a rather complex collection of algorithms that work more or less
independently and yet support each other by exchanging services or information.
The complexity is due to a number of reasons. First, routing requires coordination
between all the nodes of the subnet rather than just a pair of modules as, for
example, in data link and transport layer protocols. Second, the routing system
must cope with link and node failures, requiring redirection of traffic and an
update of the databases maintained by the system. Third, to achieve high
performance, the routing algorithm may need to modify its routes when some
areas within the network become congested.
The two main functions performed by a routing algorithm are the selection
of routes for various origin-destination pairs and the delivery of messages to their
correct destination once the routes are selected. The second function is
conceptually straightforward using a variety of protocols and data structures
(known as routing tables). The focus will be on the first function (selection of
routes) and how it affects network performance.

Figure 1.24 Overview of Routing in a datagram network.


Figure 1.25 Overview of Routing in a virtual circuit network.

There are two main performance measures that are substantially affected
by the routing algorithm: throughput (quantity of service) and average packet
delay (quality of service). Routing interacts with flow control in determining these
performance measures by means of a feedback mechanism shown in Figure
1.26 (As good routing keeps delay low, flow control allows more traffic into the
network).

Figure 1.26 Interaction of routing and flow control.

When the traffic load offered by the external sites to the subnet is
relatively low, it will be fully accepted into the network, that is,
throughput = offered load
When the offered load is excessive, a portion will be rejected by the flow
control algorithm and
throughput = offered load - rejected load
The traffic accepted into the network will experience an average delay per
packet that will depend on the routes chosen by the routing algorithm. However,
throughput will also be greatly affected (if only indirectly) by the routing algorithm
because typical flow control schemes operate on the basis of striking a balance

between throughput and delay (i.e., they start rejecting offered load when delay
starts getting excessive). Therefore, as the routing algorithm is more successful
in keeping delay low, the flow control algorithm allows more traffic into the
network. While the precise balance between delay and throughput will be
determined by flow control, the effect of good routing under high offered load
conditions is to realize a more favorable delay-throughput curve along which flow
control operates, as shown in Figure 1.27.
There are a number of ways to classify routing algorithms. One way is to
divide them into centralized and distributed. In centralized algorithms, all route
choices are made at a central node, while in distributed algorithms, the
computation of routes is shared among the network nodes with information
exchanged between them as necessary.

Figure 1.27 Delay-throughput operating curves for good and bad routing.

Note that this classification relates mostly to the implementation of an
algorithm, and that a centralized and a distributed routing algorithm may be
equivalent at some level of mathematical abstraction.
Another classification of routing algorithms relates to whether they change
routes in response to the traffic input patterns. In static routing algorithms, the
path used by the sessions of each origin-destination pair is fixed regardless of
traffic conditions. It can only change in response to a link or node failure. This
type of algorithm cannot achieve a high throughput under a broad variety of
traffic input patterns. It is recommended for either very simple networks or for
networks where efficiency is not essential. Most major packet networks use some
form of adaptive routing where the paths used to route new traffic between
origins and destinations change occasionally in response to congestion. The idea
here is that congestion can build up in some part of the network due to changes
in the statistics of the input traffic load. Then, the routing algorithm should try to
change its routes and guide traffic around the point of congestion. There are
many routing algorithms in use with different levels of sophistication and
efficiency. This variety is partly due to historical reasons and partly due to the
diversity of needs in different networks.
In this section, regarding the Internet routing and network interconnection,
generally, we divide the routing into Interdomain and Intradomain routing.


Interdomain routing concerns the problem of calculating the paths across
domains that the traffic needs to traverse to reach the destination.
On the other hand, intradomain routing determines the path inside a
single administrative domain that the traffic needs to take to reach the
destination. The two problems are very different. Intradomain routing is done in a
single network and the owner of the network has a full control and information
about the network topology, load, configuration, etc. Interdomain routing
concerns exchanging traffic between separate networks whose owners, who are
business competitors, do not have full information about the other networks. For
this reason, interdomain and intradomain routing rely on different routing
protocols and face different challenges.
The Internet consists of tens of thousands of autonomous systems (ASes) that are independently owned and operated. To achieve global connectivity,
AS-es exchange information about reachability. This information exchange is
facilitated by the Border Gateway Protocol (BGP) [22-24]. BGP is a path vector
protocol, that is, when an AS uses BGP to announce a route to its neighbor, the
announcement contains a list of all other AS-es that the path traverses before
reaching the destination. The adjacent AS-es exchange the BGP messages
between their edge routers, which are also sometimes referred to as BGP
speakers. One example of BGP routing and communication is illustrated in
Figure 1.28.

Figure 1.28 The usage of a BGP (Border Gateway Protocol) routing protocol.

AS-es are typically Internet Service Providers who have business
relationships with their neighboring AS-es. These business relationships
determine any transit fees. While business relationships are confidential, a model
that is believed to correspond to reality classifies business relationships into two
categories: customer-provider and peer-peer. In a customer-provider relationship,
the customer has to pay the provider for all traffic that traverses the link between
the AS-es, no matter what the direction of the traffic. In peer-peer relationships,
the peers forward traffic for each other free of charge. The nature of business
relationships determines which routes are preferred by AS-es. For example,
given the choice between a customer, peer and provider route, the AS will prefer
the customer route which is the most profitable. Business relationships also play
a role even after a BGP speaker selects the single route to the destination that it
prefers - a BGP speaker will not announce a provider route to another provider
as it would have to pay to both providers for the transit traffic. For this reason ASes need the flexibility to choose among multiple paths, and the option to
announce the selected path to an arbitrary subset of their neighbors. BGP allows
such flexibility - if an AS learns about multiple routes from its neighbors, it can
apply an arbitrary policy to choose the preferred path, and decide which
neighbors to announce the path to.
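The route-preference behaviour described above can be sketched as a toy decision function; the relationship ranking and AS numbers below are illustrative assumptions only, and real BGP implementations use a much longer decision process (local preference, MED, IGP cost, and so on).

# Simplified BGP-style route selection: prefer customer over peer over
# provider routes, then break ties on AS-path length. Data is illustrative.
RELATION_RANK = {"customer": 0, "peer": 1, "provider": 2}

def best_route(routes):
    """routes: list of (relationship, as_path) tuples for one prefix."""
    return min(routes, key=lambda r: (RELATION_RANK[r[0]], len(r[1])))

routes_to_prefix = [
    ("provider", ["AS7018", "AS3356", "AS64500"]),
    ("peer",     ["AS6939", "AS64500"]),
    ("customer", ["AS64501", "AS64502", "AS64500"]),
]
print(best_route(routes_to_prefix))   # the customer route wins despite a longer AS path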
BGP is a protocol based on trust. When a route announcement is
received, autonomous systems cannot verify whether a path announced by a
neighboring BGP speaker corresponds to an existing physical path, and whether
that path is available to the neighbor. For this reason, BGP is extremely
vulnerable to malicious attacks where an attacker compromises a router to make
false routing announcements and to misconfigurations where a speaker
mistakenly announces an incorrect route.
Network operators need intradomain routing protocols that ensure network
connectivity even as the network topology changes due to link additions,
hardware failures, or during planned equipment maintenance. In addition,
network operators desire to balance the load in their networks to avoid
congestion. One protocol satisfying these goals is Open Shortest Path First
(OSPF) [25]. OSPF is a link state routing protocol, i.e., a protocol that collects
information from routers about their connectivity (the state of their links). Then,
the routers construct a graph representing the network, and traffic is sent on the
shortest path according to link weights that were pre-assigned to each link. If a
router finds multiple shortest paths, traffic is split evenly on the outgoing links.
The link state information is maintained by each router and if it changes, it
is flooded in the network. The benefits of using OSPF include the ability to react
to link failures - when a link fails the information is immediately flooded in the
network and all of the routers can compute new shortest paths that avoid the
failed link. Furthermore, proper link weight assignment allows load balancing.
However, OSPF only allows the traffic to be split across paths of the same minimal cost.
This approach does not allow much flexibility, and if the same link weights are
used before and after a failure, the performance may be suboptimal. Moreover,
finding appropriate link weights is computationally hard.
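The core computation each OSPF router performs once it has the full link-state database is a shortest-path calculation over the configured link weights. The sketch below runs Dijkstra's algorithm on a small illustrative topology; it is not OSPF code, only the path computation that OSPF relies on.

# Dijkstra's shortest-path computation over pre-assigned link weights.
import heapq

def shortest_paths(graph, source):
    """graph: {node: {neighbour: weight}}; returns best-path cost to every node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

topology = {                                  # illustrative four-router topology
    "R1": {"R2": 1, "R3": 5},
    "R2": {"R1": 1, "R3": 2, "R4": 4},
    "R3": {"R1": 5, "R2": 2, "R4": 1},
    "R4": {"R2": 4, "R3": 1},
}
print(shortest_paths(topology, "R1"))         # e.g. R1 -> R4 costs 4, via R2 and R3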
Multiprotocol Label Switching (MPLS) [26] is a routing protocol that can be
used to provide control over which flows traverse which paths. MPLS attaches
labels to data packets, and forwarding decisions are made based purely on the
content of the label. When a packet is received by a router, a label swap
operation is performed. The old label is popped and another label is pushed on
top of the label stack, and the packet is forwarded to the appropriate neighbor.


An advantage of MPLS is that it can be applied to all data packets, such as ATM,
SONET or Ethernet packets, irrespective of the lower-layer details of the
corresponding protocols and technologies. MPLS can be used in conjunction with
any standard IP routing algorithm to determine the routes that should be used.
MPLS is often used in conjunction with OSPF and RSVP [27]. OSPF is used to
calculate the desired set of routes, as described above, and the Resource
Reservation Protocol (RSVP) is then used to configure the routers on the end-to-end paths. When a link fails, several mechanisms can be used to recover from
the failure. Local path protection mechanisms are used to redirect traffic from a
failed link onto an alternate path that connects the two link end points. An example
of local path protection is MPLS Fast Reroute.
The router that manages the backup path is called the Point of Local
Repair (PLR), and the router where the backup path merges with the original
path is called the Merge Point (MP). The primary benefit of Fast Reroute is its
speed because the PLR can start forwarding packets on the pre-calculated
backup path immediately after the failure is detected. Unfortunately, Fast Reroute
often does not provide adequate performance because it can cause congestion
in the neighborhood of the failed link. A more flexible mechanism that allows
some end-to-end path restructuring is needed to balance the load more evenly.
For this reason, network operators are often forced to perform end-to-end route
re-optimization after a failure event.


1.4. Fundamental Internet technologies (DNS, DHCP/DHCPv6)
Today's Internet cannot be imagined without the most fundamental
Internet technologies: the Domain Name System (DNS) and the Dynamic Host
Configuration Protocol (DHCP). The purpose of DNS is to translate domain
names into IP addresses. Because domain names are alphabetic, they're easier
to remember. As we all know, the Internet is really based on IP addresses. Every
time you use a domain name, therefore, a DNS service must translate the name
into the corresponding IP address (e.g., the domain name www.example.com
might translate to 198.149.144.52). Moreover, DHCP is a network protocol
that enables a server to automatically assign an IP address to a computer (host)
from a defined range of numbers (i.e., a scope) configured for a given network.
More details about these fundamental Internet technologies are given in the
following subsections.
1.4.1. DNS
Every time you visit a website, you are interacting with the largest
distributed database in the world. This massive database is collectively known as
the DNS, or the Domain Name System. Without it, the Internet as we know it
would be unable to function. The work that the DNS does happens so seamlessly
and instantaneously that you are usually completely unaware that it's even
happening. The only time that you'll get an inkling of what the DNS is doing is
when you're presented with an error after trying to visit a website.
Moreover, the DNS provides a name lookup facility that is similar to a
standard telephone directory. To perform lookups, DNS relies on a distributed
system of name servers and a standardized language to query these servers.
Each name server stores a portion of the overall name space, and can contact
other name servers to lookup names outside its name space.
The three main components of a DNS system are:
 Domain Name Space: defines the overall naming structure of the
Internet
 Name Server: maintains a portion of the domain name spaces,
resolves lookups, and maintains a cache
 Domain Name Resolution: maps a domain name to an IP address
The domain name space defines the overall naming structure of the Internet.
Domain name space is defined as a tree structure, with the root on the top, as
given in Figure 1.29. Each domain name consists of a sequence of so-called
labels, separated with dots. Each label corresponds to a separate node in the
tree (Figure 1.29). The domain name is written (or read) starting from the label of
a node in the tree and going up to the root (which is a null label and therefore it is
not written in the domain name, for the root servers locations worldwide see
Figure 1.30). For example, in domain name www.example.com one may observe
that it consists of three labels divided by dots. The label on the right side is

always higher in the name hierarchy. So, in the given example the top-level
domain is the domain "com".
The domain name of any node in the tree is the sequence of node labels
leading from that node all the way up to the root domain.

Figure 1.29 Domain name space hierarchy.

Figure 1.30 The world root servers locations.

The top-level node (appearing farthest to the right) identifies the
geography or purpose (for example, the nation covered by the domain, such as
.uk, or a company category, such as .com). The second-level node (appearing
second from the right) identifies a unique place within the top-level domain.
Domain names can contain up to 255 characters consisting of: characters
A to Z, 0 to 9, and/or -; 63 characters per node; and up to 127 node levels. To
ensure that each node is uniquely identified, DNS requires that sibling nodes nodes that are children of the same parents - be uniquely named.
As shown in the diagram presented in Figure 1.31, the name space tree is
sub-divided into zones. A zone consists of a group of linked nodes served by an
authoritative DNS name server (the final authority in providing information about
a set of domains). A zone contains domain names starting at a particular point in
the tree (Start Of Authority) to the end node or to a point in the tree where
another host has authority for the names.

Figure 1.31 Illustration of the DNS zones.

Furthermore, each node in the tree has one or more resource records
(RRs), which hold information about the domain name (for instance, the IP
address of www.incognito.com).
RRs can store a large variety of information about a domain: IP address,
name server, mail exchanger, alias, hostname, geo-location, service discovery,
certificates, and arbitrary text.
RRs contain information such as:
Start-of-Authority (SOA) Record


When a zone file indicates to a querying server that this is the authoritative
record for this domain, it effectively tells the query, "You Have Arrived". The SOA
contains the following data fields:
Serial Number: indicates number of changes to the zone file. The
number increases as the file is updated.
Refresh: tells the secondary name server how often to check whether its data
needs updating
Retry: tells the server how long to wait before retrying a failed refresh
Expire: tells how long the data can sit before it is too old to be valid
Time to Live: tells other servers how long to cache the data they have
downloaded
Name Server (NS) Record
An NS record is a record that indicates which computer is to be used to
retrieve information about the domain name space for a particular domain name.
A Host Name Server contains information about your computer and supplies IP
addresses that are associated with it.
Mail eXchange (MX) Record
MX records specify the mail server address for the domain name. This
record allows email addressed to a specific domain to be delivered to the mail
server that is responsible for it. The mail server is a host address. There can be a
number of mail servers associated with a MX record. Each server has a priority
set for mail receipt.
Address (A) Record
This record tells the name server the correct IP address for the domain.
The name server that is authoritative for the domain contains all the information
necessary to resolve this name.
Canonical (C-NAME) Record
CNAME records provide name-to-name mapping for domain name aliasing.
The difference between CNAME and A records is that a CNAME resolves to
another domain name, which then resolves to an IP address.
Furthermore, the Name Servers (NSs) generally store complete
information about a zone. There are two types of name servers: primary and
secondary. Every zone MUST have its data stored on both a primary and a
secondary name server.
The Primary name servers hold authoritative information about set of
domains, as well as cached data about domains previously requested from other
servers. Each name server stores a portion of the overall name space (a zone
file), and can contact other name servers to lookup names outside its name
space. The name server listens for DNS queries, and if the queried name is in
the local zone data or cache, responds immediately with an answer. If the name
isn't in the local database or cache, the server uses its resolver to forward the
query to other authoritative name servers.
If domain data changes, the primary name server is responsible for
incrementing the Serial Number field in the SOA record in order to signal the
change to secondary name servers.


On the other side, the Secondary name servers can download a copy of
zone information from a primary name server using a process called a zone
transfer. Zone transfers allow secondary name servers to download complete
copies of zones. Secondary name servers perform zone transfers according to
the Expire Time parameter in the SOA record.
In order to resolve the IP address of a domain name, a name server works
on the domain name segment by segment, from highest-level domain appearing
on the right, to lowest-level domain on the left. The resolver usually has to query
several servers (in a recursive or iterative way) that are authoritative for various
portions of the domain name to find all the necessary information.
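From an application's point of view, the whole machinery above is hidden behind a simple lookup call. The sketch below uses Python's standard resolver interface; the recursive or iterative queries to authoritative servers are performed by whatever resolver the host is configured to use.

# Minimal name-lookup sketch using the system resolver.
import socket

def resolve(name):
    addresses = set()
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(name, None):
        addresses.add(sockaddr[0])        # collect IPv4 and/or IPv6 addresses
    return sorted(addresses)

print(resolve("www.example.com"))         # prints whatever A/AAAA records the resolver returns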
One of the inherent abilities of DNS is the ability to store recently retrieved
domain names, a process called caching. This process is useful for speeding
up the resolution process. Each time a name server learns the authoritative
name servers for a zone and the addresses of those servers, it can cache this
information to help speed-up subsequent queries. Thus, the next time a resolver
queries for the same domain name, the name server is able to respond
immediately because the answer is stored in its cache.
Finally, the DNS system is a fundamental piece of the Internet framework.
The hierarchical structure of the DNS name space, worldwide network of name
servers, and efficient local caches allow broadband operators to provide high-speed, user-friendly Internet communications.
1.4.2. DHCP/DHCPv6
The Dynamic Host Configuration Protocol (DHCP) is a standardized
network protocol used on the Internet for dynamically distributing network
configuration parameters, such as IP addresses (IPv4 and IPv6) for interfaces
and services. With DHCP, computers request IP addresses and
networking parameters automatically from a DHCP server, reducing the need for
a network administrator or a user to configure these settings manually.
The purpose of DHCP is to provide the automatic (dynamic) allocation of
IP client configurations for a specific time period (called a lease period) and to
eliminate the work necessary to administer a large IP network.
DHCP was created by the Dynamic Host Configuration Working Group of
the Internet Engineering Task Force (IETF: a volunteer organization which
defines protocols for use on the Internet).
When connected to a network, every computer must be assigned a unique
address. However, when adding a machine to a network, the assignment and
configuration of network (IP) addresses has required human action. The
computer user had to request an address, and then the administrator would
manually configure the machine. Mistakes in the configuration process are easy
for novices to make, and can cause difficulties for both the administrator making
the error as well as neighbors on the network. Also, when mobile computer users
travel between sites, they have had to relive this process for each different site
from which they connected to a network. In order to simplify the process of
adding machines to a network and assigning unique IP addresses, there is a
need to automate the task. The introduction of DHCP alleviated the
problems associated with manually assigning TCP/IP client addresses. Network
administrators have quickly appreciated the importance, flexibility and ease-of-use
offered by DHCP. Figure 1.32 illustrates how DHCP works through
the four well-known DHCP states. When a client needs to start up TCP/IP
operations, it broadcasts a request for address information. The DHCP server
receives the request, assigns a new address for a specific time period (i.e. a
lease period) and sends it to the client together with the other required
configuration information. This information is acknowledged by the client, and
used to set up its configuration. The DHCP server will not reallocate the address
during the lease period and will attempt to return the same address every time
the client requests an address. The client may extend its lease with subsequent
requests, and may send a message to the server before the lease expires telling
it that it no longer needs the address so it can be released and assigned to
another client on the network. Moreover, the detailed state diagram of the DHCP
protocol is given in Figure 1.33. Also, the procedures involved in DHCP lease
renewal are illustrated in Figure 1.34.

Figure 1.32 Illustration of the DHCP states.


Figure 1.33 Illustration of the DHCP client finite state machine.

Figure 1.34 Illustration of the DHCP renewing.

DHCP has several major advantages over manual configurations. Each
computer gets its configuration from a "pool" of available numbers automatically
for a specific time period (called a leasing period), meaning no wasted numbers.
When a computer has finished with the address, it is released for another
computer to use. Configuration information can be administered from a single
point. Major network resource changes (e.g. a router changing address) require
only that the DHCP server be updated with the new information, rather than every
system. Furthermore, the DHCP message format is given in Figure 1.35, together
with Table 1.5, where the DHCP message types are listed. The
DHCPDISCOVER is broadcast by a client to find available DHCP servers.
DHCPOFFER is the response from a server to a DHCPDISCOVER and is used
to offer an IP address and other parameters. The DHCPREQUEST is the
message from a client to servers that does one of the following: requests the
parameters offered by one of the servers and declines all other offers;
verifies a previously allocated address after a system or network change (a
reboot for example); or requests the extension of a lease on a particular
address.


Figure 1.35 Illustration of the DHCP message format.


Table 1.5 Types of DHCP messages.

DHCPACK is the Acknowledgement from server to client with parameters,
including IP address. DHCPNACK is a negative acknowledgement from server to
client, indicating that the client's lease has expired or that a requested IP address
is incorrect. The DHCPDECLINE is a message from client to server indicating
that the offered address is already in use. DHCPRELEASE is a message from
client to server canceling remainder of a lease and relinquishing network
address. DHCPINFORM is a message from a client that already has an IP
address (manually configured for example), requesting further configuration
parameters from the DHCP server.
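The DISCOVER/OFFER/REQUEST/ACK exchange can be summarised as a simple message sequence, sketched below purely for illustration; real clients and servers exchange binary DHCP messages over UDP ports 67 and 68, and the addresses and lease time shown are made up.

# Toy walk-through of the DORA exchange (no real packets are sent).
def dhcp_exchange(server_pool, lease_seconds=3600):
    print("client  -> broadcast : DHCPDISCOVER")
    offered = server_pool.pop(0)                      # server picks a free address
    print(f"server  -> client    : DHCPOFFER   {offered}")
    print(f"client  -> broadcast : DHCPREQUEST {offered}")
    print(f"server  -> client    : DHCPACK     {offered}, lease {lease_seconds}s")
    return offered

pool = ["192.168.1.100", "192.168.1.101"]             # illustrative address pool
leased = dhcp_exchange(pool)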
Moreover, someone might wonder why research needs to be done on DHCPv6,
specifically on how to use it for passive fingerprinting, since we can already do this
with DHCPv4, TCP, and many other protocols. While that is all true, DHCPv4 is,
at least in theory, on its way out the door. That probably won't happen for years,
but sooner or later all devices will be using IPv6. When this happens,
DHCPv6 clients will be the new norm and some of the old passive fingerprinting
methods will no longer be applicable.
As we mentioned previously (in the second chapter within this module),
the IPv6 clients have a link-local address that allows them to communicate with
each other on the same link only (same side of the router). This is what most
Linux IPv6 clients do by default: they self-assign an address and, unless
configured to do so after the fact, do not utilize DHCPv6 at all. This does help
keep network chatter to a minimum, but it may leave clients unable to reach
everything they need, as they could be missing important pieces of configuration,
such as the DNS servers that may reside on a different network.
DHCPv6 works a lot like DHCPv4 did, especially from a fingerprinting
perspective. The basic concept is to find and request an IPv6 address and then
have the ability to ask for other pieces of information you may need.
The main difference between the DHCPv6 and DHCPv4 Message types is
given in Table 1.6.
Table 1.6 DHCPv6 vs DHCPv4 Message types.

DHCPv6 introduced, or more precisely replaced, the message types from
DHCPv4. These new message types are much more descriptive about what they
mean. Some of these we will use or care about; others, due to the complexity of
parsing the data from the wire, have been ignored for now.
However, the details of DHCPv6 are very, very different from DHCP in
IPv4.
No baggage.
DHCP is based on an earlier protocol called BOOTP. This packet
layout is wasteful in a lot of cases.
A lot of the options turn out to be not useful, or not as useful as they
can be, but it is hard to change a protocol with such a large installed base.
There are a lot of "tweaks" that implementations need in order to be
compatible with the buggy clients. DHCPv6 leaves all this behind.
IPv6 is better.
Two features of IPv6 greatly improve DHCPv6:
IPv6 hosts have "link-local addresses". Every network interface has a
unique address, that can be used to send and receive on the link only. IPv6 hosts
can use this to send requests for "real" addresses. IPv4 hosts have to use
system-specific hacks to work before they have an address.


All IPv6 systems support multicasting. All DHCPv6 servers register that
they want to receive DHCPv6 multicast packets. This means the network knows
where to send them. In IPv4, clients broadcast their requests, and networks do
not know how far to send them.
One exchange configures all interfaces. A single DHCPv6 request may
include all interfaces on a client. This allows the server to offer addresses to all
interfaces in a single exchange. Each interface may also have different options.
Defines address allocation types.
DHCPv6 allows normal address allocation, as well as temporary address
allocation. In a sense, all addresses are "temporary", but in this case it
refers to the IPv6 privacy addresses. DHCPv6 does not have as many options
defined as DHCP for IPv4, but there are quite a few.
You can find these by searching the IETF RFCs, and they include:
 IPv6 address, IPv6 prefix
 Rapid commit
 Vendor-specific options extension
 SIP servers
 DNS servers & search options
 NIS configuration
 SNTP servers
Finally, the DHCPv6 has a place in IPv6 networks. It is significantly
improved over DHCP in IPv4, and is useful either instead of or in addition to
stateless autoconfiguration. The software is there today, and getting better and
better.


1.5. World Wide Web (WWW)


The World Wide Web (WWW, or shorter, the Web) is the most popular and
widespread service on the Internet. In basic terms, the WWW is a global
information system of interlinked hypertext documents (see
Figure 1.36) placed on millions of servers and clients that are accessed via the
Internet. Individual document pages on the WWW are called web pages and are
accessed with a software application running on the user's computer, commonly
called a web browser. Web pages may contain text, images, videos, and other
multimedia components, as well as web navigation features consisting of
hyperlinks.

Figure 1.36 Illustration of the basic hypertext model enhanced by searches.

The father of WWW is Tim Berners-Lee, a British computer scientist and
former CERN employee. On 12 March 1989, Berners-Lee wrote a proposal for
what would eventually become the World Wide Web. The 1989 proposal was
meant for a more effective CERN communication system but Berners-Lee also
realised the concept could be implemented throughout the world. Berners-Lee
and Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext
"to link and access information of various kinds as a web of nodes in which the
user can browse at will", and Berners-Lee finished the first website in December
of that year. The first test was completed around 20 December 1990 and
Berners-Lee reported about the project on the newsgroup alt.hypertext on 7
August 1991.
Moreover, the web services (SOAP, XML, UDDI, WSDL etc.) are predicted
to be the latest technological change that will revolutionize business. Technology
companies and visionaries have started their rhetoric. An entirely new era
appears to be emerging where anyone can publish their services using standard
Internet protocols and consumers around the world can easily combine these
services in any fashion to provide higher-order services. These service network
chains will solve every business problem in the world and, in the process,
generate revenues for everyone involved in the chain. Moreover, today's Web
has terabytes of information available to humans, but hidden from computers. It
is a paradox that information is stuck inside HTML pages, formatted in esoteric
ways that are difficult for machines to process. The so-called Web 3.0 (and other
Future web applications), which is likely to be a pre-cursor of the real semantic
web, is going to change this. What we mean by Web 3.0 is that major web sites
are going to be transformed into web services - and will effectively expose their
information to the world.
In order to understand the impact of nowadays and future Web services,
we need to look at the way the real world services are operating. Web services
technically allow the same mix and match approach of the real world services.
However, there are some major differences to keep in mind when
comparing the ultimate user of a real-world service (say, a human being) with
the user of a web service (say, a computer). It should be emphasized that the
ultimate end user may still be a human being even for web services, but it is
another computer that aggregates the services before presenting them to the
end user of the web services.
 The lowermost denominator, the human being, ultimately limits a
real world service. With all the technologies and tools, still the
human being has limited mental space, time and energy to do
things locally. Even if I can save couple of dollars by doing the
airline booking by personally checking all the airlines and all the
deals instead of relying on a travel agent, I am never going to do it.
But a computer is different. It has the space, processing power and
determination to do it. So, by doing a service locally if the machine
can save something, that will be the normal way of implementation.
 The processing power and storage capacity of the computer is
steadily increasing. So an implementation that we think needs
remote help today will very well be implemented locally tomorrow.
In a computer with less than 1 MB of RAM, storing the entire dictionary
in memory may not be feasible, so a service to access it remotely may
not sound that crazy. But with today's processors and hard
disk capacities, local storage of the dictionary will be the most
common implementation.
The WWW is essentially a huge client-server system with millions of
servers distributed worldwide. Each server maintains a collection of
documents; each document is stored as a file (although documents can
also be generated on request). A server accepts requests for fetching a
document and transfers it to the client. In addition, it can also accept requests for
storing new documents. The simplest way to refer to a document is by means of
a reference called a Uniform Resource Locator (URL). A URL is comparable
to an IOR in CORBA and a contact address in Globe. It specifies where a
document is located, often by embedding the DNS name of its associated
server along with a file name by which the server can look up the document
in its local file system. Furthermore, a URL specifies the application-level
protocol for transferring the document across the network.
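The pieces a URL embeds can be seen with a short sketch using Python's standard library; the URL itself is just the illustrative name used elsewhere in this module, with a made-up path.

# Splitting a URL into the parts described above.
from urllib.parse import urlparse

url = urlparse("http://www.example.com/docs/index.html")
print(url.scheme)   # 'http'              -> application-level protocol to use
print(url.netloc)   # 'www.example.com'   -> DNS name of the associated server
print(url.path)     # '/docs/index.html'  -> file name the server looks up locally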


A client interacts with Web servers through a special application known as
a browser. A browser is responsible for properly displaying a document. Also, a
browser accepts input from a user mostly by letting the user select a reference to
another document, which it then subsequently fetches and displays. This leads to
the overall organization shown in Figure 1.37.

Figure 1.37 The overall organization of the Web.

Most Web documents are expressed by means of a special language
called HyperText Markup Language (HTML). Being a markup language means
that HTML provides keywords to structure a document into different sections. For
example, each HTML document is divided into a heading section and a main
body. HTML also distinguishes headers, lists, tables, and forms. It is also
possible to insert images or animations at specific positions in a document.
Besides these structural elements, HTML provides various keywords to
instruct the browser how to present the document. For example, there are
keywords to select a specific font or font size, to present text in italics or
boldface, to align parts of text, and so on. HTML is no longer what it used to be: a
simple markup language. By now, it includes many features for producing glossy
Web pages. One of its most powerful features is the ability to express parts of a
document in the form of a script. To give a simple example, consider the HTML
document shown in Figure 1.38.

Figure 1.38 A simple Web page embedding a script written in JavaScript.


Hypertext alone is not practical when dealing with large sets of structured
information such as those contained in databases: adding a search facility to the
hypertext model gives W3 its full power (Figure 1.36). Indexes are special
documents which, rather than being read, may be searched. To search an index,
a reader gives keywords (or other search criteria). The result of a search is
another document containing links to the documents found.
The architecture of WWW (Figure 1.39) is one of browsers (clients) which
know how to present data but not what its origin is, and servers which know how
to extract data but are ignorant of how they will be presented. Servers and clients
are unaware of the details of each other's operating system quirks and exotic
data formats.

Figure 1.39 Architecture of WWW.

All the data in the Web is presented with a uniform human interface
(Figure 1.40). The documents are stored (or generated by algorithms) throughout
the Internet by computers with different operating systems and data formats.
Following a link from the SLAC home page (the entry into the Web of a SLAC
user) to the NIKHEF telephone book is as easy and quick as following the link to
a SLAC Working Note.
All communication in the Web between clients and servers is based on the
Hypertext Transfer Protocol (HTTP). HTTP is a relatively simple client-server
protocol; a client sends a request message to a server and waits for a response
message. An important property of HTTP is that it is stateless. In other words, it
does not have any concept of open connection and does not require a server to
maintain information on its clients.


Figure 1.40 Unification for the user.

HTTP is based on TCP. Whenever a client issues a request to a server, it
sets up a TCP connection to the server and sends its request message along
that connection. The same connection is used for receiving the response. By
using TCP as its underlying protocol, HTTP need not be concerned about lost
requests and responses. A client and server may simply assume that their
messages make it to the other side. If things do go wrong (for example, the
connection is broken or a time-out occurs), an error is reported. However, in
general, no attempt is made to recover from the failure.
One of the problems with the first versions of HTTP was its inefficient use
of TCP connections. Each Web document is constructed from a collection of
different files from the same server. To properly display a document, it is
necessary that these files are also transferred to the client. Each of these files is,
in principle, just another document for which the client can issue a separate
request to the server where they are stored.
In HTTP version 1.0 and older, each request to a server required setting
up a separate connection, as shown in Fig. 1.41 a). When the server had
responded, the connection was broken down again. Such connections are
referred to as being non-persistent. A major drawback of nonpersistent
connections is that it is relatively costly to set up a TCP connection. As a
consequence, the time it can take to transfer an entire document with all its
elements to a client may be considerable. Note that HTTP does not preclude that
a client sets up several connections simultaneously to the same server. This
approach is often used to hide latency caused by the connection setup time, and
to transfer data in parallel from the server to the client.


Figure 1.41 (a) Using nonpersistent connections. (b) Using persistent connections.

A better approach that is followed in HTTP version 1.1 is to use a
persistent connection, which can be used to issue several requests (and their
respective responses), without the need for a separate connection per (request,
response)-pair. To further improve performance, a client can issue several
requests in a row without waiting for the response to the first request (also
referred to as pipelining). Using persistent connections is illustrated in Figure
1.41 b).
HTTP has been designed as a general-purpose client-server protocol
oriented toward the transfer of documents in both directions. A client can request
each of these operations to be carried out at the server by sending a request
message containing the operation desired to the server. A list of the most
commonly used request messages is given in Table 1.7.
Table 1.7. Operations supported by HTTP.

HTTP assumes that each document may have associated metadata,
which are stored in a separate header that is sent along with a request or
response. The head operation is submitted to the server when a client does not
want the actual document, but rather only its associated metadata. For example,
using the head operation will return the time the referred document was modified.
This operation can be used to verify the validity of the document as cached by
the client. It can also be used to check whether a document exists, without
having to actually transfer the document.
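As an illustration of a persistent connection carrying both a GET and a HEAD request, the sketch below uses Python's standard http.client module; the host and path are placeholders, and a header such as Last-Modified may or may not be present in a given response.

# One persistent HTTP/1.1 connection, two requests.
import http.client

conn = http.client.HTTPConnection("www.example.com")   # a single TCP connection

conn.request("GET", "/")                 # fetch the document itself
resp = conn.getresponse()
body = resp.read()
print(resp.status, len(body), "bytes")

conn.request("HEAD", "/")                # same connection: metadata only
resp = conn.getresponse()
resp.read()                              # a HEAD response carries no body
print(resp.status, resp.getheader("Last-Modified"))

conn.close()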


We can now give a more complete architectural view of the organization of
clients and servers in the Web. This organization is shown in Figure 1.42.
Whenever a user issues a request for a document, the Web server can generally
do one of three things, depending on what the document specifies it should do.
First, it can fetch the document directly from its local file system. Second, it can
start a CGI program that will generate a document, possibly using data from a
local database. Third, it can pass the request to a servlet.

Figure 1.42 Architectural details of a client and server in the Web.

As more and more of the Web is becoming remixable, the entire Internet
system is turning into both a platform and a database. Yet such
transformations are never smooth. For one, scalability is a big issue, and of
course legal aspects are never simple. But it is not a question of if web sites
become web services, but when and how. APIs are a more controlled, cleaner
and altogether preferred way of becoming a web service. However, when APIs
are not available or sufficient, scraping is bound to continue and expand. At the
same time, all possibilities and ideas for future Web services are open and, as
always, time will be the best judge.


1.6. Important Internet services (E-mail, FTP, BitTorrent, Skype, Youtube, social networking)
As we mentioned, we could use separate software applications to access
the Internet with each of the Internet protocols used, though we probably wouldn't
need to: many Web browsers allow users to access files using most of
the protocols. In the following, several categories of important Internet
services, which exist alongside the WWW (described in the previous
section), are presented together with examples of the types of services in each
category. Figure 1.43 shows the various services offered by the Internet. Below
we describe the important Internet multimedia services e-mail, FTP,
BitTorrent, Skype, Youtube and social networking; of course, the sea of
today's Internet multimedia services is endless, so the services presented here
are by no means exhaustive.

Figure 1.43 Important Internet services.

E-mail service
The e-mail service - interactions between email servers and clients are
governed by email protocols. The three most common email protocols are POP,
IMAP and MAPI. One example of using the Post Office Protocol 3 (POP3) service
together with the Simple Mail Transfer Protocol (SMTP) service, which sends
outgoing e-mail, is given in Figure 1.44. Most email software operates under one
of these protocols (and many products support more than one). Since the correct
protocol must be selected, and correctly configured, for an email account to
work, we need to know something about these protocols. The Post Office
Protocol (currently in version 3, hence POP3) allows email client software to
retrieve email from a remote server.


Figure 1.44 Illustration of one e-mail service communication.

The Internet Message Access Protocol (now in version 4, or IMAP4) allows
a local email client to access email messages that reside on a remote server.
The Messaging Application Programming Interface (MAPI) is a proprietary email
protocol of Microsoft, that can be used by Outlook (Microsoft's email client
software) to communicate with Microsoft Exchange (its email server software). It
provides somewhat more functionality than an IMAP protocol; unfortunately, as a
proprietary protocol, it works only for Outlook-Exchange interactions. Moreover,
at the risk of overloading you with information, you should know that strictly
speaking it's only the incoming mail that is handled by a POP or IMAP protocol.
Outgoing mail for both POP and IMAP clients uses the Simple Mail Transfer
Protocol (SMTP). When you set up a POP or IMAP email account on email client
software, you must specify the name of the (POP or IMAP) mail server (the
functional architecture of one mail server is given in Figure 1.45) computer for
incoming mail. You must also specify the name of the (SMTP) server computer
for outgoing mail. These names are typically in the same form as Web
addresses. Depending on the client, there may also be specifications for email
directories and searching.

Figure 1.45 Illustration of a functional architecture of mail server.
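A client-side sketch of the two protocols in this example, SMTP for outgoing mail and POP3 for incoming mail, is given below using Python's standard library; the server names, addresses and password are placeholders that would be replaced by the values supplied by the mail provider.

# Sending a message with SMTP and checking a mailbox with POP3.
import smtplib, poplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "alice@example.com", "bob@example.com", "Hello"
msg.set_content("Sent with SMTP.")

with smtplib.SMTP("smtp.example.com", 587) as smtp:   # outgoing (SMTP) server
    smtp.starttls()
    smtp.login("alice@example.com", "password")
    smtp.send_message(msg)

pop = poplib.POP3_SSL("pop.example.com")              # incoming (POP3) server
pop.user("bob@example.com")
pop.pass_("password")
count, size = pop.stat()                              # messages waiting and total size
print(count, "messages,", size, "bytes")
pop.quit()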


FTP service
FTP (File Transfer Protocol) - this was one of the first Internet services
developed and it allows users to move files from one computer to another. Using
the FTP program, a user can log on to a remote computer, browse through its
files, and either download or upload files (if the remote computer allows). These
can be any type of file, but the user is only allowed to see the file name; no
description of the file content is included. You might encounter the FTP protocol if
you try to download any software applications from the World Wide Web. Many
sites that offer downloadable applications use the FTP protocol. An example of
an FTP protocol window is given in Figure 1.46.

Figure 1.46 Illustration of one FTP Window.

Figure 1.47 Illustration of BitTorrent service work.

BitTorrent
BitTorrent is a network protocol that facilitates decentralized (or
distributed) file sharing over the Internet (see Figure 1.47). In this way it is similar
to the functionality provided by traditional peer-to-peer (P2P) applications like

Napster and Kazaa in the late 1990s and early 2000s. However, BitTorrent differs fundamentally from
these older P2P sharing applications because it introduces components such as
BitTorrent websites, torrents, trackers, seeders, and leeches. BitTorrent is also
unique in how it efficiently uses bandwidth to achieve high data transfer rates. If
the file you want is available from multiple hosts, BitTorrent establishes
connections with them and downloads chunks of the file simultaneously.
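The sketch below illustrates only this last idea, fetching pieces of a file from several peers in parallel and reassembling them in order. It is a conceptual Python sketch, not the BitTorrent protocol itself: the peer names and the fetch_piece() helper are hypothetical stand-ins, whereas a real client would speak the peer wire protocol, verify each piece against the hashes in the .torrent file, and discover peers via a tracker or DHT.

    from concurrent.futures import ThreadPoolExecutor

    def fetch_piece(peer, index):
        # Hypothetical stand-in: a real client would request the piece over a TCP
        # connection to the peer and check its SHA-1 hash from the torrent metadata.
        return f"piece {index} from {peer}".encode()

    peers = ["peer-a.example", "peer-b.example", "peer-c.example"]
    num_pieces = 6
    pieces = [None] * num_pieces

    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        futures = {pool.submit(fetch_piece, peers[i % len(peers)], i): i
                   for i in range(num_pieces)}
        for future, index in futures.items():
            pieces[index] = future.result()     # collect each piece as it completes

    file_data = b"".join(pieces)                # reassemble the pieces in order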
Skype service
One of the most important emerging trends in telecommunications, whose
development represents a major change in information and communication
technologies, is undoubtedly Voice over IP (VoIP): the transmission of voice over
packet-switched IP networks. VoIP has developed considerably in recent years
and is gaining widespread public recognition and adoption through consumer
solutions such as Skype and through BT's strategy of moving to an IP-based
network.
But let us begin with the basic essence of VoIP. VoIP uses the IP
protocols, originally designed for the Internet, to break voice calls up into digital
packets. For a call to take place, the separate packets travel over an IP
network and are reassembled at the far end. The breakthrough was in being able
to transmit voice calls, which are much more sensitive to time delays and other
network problems, in the same way as data. High-availability solutions for
VoIP networks address the need for users to be able to place and receive calls
under peak-load call rates or during device maintenance or failure. In addition to
lost productivity, voice-network downtime often results in lost revenue, customer
dissatisfaction, and even a weakened market position. Various situations can
take devices offline, ranging from planned downtime for maintenance to
catastrophic failure. There are two key elements that contribute to availability in a
VoIP network: capacity and redundancy. These concepts will not be explored
further here, as they are beyond the scope of this section. We will just mention
that the most frequently used signalling protocol in VoIP communications is the
Session Initiation Protocol (SIP). SIP is a signalling communications protocol, widely used for controlling
multimedia communication sessions such as VoIP and video calls over IP
networks. The protocol defines the messages that are sent between peers which
govern establishment, termination and other essential elements of a call. SIP can
be used for creating, modifying and terminating two-party (unicast) or multiparty
(multicast) sessions consisting of one or several media streams. Other SIP
applications include video conferencing, streaming multimedia distribution,
instant messaging, presence information, file transfer, fax over IP and online
games. Originally designed by Henning Schulzrinne and Mark Handley in 1996,
SIP has been developed and standardized in RFC 3261 under the auspices of
the Internet Engineering Task Force (IETF).
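To give a feel for SIP signalling, the Python sketch below hand-builds a minimal INVITE request in the textual style defined by RFC 3261 and sends it over UDP. The hosts, user names, tag, branch and Call-ID values are placeholders, and a real user agent would also carry an SDP body describing the media, process the responses (e.g. 180 Ringing, 200 OK) and send an ACK.

    import socket

    invite = (
        "INVITE sip:bob@example.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "To: Bob <sip:bob@example.com>\r\n"
        "From: Alice <sip:alice@example.org>;tag=1928301774\r\n"
        "Call-ID: a84b4c76e66710@client.example.org\r\n"
        "CSeq: 314159 INVITE\r\n"
        "Contact: <sip:alice@client.example.org>\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

    # Send the request to a (placeholder) SIP proxy over the default UDP port 5060.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(invite.encode("ascii"), ("sip.example.org", 5060))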
For example, Skype is a peer-to-peer VoIP client developed by the creators
of KaZaA that allows its users to place voice calls and send text messages to
other users of Skype clients (see Figure 1.48). In essence, it is very similar to the
MSN and Yahoo IM applications, as it has capabilities for voice calls, instant
messaging, audio conferencing, and buddy lists.


Skype is a telecommunications application that specializes in
providing video chat and voice calls from computers, tablets, and mobile devices
via the Internet to other devices or telephones/smartphones. Users can also send
instant messages, exchange files and images, send video messages, and create
conference calls. Skype is available to download onto computers running
Microsoft Windows, Mac, or Linux, as well as Android, Blackberry, iOS, and
Windows Phone smartphones and tablets. Much of the service is free, but users
require Skype Credit or a subscription to call landline or mobile numbers. Skype
is based on a freemium model.

Figure 1.48 Illustration of the Skype network architecture. There are three main entities:
supernodes, ordinary nodes, and the login server.

First released in August 2003, Skype was created by the Dane Janus Friis
and the Swede Niklas Zennström in cooperation with the Estonians Ahti Heinla,
Priit Kasesalu, and Jaan Tallinn, who developed the backend, which was also
used in the music-sharing application Kazaa. Registered users of Skype are
identified by a unique Skype Name and may be listed in the Skype directory.
Skype allows these registered users to communicate through both instant
messaging and voice chat. Voice chat allows telephone calls between pairs of
users as well as conference calling, and uses a proprietary audio codec. Skype's
text chat client allows group chats, emoticons, storing chat history, and editing of
previous messages. Offline messages were implemented in a beta of version 5
but removed after a few weeks without notification. The usual features familiar to
instant messaging users (user profiles, online status indicators, and so on) are
also included.
However, the underlying protocols and techniques it employs are quite
different.
Furthermore, the main factors that have promoted Skype (and VoIP in
general), together with the main barriers, are presented below. The main factors
promoting VoIP include:
• Low cost/no cost software (softphone and configuration tools) for PCs and
PDAs;
• Wide availability of analogue telephone adapters;
• Growing availability of broadband, wireless hot spots and other forms of
broadband access;
• Packetised voice enables much more efficient use of the network (bandwidth
is only used when something is actually being transmitted);
• The VoIP network can handle connections from many applications and many
users at the same time (unlike the dedicated circuit-switched approach);
• Relatively high cost of PSTN calls.
On the other hand, the main barriers opposing Skype and other VoIP services
include:
• High quality and reliability of the PSTN;
• Variable VoIP quality of service;
• Lack of intrinsic QoS in many IP networks around the world;
• Many challenges for wireless VoIP users;
• Limitations of some VoIP features, services and VoIP service provider
interconnection;
• Relative difficulty in setup and use;
• Problems with end-to-end integrity of the signalling and bearer paths;
• Introduction of call plans and flat-rate charges by traditional PSTN
operators.

Figure 1.49 World map of supernodes to which Skype establishes a TCP connection at
login.


Overall, Skype is a selfish application: it tries to obtain the best
available network and CPU resources for its execution. It raises its application
priority to high in Windows while a call is in progress. It evades blocking by
routing its login messages over Super Nodes (SNs for short; the supernode world
map is presented in Figure 1.49). This also implies that Skype relies on SNs,
which can misbehave, to route login messages to the login server. Skype does
not offer an option to prevent a machine from becoming an SN, although it is
possible to prevent this in practice by putting a bandwidth limiter on the Skype
application when no call is in progress. Theoretically speaking, if all Skype users
decided to put a bandwidth limiter on their application, the Skype network could
possibly collapse, since the SNs hosted by Skype may not have enough
bandwidth to relay all calls.
YouTube
YouTube is a video-sharing website headquartered in San Bruno,
California, United States. The service was created by three former PayPal
employees in February 2005. In November 2006, it was bought by Google for
US$1.65 billion. The site allows users to upload, view, and share videos, and it
makes use of WebM, H.264, and Adobe Flash Video technology to display a
wide variety of user-generated and corporate media video. Available content
includes video clips, TV clips, music videos, and other content such as video
blogging, short original videos, and educational videos. The YouTube video
download mechanism is illustrated in Figure 1.50, with an example of the possible
sequence of steps when accessing youtube.com from a PC (top) and
m.youtube.com from a smartphone (bottom).

Figure 1.50 The YouTube video download mechanisms [53].


YouTube is free, though people who want to post videos or
comments must register with the site, creating a profile. Videos, which include
tags, a category, and a brief description, can be public or restricted to
members of specified contact lists. Several tools allow viewers to sort
through videos to locate those of interest. Links allow a user to share a
movie through e-mail, add it to a list of favorites, post a text-based or
video comment about it, and read (or watch) the comments others have
posted. A user can subscribe to all of another user's postings or to content that is
tagged with particular terms. Each of these actions becomes a part of the user's
profile. When others look at a user's profile, they see his or her favorites,
comments, and posted videos. As a result, profiles are constantly
updated to reflect each user's history and tastes. YouTube also allows videos
hosted on its site to be embedded in other Web pages, such as blogs or personal
Web sites.
Most of the content on YouTube has been uploaded by individuals, but
media corporations including CBS, the BBC, Vevo, Hulu, and other organizations
offer some of their material via YouTube, as part of the YouTube partnership
program. Unregistered users can watch videos, and registered users can upload
videos to their channels. Videos considered to contain potentially offensive
content are available only to registered users affirming themselves to be at least
18 years old.
The ease of watching and sharing videos, combined with the fact
that the site is free, opens the experience of online video to a wide range of
users. YouTube offers opportunities for expression through video, a new spin on
the notion of self-publishing, making content available for anyone interested in
consuming it. The social-networking tools further engage users, drawing them in
to an environment that encourages them to meet new people, read and share
opinions, and be part of a community. The interactive features allow members of
communities to earn the respect of peers and increase their stature in the group.
YouTube draws users into the experience of viewing videos and engaging
with the content as commentators and creators, activities that heighten
students' visual literacy, an important skill in today's electronic culture. Even
if most of the content on YouTube lacks an educational goal, the application
encourages experimentation with new media.
Social networking
Social Networking Services are changing the ways in which people use
and engage with the internet and with each other. Social networking
services can be broadly defined as internet- or mobile-based social spaces
designed to facilitate communication, collaboration, and content sharing across
networks of contacts.
Young people particularly are quick to use the new technology in ways
which increasingly blur the boundaries between their online and offline activities.
Social networking services are also developing rapidly as technology changes,
with new mobile dimensions and features. Children and young people around the
world, who have grown up taking the internet and mobile technologies for
granted, make up a significant segment of the 'beta generation': the first to
exploit the positive opportunities and benefits of new and emerging services, but
also the first to have to negotiate appropriate behaviours within new communities,
and to have to identify and manage risk.
The most popular dedicated social network sites worldwide are
Facebook, MySpace, Twitter, LinkedIn, Instagram, Google+, Bebo, Vkontakte,
Odnoklassniki, etc. The most popular social networking sites by country
are given in Figure 1.51. These types of social networking services are
profile-focused: activity centres around web pages that contain information about
the activities, interests and likes (and dislikes) of each member. While the number of
visitors to social networking sites is increasing, so too are the numbers of new
services being launched, along with the number of longstanding (within the
relatively brief lifespan of the internet) websites that are adding, developing or
refining social network service features or tools. The ways in which we connect
to social networking services are expanding too. Games-based and mobile
phone-based social networking services that interact with existing web-based
platforms, or with new mobile-focused communities, are rapidly developing
areas.

Figure 1.51 Illustration of the most popular social networking sites by country.

However, it is important to remember that services differ and may be
characterised by more than one category, such as profile-based SNS,
content-based SNS, white-label SNS, multi-user virtual environments (for Internet
games), mobile SNS, micro-blogging/presence updates, social search, etc.
Users are also quite happy to tailor the intended use of platforms to suit their own
interests; for instance, sites that are primarily profile-focused may be used by
individuals to showcase media collections or as workspaces for particular
topics or events. Educators setting up private groups in order to make use of
collaborative space and tools are a great example of how social networking
services can be tailored to users' own ends.


1.7. Internet regulation and network neutrality

The consumer Internet is changing rapidly, from a narrowband to a
broadband network, as the number of citizens online exceeds a billion. Users
expect constant connectivity via fixed networks, and increasingly on mobile
networks with smartphones such as the Apple iPhone. The use of online social
networking has exploded in the past five years, with more than 800 million users
of Facebook and even more registered accounts using VoIP software supplied by
Skype.
Against this background, network neutrality has received sustained attention from
both policymakers and academic commentators for the past several years, and it
shows no signs of retreating from the forefront of the policy debate. Network
neutrality (also net neutrality, Internet neutrality, or net equality) is the principle
that Internet service providers and governments should treat all data on the
Internet equally, not discriminating or charging differentially by user, content,
site, platform, application, type of attached equipment, or mode of
communication. The term was coined by Columbia University media law
professor Tim Wu in 2003, as an extension of the longstanding concept of a
common carrier. Moreover, this principle is the central reason for the success of
the Internet. Net Neutrality is crucial for innovation, competition and for the free
flow of information. Most importantly, Net Neutrality gives the Internet its ability to
generate new means of exercising civil rights such as the freedom of expression
and the right to receive and impart information. Figure 1.52 illustrates the open
neutral access model versus the non-neutral access model.

Figure 1.52 Illustration of the a) Open neutral access model; b) Non-neutral access
model [62].


There are many reasons why Net Neutrality is not respected; among the
most frequent are the following:
Access providers violate Net Neutrality to optimise profits: Some
Internet access providers demand the right to block or slow down Internet traffic
for their own commercial benefit. Internet access providers are not only in control
of Internet connections, they also increasingly start to provide content, services
and applications. They are increasingly looking for the power to become the
gatekeepers of the Internet.
Access providers violate Net Neutrality to comply with the law:
Governments are increasingly asking access and service providers to restrict
certain types of traffic, to filter and monitor the Internet to enforce the law. A
decade ago, only four countries worldwide were filtering and censoring the
Internet; today, there are over forty. In Europe, website blocking has been
introduced for instance in Belgium, France, Italy, the UK and Ireland. This is done
for reasons as varied as protecting national gambling monopolies and
implementing demonstrably ineffective efforts to protect copyright. Some
politicians call for Net Neutrality and demand filtering or blocking for law
enforcement purposes at the same time. However, it is a paradox to create legal
incentives for operators to invest in monitoring and filtering or blocking
technology, while at the same time demanding that they do not use this
technology for their own business purposes.
Access providers violate Net Neutrality for privatised censorship: In
the UK, blocking measures by access providers have frequently been misused to
block unwanted content.
Despite these failures to respect Net Neutrality, we give below
ten reasons for network neutrality:
1) No discrimination: Net Neutrality is the principle that all types of
content and all senders and recipients of information are treated
equally.
2) Free Expression: The history of the Internet shows very clearly that
Net Neutrality encourages creative expression. The ability to publish
content and to express opinions online does not depend on financial or
social status and is not restricted to an elite. There is a huge trend
towards people sharing information and experiences online,
sometimes referred to as web 2.0.
3) Privacy: Measures to undermine Net Neutrality can have a direct
impact on our privacy. In a non-neutral Internet, providers would be
able to monitor our communications in order to differentiate between
messaging, streaming, P2P, e-mails and so on.
4) Access to Information: Net Neutrality is also the catalyst for the
creation of diverse and abundant online content. Non-profit projects
like Wikipedia, blogs and user-generated content in general have the
same conditions to access and publish information as large,
commercial Internet players. Without Net Neutrality, we would have a
two-tier Internet where only those who can pay would be able to
access information or get content delivered faster than other users.

5) Democratic Process: Net Neutrality improves the quality of
democracy by ensuring that the Internet remains an open forum in
which all voices are treated equally. It ensures that the ability to voice
opinions and place content online does not depend on one's financial
capacity or social status. It is therefore a powerful tool in facilitating
democracy, enabling diverse ideas to be expressed and heard.
6) Tool against censorship: Without Net Neutrality, network operators
can block or throttle not only services, but also content.
7) Consumer choice: Net Neutrality ensures access to content and
offers greater consumer choice by allowing more players to enter the
marketplace.
8) Innovation and competition: Net Neutrality continues to foster
innovation, as individuals and companies alike can create content and
provide new services with the online world as their audience. Any
individual can upload content at relatively little cost.
9) Digital Single Market: Net Neutrality is a cornerstone for the
completion of the Digital Single Market. It removes barriers and allows
users to freely communicate, fully express themselves, access
information and participate in the public debate without unnecessary
interference by gatekeepers or middlemen.
10) Protecting a global Internet: As soon as access providers start
making use of traffic discrimination tools to interfere in global
communications for their own commercial benefit, governments will be
tempted to use the technology for public policy goals; in fact, Western
governments are more and more often asking providers to restrict
certain types of traffic, and to filter and monitor the Internet to enforce
the law. In other parts of the world this has led to national Internets,
such as the Chinternet in China and the halal Internet in Iran. The
principle of Net Neutrality will help protect the global Internet.
On the other side, given the virtual nature of its existence, the first
important legal discussion about the Internet focused on its natural resistance to
regulation. Despite this supposed resistance, national laws have been erected
throughout the world with the aim and effect of subjecting the Internet to real
regulation. Considering the global character of the Internet, however,
International Law could be a more suitable tool for regulation in some of the
various Internet-related issues. Moreover, there have been many initiatives to
deal specifically with the existence of illegal and harmful content over the Internet
and these include an emphasis on self-regulation by the Internet industry with the
creation of Internet hotlines for reporting illegal Internet content to assist law
enforcement agencies and the development of filtering and rating systems to
deal with children's access to content which may be deemed harmful. These
two issues are different in nature and should be addressed separately as what
may not be appropriate for children may certainly be legal and therefore
accessible by willing adults.
Broadly speaking, Internet regulation today can be conceived of as
involving three related spheres: Direct regulation of the internet infrastructure
itself; regulation of activities that can be conducted only over the internet; and
regulation of activities which can be, but need not be, conducted over the
Internet.
• The first sphere: Direct regulation of the internet infrastructure itself,
including:
a. the standards of communication,
b. the equipment used to provide and access Internet communication,
c. intermediaries engaged in the provision of Internet communications,
e.g. Internet Service Providers (ISPs).
• The second sphere: Regulation of activities that can be conducted
only over the internet and which have no significant off-line
analogues. An example is the regulation of anonymous online
communication via anonymizing re-mailers.
• The third sphere: Finally, there is the regulation of the enormous
category of activities which may or may not be conducted over the
internet, e.g. e-commerce in both tangible and intangible goods. In
many cases the Internet version of an activity often will simply be
swept up in the general regulation of the type of conduct.
(a) In some cases, however, the Internet version may be subject to
special or additional regulation because the use of the Internet is seen as
somehow aggravating an underlying problem or offense. An example of this is
US attempts to regulate the provision of obscene or "indecent" content to minors
via the Internet.
(b) In other cases, there may be attempts to craft special regulations
for the Internet version of an activity because of fears that its international
character (and concomitant regulatory arbitrage), the ease of anonymization, or
the elimination of formerly prohibitive transactions costs changes the danger,
incidence, or character of the activity -- or, most commonly, makes the
enforcement of the pre-existing rules difficult or impossible. Examples of this
include attempts to regulate peer-to-peer sharing of material copyrighted by
others and regulation (or in some cases discouragement) of e-cash.
These spheres of regulation are obviously related in many ways. What
matters most for current purposes, however, is that this schema underlines why
approaches to the first sphere of regulation, direct regulation of the infrastructure,
have two sometimes radically different sets of motives even though the
regulatory techniques and tools often may overlap or even interfere with one
another.
On the one hand, some regulatory (or de-regulatory) strategies pursue
goals that are primarily internal to the first sphere. For example, as described
below, the current Internet architecture depends on the unique assignment of
Internet Protocol numbers; the regulation of the mechanisms that control
assignment of these potentially valuable resources -- and which determine when
and how the underlying standards might be modified -- is a matter of critical
importance to the Internet, one that is (currently) internal to the first sphere.

Similarly, the regulation of the creation of new Top-Level Domains (TLDs)
and the regulation of the assignment of Second-Level Domains (SLDs) are in the
first instance an issue in the first sphere, albeit one influenced by external rules
such as trademark law.
More generally, a number of independent, private, non-profit, standards
bodies define the technical standards for various parts of the Internet. These
groups include the Internet Engineering Task Force (IETF), an unincorporated
international volunteer organization of software and network engineers, and the
W3C, a consortium of corporations and interested individuals who concentrate on
HTML and WWW-oriented standards. These bodies do not, however, tend to
venture beyond classic standard-setting activities.
In contrast, other bodies, notably governments and industry pressure
groups, seek to facilitate and deploy regulatory strategies that regulate the
Internet infrastructure. Their goal is to leverage control over that infrastructure to
achieve social goals external to the infrastructure itself. An example of this is
the call to expand the information that domain name registrants must publish in the
WHOIS database in order, for example, to allow copyright owners to know to
what address they should address their writs in the event that they believe that
their rights are being infringed online.
The contrast between what I have labelled the internal and external
motivations not only influences the type of rule likely to be advanced, but more
importantly has institutional implications. Of these, the most critical is the type of
regulatory body likely to be seen as a legitimate source of the rule in question.
Questions about the mis-match between legitimacy and effectiveness lie at the
heart of both current and future debates about the regulation of the Internet
infrastructure. Many bodies - governments - with legitimacy to make rules in the
second and third spheres lack, or believe they lack, the ability to regulate the
infrastructure effectively; the most apparently effective bodies extant today, the
Internet Corporation for Assigned Names and Numbers (ICANN) and its
seemingly subsidiary body, the Internet Assigned Numbers Authority (IANA) face
substantial questions about their legitimacy, especially when they venture out of
the first sphere. There is more acceptance of the legitimacy of established
technical standard setting bodies such as the IETF and the W3C but this is in
large part because they tend to restrict their activities firmly to the first sphere,
and also because there is greater respect for the quality of their decisions (or,
perhaps, less general knowledge of them).
In contrast, ICANN already acts like a market regulator, and faces
pressures to expand its remit further into realms ordinarily occupied by
governments. Simultaneously, governments are taking an increasingly direct
role in this supposedly private body's decision-making via the "Government
Advisory Committee" (GAC), but are doing so in a manner notably lacking in
transparency. Dissatisfaction with ICANN's, and the US government's, role as the
most powerful and only truly global regulator in the first sphere has led to many
calls for a new system of regulation. One approach has been to try to reform
ICANN, although it is unlikely that the most recent set of 'reforms' successfully
addresses the legitimacy problem. Another approach has been to find alternate
institutions that might take on the jobs ICANN handles, and perhaps other, more
global, Internet regulation tasks as well. One self-nominated candidate is the ITU, which is
currently sponsoring the World Summit on the Information Society. A third
approach uses the traditional apparatus of bilateral and multilateral treaties to
address particular issues arising from the Internet that are thought to require
trans-national regulation.
One fact remains: the Internet's evolution is dynamic and complex. The
availability and design of a suitable regulatory response must reflect this
dynamism, and also the responsiveness of regulators and market players to each
other. Therefore, national legislation should be future proof and avoid being
overly prescriptive, to avoid a premature response to the emerging environment.
The European legal basis for regulatory intervention, in Directives 2009/136/EC
and 2009/140/EC, is an enabling framework to prevent competition abuses and
prevent discrimination, under which national regulators need the skills and
evidence base to investigate unjustified discrimination. Regulators expecting a
smoking gun to present itself should be advised against such a reactive
approach. A more proactive approach to monitoring and researching non-neutral
behaviours will make network operators much more cognisant of their duties and
obligations. The pace of change in the relation between architecture and content
on the Internet requires continuous improvement in regulators' research
technological training. This is in part a reflection of the complexity of the issue
set, including security and Internet peering issues, as well as more traditional
telecoms and content issues.
Regulators can monitor both commercial transactions and traffic shaping
by ISPs to detect potentially abusive discrimination. No matter what theoretical
powers may exist, their usage in practice and the issue of forensic gathering of
evidence may ultimately be more important. An ex ante requirement to
demonstrate internal network metrics to content provider customers and
consumers may be a practical solution. Should packet discrimination be
introduced, the types of harmful discrimination that can result may be
undetectable by consumers and regulators. Blocking is relatively easy to spot,
but throttling or choking bandwidth may be more difficult. A solution may be to
require network operators to provide their Service Level Agreements both to
content providers and more transparently to the end-user via a regulatory or
co-regulatory reporting requirement. Strong arguments remain for ensuring that
ISPs inform consumers when they reach a monthly download limit, ensuring no
return to the rationed per-minute or per-byte Internet use that Europe
experienced in the 1990s with dial-up. As the law and practice stand today, it
seems that most customers do not know when they have been targeted as
over-strenuous users of the Internet, only that their connection has slowed. Once
targeted, customers generally cannot prove their innocence; they have to
accept the Terms of Use of the ISP without appeal (except theoretically via
courts for breach of contract, or regulator for infringement of their consumer
rights). The number of alternative ISPs is shrinking: not only is the ISP business
expensive, leading to concentration in the industry, but the cost of renting
backhaul from dominant operators is sufficiently high that no ISP would want to
offer service to a suspected bandwidth hog. We may expect to see more protest
behaviour by netizens who do not agree with these policies, especially where
ISPs are seen to have failed to inform end-users fully about the implications of
policy changes. Regulators and politicians are challenged publicly by such
problems, particularly given the ubiquity of email, Facebook, Twitter and social
media protests against censorship, and there are two Pirate Party MEPs elected
to the European Parliament for 2009-14 (the Pirate Party is originally a Swedish
political group dedicated to open and interchangeable digital information, notably
a reduction in copyright enforcement). Regulators will need to ensure that the
network operators report more fully and publicly the levels of connectivity that
they provide between themselves as well as to end-users. Internet architecture
experts have explained that discrimination is most likely to occur at this level as it
is close to undetectable by those not in the two networks concerned in the
handover of content. A reporting requirement will need to be imposed if voluntary
agreement is not possible. As this information is routinely collected by the
network operators for internal purposes, it should not impose a substantial
burden. Regulators should be wary of imposing costs on ISPs that are
disproportionate. Very high entry barriers from co-regulation and self-regulation can curb
market entry. Onerous regulation (including self-regulation) leads towards closed
and concentrated structures, for three reasons [56]:
1. larger companies are better able to bear compliance costs;
2. larger companies have the lobbying power to seek to influence
regulation;
3. dominant and entrenched market actors in regulated bottlenecks play
games with regulators in order to increase the sunk costs of market entry for
other actors, and can pass through costs to consumers and innovators in
non-competitive markets.
Therefore any solution needs to take note of the potential for larger
companies to game a co-regulatory scheme and create additional compliance
costs for smaller companies (whether content or network operators, and the
combination of sectors makes this a particularly complex regulatory game). The
need for greater research towards understanding the nature of congestion
problems on the Internet and their effect on content and innovation is clear.
Finally, to summarise this section: there are incentives for network
providers to police the traffic by type, if not by content. It enables the network
providers, many of whom also operate their own proprietary applications, to
charge a different price to non-affiliated content owners than to affiliated owners.
This differential pricing could make the profitable operation of non-affiliated
providers more difficult. On that basis, a 'walled garden' of ISP services and
those of its preferred content partners might become the more successful
business model. That model makes regulation much easier to enforce, but also
prevents some of the interoperability and open access for users that is held to
lead to much Web 2.0 innovation for businesses. The answer must be
contingent on political, market and technical developments. The issue of
uncontrolled Internet flows versus engineered solutions is central to the question
of a free versus regulated Internet.


Abbreviations
3G Third Generation
4G Fourth Generation
5G Fifth Generation
AAA Authentication, Authorization, Accounting
AP Access Point
APDU Application Protocol Data Unit
API Application Programming Interface
ARM Advanced RISC Machine
ATM Asynchronous Transfer Mode
BTS Base Transceiver Station
CaaS Communications as a Service
CC Cloud Computing
CDN Content Delivery Network
CPU Central Processing Unit
CRM Customer Relationship Management
CSC Cloud Service Customer
CSN Cloud Service Partner
CSP Cloud Service Provider
CSU Cloud Service User
DaaS Desktop as a Service
DFS Distributed File System
DHT Distributed Hash Table
DNS Domain Name System
EC2 Elastic Compute Cloud
ET Emergency Telecommunications
ETS Emergency Telecommunications Service
FI Functional Interface
GPS Global Positioning System
HA Home Agent
I/O Input/Output
IA Integrated Authenticated
IaaS Infrastructure as a Service
IAM Identity and Access Management
IANA Internet Assigned Numbers Authority
ICANN Internet Corporation for Assigned Names and Numbers
ICT Information and Communication Technology
ID Identifier
IMERA French acronym for Mobile Interaction in Augmented Reality Environment
IP Internet Protocol
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
IRNA Intelligent Radio Network Access
iSCSI Internet Small Computer System Interface
ISP Internet service provider


IT Information Technology
JME Java ME, a Java platform
LAN Local Area Network
LBS Location Base Service
LTE Long Term Evolution
LTS Location Trusted Server
MAUI Memory Arithmetic Unit and Interface
MC Mobile Computing
MCC Mobile Cloud Computing
MDP Markov Decision Process
MPLS Multi-Protocol Label Switching
MSC Mobile Service Cloud
NaaS Network as a Service
NAS Network Attached Storage
NFS Network File System
NTP Network Time Protocol
OS Operating System
P2P Peer-to-Peer
PaaS Platform as a Service
PHP Hypertext Preprocessor
PII Personally Identifiable Information
PKI Public Key Infrastructure
QoE Quality of Experience
QoS Quality of Service
REST Representational State Transfer
RFS Random File System
S3 Simple Storage Service
SaaS Software as a Service
SAN Storage Area Network
SES Software Enabled Services
SIM Subscriber Identity Module
SLA Service Level Agreement
SMI Service Management Interface
TCC Trusted Crypto Coprocessor
URI Uniform Resource Identifier
vCPU virtual CPU
VI Virtual Infrastructure
VLAN Virtual Local Area Network
VM Virtual Machine
VoIP Voice over IP
VPN Virtual Private Network
WAN Wide Area Network
WLAN Wireless Local Area Network
WiFi Wireless Fidelity


References
[1] Toni Janevski, "NGN Architectures, Protocols and Services", John Wiley & Sons, UK,
April 2014.
[2] IEEE Communications Magazine, pp. 24-62, July 2011.
[3] Internet architecture (2000): http://www.livinginternet.com/i/iw_arch.htm
[4] RFC 1958; B. Carpenter, et. al.; Architectural Principles of the Internet; Jun 1996,
link: http://www.rfc-editor.org/rfc/rfc1958.txt
[5] Barath Raghavan, Teemu Koponen, Ali Ghodsi, Martín Casado, Sylvia Ratnasamy,
and Scott Shenker, Software-Defined Internet Architecture: Decoupling Architecture
from Infrastructure, HotNets '12, Seattle, WA, USA, October 29-30, 2012.
[6] RFC 3426; S. Floyd; General Architectural and Policy Considerations; Nov 2002, link:
http://www.rfc-editor.org/rfc/rfc3426.txt
[7] RFC 3439; R. Bush, D. Meyer; Some Internet Architectural Guidelines and
Philosophy; Dec 2002, link: http://www.rfc-editor.org/rfc/rfc3439.txt
[8] RFC 3819; P. Karn, Ed.; Advice for Internet Subnetwork Designers; July 2004, link:
http://www.rfc-editor.org/rfc/rfc3819.txt
[9] ITU-T Rec. Y.1001 (11/2000): IP framework - A framework for convergence of
telecommunications network and IP network technologies.
[10] ITU-T Rec. Y.3001 (05/11): Future networks: Objectives and design goals.
[11] TCP/IP tutorial and technical overview, chapter 5 : Transport layer protocols, link:
http://www.cs.virginia.edu/~cs458/material/Redbook-ibm-tcpip-Chp5.pdf, last accessed:
05.05.2015
[12] Microsoft Developer Network: Internet Protocol version 4 Address Classes,
http://msdn.microsoft.com/en-us/library/aa918342.aspx, last accessed: 08.05.2015
[13] Americas Headquarters, Cisco Systems, Inc., IP Addressing: IPv4 Addressing
Configuration Guide, Cisco IOS XE Release 3S,
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_ipv4/configuration/xe-3s/ipv4xe-3s-book.pdf, last accessed: 09.05.2015.
[14] ITU-T Rec. Y.2051 (02/2008): General overview of IPv6-based NGN.
[15] ITU-T Rec. Y.2053 (02/2008): Functional requirements for IPv6 migration in NGN.
[16] Requirements for Internet Hosts -- Communication Layers:
http://tools.ietf.org/html/rfc1122
[17] Internet Protocol, Version 6 (IPv6) Specification: https://tools.ietf.org/html/rfc2460
[18] A TCP/IP Tutorial: https://tools.ietf.org/html/rfc1180
[19] User Datagram Protocol (UDP): https://www.ietf.org/rfc/rfc768.txt
[20] The Lightweight User Datagram Protocol (UDP-Lite):
https://tools.ietf.org/html/rfc3828

[21] Transmission Control Protocol, DARPA Internet Program, Protocol Specification,
September 1981: https://www.ietf.org/rfc/rfc793.txt
[22] J. W. Stewart, III. BGP4: Inter-Domain Routing in the Internet. Addison-Wesley
Longman Publishing Co., Inc., 1998.
[23] Internet Routing and Traffic Engineering,
http://www.awsarchitectureblog.com/2014/12/internet-routing.html
[24] W. Sun, Z. Mao, and K. Shin. Differentiated BGP update processing for improved
routing convergence. In Proc. of ICNP, pages 280-289, 2006.
[25] J. T. Moy. OSPF version 2, 1991. IETF RFC 1247.
[26] E. Rosen, A. Viswanathan, and R. Callon. Multiprotocol label switching architecture,
2001. IETF RFC 3031.
[27] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin. Resource reservation
protocol (RSVP), 1997. IETF RFC 2205.
[28] https://www.cs.princeton.edu/~jrex/thesis/martin-suchara-thesis.pdf
[29] http://web.mit.edu/6.173/www/currentsemester/readings/R07-interconnection-networks-hennessy-patterson.pdf
[30] http://web.mit.edu/dimitrib/www/Routing_Data_Nets.pdf
[31] A white paper by Incognito Software, Understanding DNS (the Domain Name
System), January, 2007. http://www.incognito.com/wp-content/uploads/understandingdns.pdf
[32] David Conrad, A Quick Introduction to the Domain Name System, ITU ENUM
Workshop, Jan 17, 2000, https://archive.icann.org/en/meetings/saopaulo/presentationdns-conrad-07dec06.pdf
[33] DNS Configuration options for Dynamic Host Configuration Protocol for Ipv6
(DHCPv6); https://tools.ietf.org/html/rfc3646
[34] John Jason Brzozowski, DHCPv6, NANOG46, June 2009,
https://www.nanog.org/meetings/nanog46/presentations/Tuesday/Brzozowski_introDHCP_N46.pdf
[35] http://meetings.ripe.net/ripe-53/presentations/dhcpv6.pdf
[36] Eric Kollmann, Chatter on the Wire: A look at DHCPv6 traffic, November 2010.
http://chatteronthewire.org/download/chatter-dhcpv6.pdf
[37] Dynamic Host Configuration Protocol (DHCPv6) Options for Session Initiation
Protocol (SIP) Servers; https://tools.ietf.org/html/rfc3319
[38] IPv6 Prefix Options for Dynamic Host Configuration Protocol (DHCP) version 6;
https://www.ietf.org/rfc/rfc3633.txt
[39] DHCPv6 Leasequery; https://tools.ietf.org/html/rfc5007
[40] Node-specific Client Identifiers for Dynamic Host Configuration Protocol Version
Four (DHCPv4); https://www.ietf.org/rfc/rfc4361.txt
[41] Dynamic Host Configuration Protocol; https://tools.ietf.org/html/rfc2131
[42] Raj Jain, Chapter 32: Initialization (BOOTP and DHCP);
http://www.cse.wustl.edu/~jain/cis678-97/ftp/f32_dhc.pdf
[43] DHCP by learning centre of vicomsoft; http://www.vicomsoft.com/learningcenter/dhcp/
[44] World-Wide Web, Tim Berners-Lee, Robert Cailliau, C.E.R.N.
http://www.freehep.org/chep92www.pdf
[45] T.J. Berners-Lee, R. Cailliau, J-F Groff, B. Pollermann, CERN, "World-Wide Web:
The Information Universe", published in Electronic Networking: Research, Applications
and Policy, Vol. 2 No 1, Spring 1992, Meckler Publishing, Westport, CT, USA.
[46] T.J. Berners-Lee, R. Cailliau, J-F Groff, B. Pollermann, CERN, "World-Wide Web:
An Information Infrastructure for High-Energy Physics", Presented at "Artificial
Intelligence and Software Engineering for High Energy Physics" in La Londe, France,
January 1992. Proceedings published by World Scientific, Singapore, ed. D Perret-Gallix
[47] Distributed Document-Based Systems, Chap. 11
http://www.cs.vu.nl/~ast/books/ds1/11.pdf
[48] Salman A. Baset and Henning G. Schulzrinne, "An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol", link:
http://www1.cs.columbia.edu/~salman/publications/skype1_4.pdf
[49] Skype. http://www.skype.com
[50] Kazaa. http://www.kazaa.com
[51] SkypeOut. http://www.skype.com/products/skypeout/
[52] SkypeIn. http://www.skype.com/products/skypein/
[53] Alessandro Finamore et al., "YouTube Everywhere: Impact of Device and
Infrastructure Synergies on User Experience", IMC '11, November 2-4, 2011, Berlin,
Germany. Link: http://conferences.sigcomm.org/imc/2011/docs/p345.pdf
[54] https://net.educause.edu/ir/library/pdf/ELI7018.pdf
[55] http://www.digizen.org/downloads/social-networking-overview.pdf
[56] Christopher T. Marsden, "Network Neutrality and Internet Service Provider Liability
Regulation: Are the Wise Monkeys of Cyberspace Becoming Stupid?" Global Policy
Volume 2, Issue 1, January 2011.
[57] Christopher S.Yoo, "Network Neutrality or Internet Innovation?"
http://object.cato.org/sites/cato.org/files/serials/files/regulation/2010/2/regv33n1-6.pdf
[58] Kathleen Ann Ruane, Legislative Attorney, Net Neutrality: The FCC's Authority to
Regulate Broadband Internet Traffic Management, March 26, 2014,
https://www.fas.org/sgp/crs/misc/R40234.pdf
[59] Antonio Segura-Serrano, "Internet Regulation and the Role of International Law",
Max Planck Yearbook of United Nations Law, Volume 10, 2006, p. 191-272. link:
http://www.mpil.de/files/pdf3/06_antoniov1.pdf
[60] A. Michael Froomkin, International and National Regulation of the Internet, link:
http://law.tm/docs/International-regulation.pdf


[61] http://www.cyber-rights.org/documents/clsr17_5_01.pdf
[62] The EDRi papers, Net Neutrality, link:
https://edri.org/files/paper08_netneutrality.pdf
[63] Cheng, H. Kenneth, Bandyopadhyay, Subhajyoti and Guo, Hong, "The Debate on
Net Neutrality: A Policy Perspective", 25 Jun 2008. Information Systems Research,
Forthcoming. Available at: http://net.educause.edu/ir/library/pdf/CSD4854.pdf
[64] Hahn, Robert W. and Wallsten, Scott, "The Economics of Net Neutrality", The
Economists' Voice: Vol. 3, Iss. 6, Article 8. The Berkeley Electronic Press, 2006.
20 Nov. 2011. http://www.bepress.com/ev/vol3/iss6/art8/

