
MODULE III

PART I

Network layer: Virtual Circuits - Datagrams - Routing Algorithms - The Optimality Principle - Flooding - Flow-Based Routing - Link State Routing - Distance Vector Routing - Multicasting - Link State Multicasting - Distance Vector Multicasting

Network layer
The network layer is concerned with getting packets from the source all the way
to the destination. Getting to the destination may require making many hops at
intermediate routers along the way. This function clearly contrasts with that of the data
link layer, which has the more modest goal of just moving frames from one end of a
wire to the other. Thus, the network layer is the lowest layer that deals with end-to-end
transmission.
Network Layer Design Issues

The key design issues of the network layer are the following:
Store-and-Forward Packet Switching
Services Provided to the Transport Layer
Implementation of Connectionless Service
Implementation of Connection-Oriented Service

Store-and-Forward Packet Switching

Figure 1 shows the environment in which the network layer operates. The major components of the system are the carrier's equipment (routers connected by transmission lines), shown inside the shaded oval, and the customers' equipment, shown outside the oval. Host H1 is directly connected to one of the carrier's routers, A, by a leased line. In contrast, H2 is on a LAN with a router, F, owned and operated by the customer. This router also has a leased line to the carrier's equipment. We have shown F as being outside the oval because it does not belong to the carrier, but in terms of construction, software, and protocols, it is probably no different from the carrier's routers. Routers on customer premises are considered part of the subnet because they run the same algorithms as the carrier's routers.
A host with a packet to send transmits it to the nearest router, where the packet is stored until it has fully arrived and been verified; it is then forwarded to the next router along the path until it reaches the destination host. This mechanism is store-and-forward packet switching.

Figure 1. The environment of the network layer protocols.

Services Provided to the Transport Layer

The network layer services have been designed with the following goals in mind.
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of
the routers present.
3. The network addresses made available to the transport layer should use a
uniform numbering plan, even across LANs and WANs.
The network layer provides two different types of service:

Connection-oriented
Connectionless

Correspondingly, the subnet can be organized in two different ways:

Virtual circuits: used in subnets whose primary service is connection-oriented.


Datagrams correspond to the independent packets of the connectionless
organization.

Datagrams (Implementation of Connectionless Service)

If connectionless service is offered, packets are injected into the subnet


individually and routed independently of each other. No advance setup is needed. In this
context, the packets are frequently called datagrams (in analogy with telegrams) and
the subnet is called a datagram subnet.
If connection-oriented service is used, a path from the source router to the
destination router must be established before any data packets can be sent. This
connection is called a VC (virtual circuit), in analogy with the physical circuits set up
by the telephone system, and the subnet is called a virtual-circuit subnet. In this
section we will examine datagram subnets; in the next one we will examine virtual-
circuit subnets.
Suppose that the process P1 in Fig. 2 has a long message for P2. It hands the
message to the transport layer with instructions to deliver it to process P2 on host H2.

Figure 2. Routing within a datagram subnet.

Let us assume that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to router A using some point-to-point protocol, for example, PPP. At this point the carrier takes over. Every router has an internal table telling it where to send packets for each possible destination. Each table entry is a pair consisting of a destination and the outgoing line to use for that destination. Only directly-connected lines can be used. For example, in Fig. 2, A has only two outgoing lines (to B and C), so every incoming packet must be sent to one of these routers, even if the ultimate destination is some other router. A's initial routing table is shown in the figure under the label ''initially.''
As they arrived at A, packets 1, 2, and 3 were stored briefly (to verify their checksums). Then each was forwarded to C according to A's table. Packet 1 was then forwarded to E and then to F. When it got to F, it was encapsulated in a data link layer frame and sent to H2 over the LAN. Packets 2 and 3 followed the same route.
However, something different happened to packet 4. When it got to A it was sent
to router B, even though it is also destined for F. For some reason, A decided to send
packet 4 via a different route than that of the first three. Perhaps it learned of a traffic
jam somewhere along the ACE path and updated its routing table, as shown under the
label ''later.'' The algorithm that manages the tables and makes the routing decisions is
called the routing algorithm.
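To make the forwarding step concrete, here is a minimal sketch (in Python, not taken from any particular router implementation) of a datagram forwarding table as a destination-to-outgoing-line lookup. The entries for destinations other than F are assumptions consistent with Fig. 2.

# Minimal sketch of datagram forwarding: each router keeps a table mapping
# every possible destination to the outgoing line (a directly connected
# neighbor) to use. Entries other than F are assumed, consistent with Fig. 2:
# initially A sends traffic for F via C, later via B.

initial_table_A = {"A": "-", "B": "B", "C": "C", "D": "C", "E": "C", "F": "C"}
later_table_A = {"A": "-", "B": "B", "C": "C", "D": "C", "E": "C", "F": "B"}

def forward(table, destination):
    """Return the outgoing line for a packet; looked up independently per packet."""
    return table[destination]

# Packets 1-3 for F leave A on the line to C; packet 4 leaves on the line to B
# after the routing algorithm has updated the table.
print(forward(initial_table_A, "F"))   # -> C
print(forward(later_table_A, "F"))     # -> B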

Virtual Circuits (Implementation of Connection-Oriented Service)

For connection-oriented service, we need a virtual-circuit subnet. Let us see how


that works. When a connection is established, a route from the source machine to the
destination machine is chosen as part of the connection setup and stored in tables inside
the routers. That route is used for all traffic flowing over the connection, exactly the
same way that the telephone system works. When the connection is released, the virtual
circuit is also terminated. With connection-oriented service, each packet carries an
identifier telling which virtual circuit it belongs to.

As an example, consider the situation of Fig. 3. Here, host H1 has established
connection 1 with host H2. It is remembered as the first entry in each of the routing
tables. The first line of A's table says that if a packet bearing connection identifier 1
comes in from H1, it is to be sent to router C and given connection identifier 1.
Similarly, the first entry at C routes the packet to E, also with connection identifier 1.

Figure 3. Routing within a virtual-circuit subnet.

Now let us consider what happens if H3 also wants to establish a connection to


H2. It chooses connection identifier 1 (because it is initiating the connection and this is
its only connection) and tells the subnet to establish the virtual circuit. This leads to the
second row in the tables. Note that we have a conflict here because although A can
easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C
cannot do this. For this reason, A assigns a different connection identifier to the
outgoing traffic for the second connection. Avoiding conflicts of this kind is why routers
need the ability to replace connection identifiers in outgoing packets. In some contexts,
this is called label switching.
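A minimal sketch of the virtual-circuit tables just described: each router maps (incoming line, incoming VC identifier) to (outgoing line, outgoing VC identifier) and rewrites the identifier on the way out. The entries follow router A in Fig. 3; the outgoing identifier 2 for H3's circuit is an assumption (any value other than 1 would do).

# Minimal sketch of label switching in a virtual-circuit subnet. Each router
# maps (incoming line, incoming VC id) to (outgoing line, outgoing VC id),
# rewriting the identifier so that circuits from different sources do not
# collide downstream. The entries follow router A in Fig. 3.

vc_table_A = {
    ("H1", 1): ("C", 1),   # H1's connection 1 to H2
    ("H3", 1): ("C", 2),   # H3 also chose identifier 1, so A relabels it to 2
}

def switch(table, in_line, in_vc):
    """Look up the outgoing line and rewrite the connection identifier."""
    out_line, out_vc = table[(in_line, in_vc)]
    return out_line, out_vc

print(switch(vc_table_A, "H1", 1))   # -> ('C', 1)
print(switch(vc_table_A, "H3", 1))   # -> ('C', 2)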

Comparison of Virtual-Circuit and Datagram Subnets

Routing Algorithms
Routing is the act of moving information across an inter-network from a source to a
destination. Along the way, at least one intermediate node typically is encountered. It is also referred to as the process of choosing a path over which to send the packets.
Routing is often contrasted with bridging, which might seem to accomplish precisely
the same thing to the casual observer.

Routing protocols use metrics to evaluate what path will be the best for a packet to travel. A metric is a standard of measurement, such as path bandwidth, reliability, delay, or current load on the path, that is used by routing algorithms to determine the optimal path to a destination. To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used.

Routing algorithms fill routing tables with a variety of information, mainly destination/next-hop associations. These tell a router that a particular destination can be reached optimally by sending the packet to a particular node representing the "next hop" on the way to the final destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop. Some routing algorithms allow a router to have multiple next hops for a single destination, each best with regard to a different metric. For example, router R2 may be the best next hop for destination D if path length is considered as the metric, while router R3 is the best for the same destination if delay is the metric used for making the routing decision.

Figure: Typical routing in a small network

The figure shows a small part of a network where a packet destined for node D arrives at router R1 and, based on the path metric (i.e., the shortest path to the destination), is forwarded to router R2, which forwards it to the final destination. Routing tables can also contain other information, such as data about the desirability of a path. Routers compare metrics to determine optimal routes, and these metrics differ depending on the design of the routing algorithm used. Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages. The routing update message is one such message that generally consists of all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of the network topology.
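As a small illustration of the destination/next-hop idea with multiple metrics, the sketch below keeps one candidate next hop per metric for each destination. The router names follow the R2/R3 example above; the cost values are purely hypothetical.

# Illustrative routing table keeping, for each destination, the best next hop
# under each metric. The cost values are hypothetical: R2 is best for D by
# hop count, while R3 is best for D by delay.

routes = {
    "D": {
        "hops":  {"next_hop": "R2", "cost": 3},
        "delay": {"next_hop": "R3", "cost": 12},   # milliseconds
    },
}

def next_hop(destination, metric):
    """Pick the next hop for a destination according to the chosen metric."""
    return routes[destination][metric]["next_hop"]

print(next_hop("D", "hops"))    # -> R2
print(next_hop("D", "delay"))   # -> R3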

As noted earlier, routing is often contrasted with bridging. The primary difference between the two is that bridging occurs at Layer 2 (the data link layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer).

Routing involves two basic activities: determining optimal routing paths and
transporting information groups (typically called packets) through an internetwork. In
the context of the routing process, the latter of these is referred to as packet switching.

The routing algorithm is that part of the network layer software responsible for
deciding which output line an incoming packet should be transmitted on.

Routing algorithms often have one or more of the following design goals:

Correctness: are the packets getting to the right destination?

Simplicity: the simpler the algorithm, the easier it is to understand, extend, modify, and improve. Routing algorithms are therefore designed to be as simple as possible.

Robustness: if something happens to the network (e.g., some lines go down, or a user on some machine starts sending out a huge amount of data), does the algorithm cope with it? This matters when a network is very dynamic and changes occur frequently. Robust algorithms perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junction points, they can cause considerable problems when they fail. The best routing algorithms are often those that have withstood the test of time and that have proven stable under a variety of network conditions.

Stability: do the routes change all the time? If the routes are unstable, even when there are only small changes to the network, the routers will incur the overhead of having to keep changing their routing tables. This must be balanced against the robustness of the algorithm.

Fairness: how well does the algorithm ensure that all routers get the same share of network utilization and the same consideration when routes are found for them?

Optimality: how well does the algorithm ensure that the cost (time, hops, distance, etc.) is kept to a minimum? This is balanced against fairness.

Routing algorithms can be divided into two major classes:

Non-adaptive algorithms or Static Routing algorithms


Adaptive algorithms or Dynamic Routing algorithms

The Optimality Principle


The optimality principle states that if router J is on the optimal path from router
I to router K, then the optimal path from J to K also falls along the same route. To see
this, call the part of the route from I to J r1 and the rest of the route r2. If a route better
than r2 existed from J to K, it could be concatenated with r1 to improve the route from I
to K, contradicting our statement that r1r2 is optimal.

Figure 4. (a) A subnet. (b) A sink tree for router B.

As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination form a tree rooted at the destination. Such a tree is called a sink tree; in Figure 4(b) the distance metric is the number of hops. Note that a sink tree is not necessarily unique; other trees with the same path lengths may exist. The goal of all routing algorithms is to discover and use the sink trees for all routers.
Since a sink tree is indeed a tree, it does not contain any loops, so each packet will be delivered within a finite and bounded number of hops. The main remaining issue is whether each router has to individually acquire the information on which to base its sink tree computation or whether this information is collected by some other means.

Static Routing Algorithms (Non - adaptive algorithms)


Routing algorithms can be grouped into two major classes: nonadaptive and
adaptive. Non-adaptive algorithms do not base their routing decisions on
measurements or estimates of the current traffic and topology. Instead, the choice of the
route to use to get from I to J (for all I and J) is computed in advance, off-line, and
downloaded to the routers when the network is booted. This procedure is sometimes
called static routing.
1. Shortest Path Routing
This is the simplest and most widely used technique. The basic idea:
Build a graph of the subnet, with each node of the graph representing a router and
each arc of the graph representing a communication line.
To choose a route between a given pair of routers, the algorithm finds the shortest
path between them.

Metrics for measuring the shortest path can be any one of the following,

The number of hops.


The geographic distance in kilometers.
The mean queuing and transmission delay.

A function of the distance, bandwidth, average traffic, communication cost, mean
queue length, measured delay, etc.
Dijkstra's algorithm is used to find the shortest path. The following steps are
performed:

1. The router builds a status record set for every node on the network. The record
contains three fields:
Predecessor field - This field shows the previous node.
Length field - This field shows the sum of the weights from the source to
that node.
Label field - This field shows the status of the node. Each node can be in one of two states: "permanent" or "tentative."
2. The router initializes the parameters of the status record set (for all nodes), setting their length to "infinity" and their label to "tentative."
3. The router sets a T-node. For example, if V1 is to be the source T-node, the router changes V1's label to "permanent." When a label changes to "permanent," it never changes again. The T-node is simply the node whose neighbors are being examined in the current step.
4. The router updates the status record set for all tentative nodes that are directly linked to the T-node: if the path through the T-node gives a smaller total length than the current length, the length and predecessor fields are updated.
5. The router looks at all of the tentative nodes in the entire network and chooses the one whose weight to V1 is lowest. That node becomes the new T-node.
6. If this node is not V2 (the intended destination), the router goes back to step 4.
7. If this node is V2, the router extracts its previous node from the status record set
and does this until it arrives at V1. This list of nodes shows the best route from
V1 to V2.
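The steps above translate directly into code. The following is a minimal sketch of Dijkstra's algorithm built around the status record (predecessor, length, label) just described; the graph representation (a dictionary of per-neighbor weights) is our own choice.

import heapq

def dijkstra(graph, source, destination):
    """Shortest path using the status-record idea described above.

    graph: dict mapping node -> {neighbor: weight}.
    Returns (path as a list of nodes, total weight).
    """
    # Steps 1-2: every node starts "tentative" with length infinity.
    length = {node: float("inf") for node in graph}
    predecessor = {node: None for node in graph}
    permanent = set()                       # nodes whose label is "permanent"

    length[source] = 0
    queue = [(0, source)]                   # candidates for the next T-node

    while queue:
        dist, t_node = heapq.heappop(queue) # step 5: lowest-weight tentative node
        if t_node in permanent:
            continue
        permanent.add(t_node)               # step 3: its label becomes permanent
        if t_node == destination:           # step 6: stop when V2 is reached
            break
        # Step 4: update tentative nodes directly linked to the T-node.
        for neighbor, weight in graph[t_node].items():
            if neighbor not in permanent and dist + weight < length[neighbor]:
                length[neighbor] = dist + weight
                predecessor[neighbor] = t_node
                heapq.heappush(queue, (length[neighbor], neighbor))

    # Step 7: walk the predecessor fields back from the destination to the source.
    path, node = [], destination
    while node is not None:
        path.append(node)
        node = predecessor[node]
    return path[::-1], length[destination]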
Example: Dijkstra Algorithm

Here we want to find the best route between A and E (see below). You can see that there
are six possible routes between A and E (ABE, ACE, ABDE, ACDE, ABDCE,
ACDBE), and it's obvious that ABDE is the best route because its weight is the lowest.
But life is not always so easy, and there are some complicated cases in which we have
to use algorithms to find the best route.

1. As you see in the image below, the source node (A) has been chosen as T-node,
and so its label is permanent (we show permanent nodes with filled circles and T-
nodes with the --> symbol).

2. In this step, you see that the status record set of tentative nodes directly linked to
T-node (B, C) has been changed. Also, since B has less weight, it has been chosen
as T-node and its label has changed to permanent (see below).

3. In this step, like in step 2, the status record set of tentative nodes that have a
direct link to T-node (D, E), has been changed. Also, since D has less weight, it
has been chosen as T-node and its label has changed to permanent (see below).

4. In this step, there are no further tentative nodes to update, so we just identify the next T-node. Since E has the least weight, it has been chosen as T-node.

5. E is the destination, so we stop here.


Now we have to identify the route. The previous node of E is D, and the previous node
of D is B, and B's previous node is A. So the best route is ABDE. In this case, the total
weight is 4 (1+2+1).
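Running the dijkstra() sketch above on the example graph: the worked example gives the weights A-B = 1, B-D = 2, and D-E = 1; the remaining weights below are assumed values chosen only so that ABDE (total weight 4) stays the cheapest of the six routes.

# Edge weights: A-B, B-D, D-E come from the worked example; the others
# (A-C, C-D, C-E, B-E) are assumed values for illustration only.
graph = {
    "A": {"B": 1, "C": 3},
    "B": {"A": 1, "D": 2, "E": 4},
    "C": {"A": 3, "D": 2, "E": 3},
    "D": {"B": 2, "C": 2, "E": 1},
    "E": {"B": 4, "C": 3, "D": 1},
}

print(dijkstra(graph, "A", "E"))   # -> (['A', 'B', 'D', 'E'], 4)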

2. Flooding

Another static routing algorithm is flooding. In flooding, every incoming packet is sent out on every outgoing line except the one it arrived on. Flooding requires no network information whatsoever. All possible routes between the source and the destination are tried, so a packet will always get through if any path exists, because every directly or indirectly connected node is visited. The main limitation of flooding is that it generates a vast number of duplicate packets, so a suitable damping mechanism is needed to overcome this limitation. One simple mechanism is a hop count carried in the packet header: the counter is decremented at each hop, and the packet is discarded when the counter reaches zero. The sender initializes the hop counter; if no estimate is known, it is set to the full diameter of the subnet. Another approach is to keep track of which packets have already been flooded, using a sequence number, and to avoid sending them out a second time. A variation that is slightly more practical is selective flooding: the routers do not send every incoming packet out on every line, only on those lines that go approximately in the direction of the destination. Some of the important uses of flooding are:

Flooding is highly robust, and could be used to send emergency messages (e.g.,
military applications).
It may be used to initially set up the route in a virtual circuit.
Flooding always finds the shortest path, since every possible path is explored in parallel, so at least one copy of the packet arrives via the shortest path.
Can be useful for the dissemination of important information to all nodes (e.g.,
routing information).

Measures for damming the flood:

A hop counter is included in the header of each packet, which is decremented at


each hop. A packet is discarded when the counter reaches zero.
A sequence number is included in each packet. Each router maintains a list per
source router telling which sequence numbers originating at that source have
already been seen. A packet is discarded when it contains a sequence number that
is in the list.

Selective flooding - an incoming packet is sent on those lines that are going
approximately in the right direction.
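A minimal sketch of flooding with the two damping measures just listed (a hop counter and per-source sequence numbers). The topology, packet layout, and router class are illustrative only.

# Minimal flooding sketch with two damping measures: a hop counter that is
# decremented at each hop, and per-source sequence numbers so that a router
# never forwards the same packet twice. Topology and packet format are
# illustrative.

class Router:
    def __init__(self, name, topology):
        self.name = name
        self.topology = topology          # router name -> set of neighbor names
        self.seen = set()                 # (source, sequence) pairs already flooded

    def receive(self, packet, arrived_from):
        source, seq, hops, payload = packet
        if hops == 0:                     # hop counter exhausted: discard
            return
        if (source, seq) in self.seen:    # duplicate: discard
            return
        self.seen.add((source, seq))
        for neighbor in self.topology[self.name]:
            if neighbor != arrived_from:  # never send back on the arrival line
                routers[neighbor].receive((source, seq, hops - 1, payload), self.name)

topology = {"A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "D"}, "D": {"B", "C"}}
routers = {name: Router(name, topology) for name in topology}

# A originates packet number 1; the hop counter is set to an estimate of the
# subnet diameter so the flood dies out on its own.
routers["A"].receive(("A", 1, 3, "hello"), arrived_from=None)
print(sorted(name for name, r in routers.items() if ("A", 1) in r.seen))  # ['A', 'B', 'C', 'D']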

Possible applications of flooding:

In military applications, to withstand the crash of large numbers of routers at any instant.

As a benchmark (flooding always finds the shortest path) against which other routing algorithms can be compared.

In distributed database applications, to update all the databases concurrently.

3. Flow Based Routing

Flow-based routing takes both the topology and the load into account. It is suitable in networks (e.g., a corporate network for a retail store chain) in which the mean data flow between each pair of nodes is relatively stable (approximately constant in time) and predictable.

Information known in advance:

The subnet topology.


The capacity for each line in the subnet.
The average traffic (pkts/sec) between any pair of nodes.

Main calculations involved:

Tentatively choose a routing algorithm.


Apply the tentatively chosen routing algorithm to the known subnet topology to
select a path from each node to all other nodes.
For each line, calculate the average flow according to the selected path and the
known average traffic between any pair of nodes.
For each line, compute the mean packet delay on that line from queueing theory (assuming a known mean packet size).
For each line, calculate the flow-weight: the fraction of the total traffic using that line.
For the whole subnet, calculate the mean packet delay as the flow-weighted average of the per-line delays.

The routing problem reduces to finding the routing algorithm (from a collection of
known single path routing algorithms) that produces the minimum average delay for the
entire subnet. Since the calculations can be done off-line, the fact that it may be time
consuming is not necessarily a serious problem.
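The per-line and subnet-wide delay calculations above can be made concrete with the standard M/M/1 queueing result T = 1/(μC - λ), where 1/μ is the mean packet size in bits, C the line capacity in bit/s, and λ the flow on the line in packets/sec. The line capacities, flows, and mean packet size below are assumed values used only to illustrate the calculation.

# Sketch of the flow-based routing delay calculation. For each line we know its
# capacity C (bit/s) and the flow lam (packets/sec) implied by the chosen routes
# and the known traffic matrix; the mean packet size is 1/mu bits. The per-line
# mean delay is T = 1 / (mu*C - lam); the subnet-wide mean delay weights each
# line's delay by the fraction of the total traffic that uses the line.

mean_packet_size_bits = 800.0            # 1/mu, assumed
mu = 1.0 / mean_packet_size_bits

lines = {                                # line -> (capacity in bit/s, flow in packets/sec)
    "AB": (20_000, 14),
    "BC": (20_000, 12),
    "CD": (10_000, 6),
}

total_flow = sum(flow for _, flow in lines.values())

mean_delay = 0.0
for name, (capacity, flow) in lines.items():
    t_line = 1.0 / (mu * capacity - flow)      # seconds (M/M/1 mean delay)
    weight = flow / total_flow                 # flow-weight of this line
    mean_delay += weight * t_line
    print(f"line {name}: mean delay = {t_line * 1000:.1f} ms")

print(f"subnet mean delay = {mean_delay * 1000:.1f} ms")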

Dynamic Routing Algorithms (Adaptive algorithms)


Adaptive algorithms, in contrast, change their routing decisions to reflect
changes in the topology, and usually the traffic as well. Adaptive algorithms differ in
where they get their information (e.g., locally, from adjacent routers, or from all
routers), when they change the routes (e.g., every T sec, when the load changes or
when the topology changes), and what metric is used for optimization (e.g., distance,
number of hops, or estimated transit time).
Modern computer networks generally use dynamic routing algorithms rather than
the static ones described above because static algorithms do not take the current
network load into account.
Two dynamic algorithms in particular are,
Distance Vector Routing
Link State Routing

1. Distance Vector Routing


Distance vector routing algorithms operate by having each router maintain a table (i.e., a vector) giving the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with the neighbors.
The distance vector routing algorithm is sometimes called by other names, most commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson algorithm, after the researchers who developed it.
In distance vector routing, each router maintains a routing table indexed by, and
containing one entry for, each router in the subnet. This entry contains two parts: the
preferred outgoing line to use for that destination and an estimate of the time or distance
to that destination. The metric used might be number of hops, time delay in
milliseconds, total number of packets queued along the path, or something similar.
As an example, assume that delay is used as a metric and that the router knows
the delay to each of its neighbors. Once every T msec each router sends to each
neighbor a list of its estimated delays to each destination. It also receives a similar list
from each neighbor. Imagine that one of these tables has just come in from neighbor X,
with Xi being X's estimate of how long it takes to get to router i. If the router knows that
the delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec.
This updating process is illustrated in Fig. 5. Part (a) shows a subnet. The first
four columns of part (b) show the delay vectors received from the neighbors of router J.
A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K as 8,
10, 12, and 6 msec, respectively.

Figure 5. (a) A subnet. (b) Input from A, I, H, K, and the new routing
table for J.

Consider how J computes its new route to router G. It knows that it can get to A
in 8 msec, and A claims to be able to get to G in 18 msec, so J knows it can count on a
delay of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the

delay to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec,
respectively. The best of these values is 18, so it makes an entry in its routing table that
the delay to G is 18 msec and that the route to use is via H.
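The computation J performs for destination G generalizes to every destination in the table. The sketch below reproduces it with the numbers quoted above (J's delays to its neighbors of 8, 10, 12, and 6 msec, and the neighbors' advertised delays to G); only the G entries of the neighbors' vectors are used, since the full vectors are not reproduced here.

# Distance vector update at router J for one destination (G), using the values
# quoted in the text. For every destination the new estimate is the minimum over
# all neighbors of (delay to the neighbor + the neighbor's advertised delay).

delay_to_neighbor = {"A": 8, "I": 10, "H": 12, "K": 6}    # J's measured delays (msec)
advertised_to_G = {"A": 18, "I": 31, "H": 6, "K": 31}     # neighbors' estimates to G

def best_route(delay_to_neighbor, advertised):
    candidates = {n: delay_to_neighbor[n] + advertised[n] for n in advertised}
    via = min(candidates, key=candidates.get)
    return via, candidates[via], candidates

via, cost, candidates = best_route(delay_to_neighbor, advertised_to_G)
print(candidates)    # {'A': 26, 'I': 41, 'H': 18, 'K': 37}
print(via, cost)     # H 18 -> J enters "G: 18 msec via H" in its new table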
The Count-to-Infinity Problem
Distance vector routing works in theory but has a serious drawback in practice:
although it converges to the correct answer, it may do so slowly. In particular, it reacts
rapidly to good news, but leisurely to bad news. Consider the five-node (linear) subnet
of Fig. 6, where the delay metric is the number of hops. Suppose A is down initially and
all the other routers know this. In other words, they have all recorded the delay to A as
infinity.
When A comes up, the other routers learn about it via the vector exchanges. At
the time of the first exchange, B learns that its left neighbor has zero delay to A. B now
makes an entry in its routing table that A is one hop away to the left. All the other
routers still think that A is down. At this point, the routing table entries for A are as
shown in the second row of Fig. 6(a). On the next exchange, C learns that B has a path
of length 1 to A, so it updates its routing table to indicate a path of length 2, but D and E
do not hear the good news until later. Clearly, the good news is spreading at the rate of
one hop per exchange. Consider the situation of Fig. 6(b), in which all the lines and
routers are initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4,
respectively. Suddenly A goes down, or alternatively, the line between A and B is cut.

Figure 6. The count-to-infinity problem.

At the first packet exchange, B does not hear anything from A. Fortunately, C
says: Do not worry; I have a path to A of length 2. Little does B know that C's path runs
through B itself. For all B knows, C might have ten lines all with separate paths to A of
length 2. As a result, B thinks it can reach A via C, with a path length of 3. D and E do
not update their entries for A on the first exchange. On the second exchange, C notices
that each of its neighbors claims to have a path to A of length 3. It picks one of them at random and makes its new distance to A 4.
From this figure, it should be clear why bad news travels slowly: no router ever
has a value more than one higher than the minimum of all its neighbors. Gradually, all
routers work their way up to infinity, but the number of exchanges required depends on
the numerical value used for infinity.
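A minimal synchronous simulation of the linear subnet A-B-C-D-E makes both effects visible: with the A-B link up, the good news propagates one hop per exchange; with the link cut, the remaining routers count upward together. Assuming that all routers update simultaneously from their neighbors' previous vectors is a simplification of the exchange process.

# Count-to-infinity on the linear subnet A-B-C-D-E with the hop-count metric.
# At each exchange, every router sets its distance to A to
# 1 + min(previous distances of its neighbors); B also hears A directly
# whenever the A-B link is up.

INF = float("inf")
neighbors = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}

def exchange(dist, a_link_up):
    new = {}
    for r in dist:
        best = min(dist[n] for n in neighbors[r])
        if r == "B" and a_link_up:
            best = min(best, 0)            # A advertises distance 0 to itself
        new[r] = best + 1
    return new

# Good news: A comes up, everyone starts at infinity.
dist = {"B": INF, "C": INF, "D": INF, "E": INF}
for i in range(4):
    dist = exchange(dist, a_link_up=True)
    print("after exchange", i + 1, dist)   # converges in 4 exchanges: 1, 2, 3, 4

# Bad news: start from the converged state, then the A-B link is cut.
dist = {"B": 1, "C": 2, "D": 3, "E": 4}
for i in range(6):
    dist = exchange(dist, a_link_up=False)
    print("after exchange", i + 1, dist)   # distances slowly count upward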
Distance Vector Routing in Detail

In distance vector routing, the least-cost route between any two nodes is the
route with minimum distance. In this protocol, as the name implies, each node
maintains a vector (table) of minimum distances to every node.

Figure 1. Distance vector routing tables

In Figure 1, we show a system of five nodes with their corresponding tables. The table
for node A shows how we can reach any node from this node. For example, our least
cost to reach node E is 6. The route passes through C.

Initialization
The tables in Figure 1 are stable; each node knows how to reach any other node and the
cost. At the beginning, however, this is not the case. Each node can know only the
distance between itself and its immediate neighbors, those directly connected to it. So
for the moment, we assume that each node can send a message to the immediate
neighbors and find the distance between itself and these neighbors. Figure 2 shows the
initial tables for each node. The distance for any entry that is not a neighbor is marked
as infinite (unreachable).

Figure 2. Initialization of tables in distance vector routing

Sharing
The whole idea of distance vector routing is the sharing of information between
neighbors. Although node A does not know about node E, node C does. So if node C
shares its routing table with A, node A can also know how to reach node E. On the other
hand, node C does not know how to reach node D, but node A does. If node A shares its
routing table with node C, node C also knows how to reach node D. In other words,
nodes A and C, as immediate neighbors, can improve their routing tables if they help
each other.

There is only one problem. How much of the table must be shared with each
neighbor? A node is not aware of a neighbor's table. The best solution for each node is
to send its entire table to the neighbor and let the neighbor decide what part to use and
what part to discard. However, the third column of a table (next stop) is not useful for
the neighbor. When the neighbor receives a table, this column needs to be replaced with
the sender's name. If any of the rows can be used, the next node is the sender of the
table. A node therefore can send only the first two columns of its table to any neighbor.
In other words, sharing here means sharing only the first two columns.

In distance vector routing, each node shares its routing table with its immediate
neighbors periodically and when there is a change.

Updating
When a node receives a two-column table from a neighbor, it needs to update its routing
table. Updating takes three steps:

1. The receiving node needs to add the cost between itself and the sending node to
each value in the second column. The logic is clear. If node C claims that its
distance to a destination is x mi, and the distance between A and C is y mi, then
the distance between A and that destination, via C, is x + y mi.
2. The receiving node needs to add the name of the sending node to each row as the
third column if the receiving node uses information from any row. The sending
node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the
corresponding row of the modified version of the received table.

a. If the next-node entry is different, the receiving node chooses the row
with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new
row. For example, suppose node C has previously advertised a route to
node X with distance 3. Suppose that now there is no path between C and
X; node C now advertises this route with a distance of infinity. Node A
must not ignore this value even though its old entry is smaller. The old
route does not exist any more. The new route has a distance of infinity.
Figure 3 shows how node A updates its routing table after receiving the partial table
from node C.

Figure 3. Updating in distance vector routing

There are several points we need to emphasize here. First, as we know from
mathematics, when we add any number to infinity, the result is still infinity. Second, the
modified table shows how to reach A from A via C. If A needs to reach itself via C, it
needs to go to C and come back, a distance of 4. Third, the only benefit from this
updating of node A is the last entry, how to reach E. Previously, node A did not know
how to reach E (distance of infinity); now it knows that the cost is 6 via C. Each node
can update its table by using the tables received from other nodes. In a short time, if
there is no change in the network itself, such as a failure in a link, each node reaches a
stable condition in which the contents of its table remain the same.
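The three-step update rule can be sketched directly. Note the effect of step 3b: if the advertised row comes from the node that is already the next hop, the new cost always replaces the old one, even when it is worse or infinite. The table layout (destination mapped to a cost and next-hop pair) is our own, and the numeric entries in the example are assumptions consistent with the description above (A-C cost 2, old E entry infinite, new E cost 6 via C).

INF = float("inf")

def dv_update(my_table, sender, cost_to_sender, received):
    """Merge a neighbor's two-column table into my_table.

    my_table: dict destination -> (cost, next_hop)
    received: dict destination -> cost, as advertised by `sender`.
    """
    for dest, advertised in received.items():
        new_cost = cost_to_sender + advertised            # step 1: add the link cost
        old_cost, old_next = my_table.get(dest, (INF, None))
        if old_next == sender:
            # Step 3b: the route already goes via the sender, so always take
            # the new cost, even if it is worse (for example, infinity).
            my_table[dest] = (new_cost, sender)
        elif new_cost < old_cost:
            # Step 3a: a different next hop is accepted only if strictly
            # better; ties keep the old row.
            my_table[dest] = (new_cost, sender)
    return my_table

# Node A merges node C's table; the A-C link cost is 2, as in Figure 3.
A = {"A": (0, None), "B": (5, "B"), "C": (2, "C"), "D": (3, "D"), "E": (INF, None)}
C = {"A": 2, "B": 4, "C": 0, "D": INF, "E": 4}
print(dv_update(A, "C", 2, C))
# Only the E entry changes: it becomes (6, 'C'); the other entries keep their
# smaller existing costs, and D stays 3 because C advertises infinity for D.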
When to Share
The table is sent both periodically and when there is a change in the table.

Periodic Update A node sends its routing table, normally every 30 s, in a


periodic update. The period depends on the protocol that is using distance vector
routing.

Triggered Update A node sends its two-column routing table to its neighbors
anytime there is a change in its routing table. This is called a triggered update.
The change can result from the following.

1. A node receives a table from a neighbor, resulting in changes in its own table
after updating.
2. A node detects some failure in the neighboring links which results in a
distance change to infinity.


2. Link State Routing


Distance vector routing was used in the ARPANET until it was replaced by link
state routing. Two primary problems caused its demise. First, since the delay metric was
queue length, it did not take line bandwidth into account when choosing routes. A
second problem also existed, namely, the algorithm often took too long to converge (the
count-to-infinity problem). For these reasons, it was replaced by an entirely new
algorithm, now called link state routing.
The idea behind link state routing is simple and can be stated as five steps. Each
router must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
The complete topology and all delays are experimentally measured and
distributed to every router. Then Dijkstra's algorithm can be run to find the shortest path
to every other router.

Learning about the Neighbors


When a router is booted, its first task is to learn who its neighbors are. It
accomplishes this goal by sending a special HELLO packet on each point-to-point line.
The router on the other end is expected to send back a reply telling who it is. These
names must be globally unique because when a distant router later hears that three
routers are all connected to F, it is essential that it can determine whether all three mean

the same F. When two or more routers are connected by a LAN, the situation is slightly
more complicated. Fig. 7(a) illustrates a LAN to which three routers, A, C, and F, are
directly connected. Each of these routers is connected to one or more additional routers,
as shown.

Figure 7. (a) Nine routers and a LAN. (b) A graph model of (a).

One way to model the LAN is to consider it as a node itself, as shown in Fig.
7(b). Here we have introduced a new, artificial node, N, to which A, C, and F are
connected. The fact that it is possible to go from A to C on the LAN is represented by
the path ANC here.

Measuring Line Cost


The link state routing algorithm requires each router to know, or at least have a
reasonable estimate of, the delay to each of its neighbors. The most direct way to
determine this delay is to send over the line a special ECHO packet that the other side is
required to send back immediately. By measuring the round-trip time and dividing it by
two, the sending router can get a reasonable estimate of the delay. For even better
results, the test can be conducted several times, and the average used.
An interesting issue is whether to take the load into account when measuring the
delay. To factor the load in, the round-trip timer must be started when the ECHO packet
is queued. To ignore the load, the timer should be started when the ECHO packet
reaches the front of the queue. Including traffic-induced delays in the measurements

means that when a router has a choice between two lines with the same bandwidth, one
of which is heavily loaded all the time and one of which is not, the router will regard the
route over the unloaded line as a shorter path. This choice will result in better
performance.
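A minimal sketch of the measurement just described: send several ECHO packets, take half of each round-trip time, and average the results. The send_echo callable is an illustrative stand-in for the real line I/O.

import time

def measure_line_delay(send_echo, samples=5):
    """Estimate the one-way delay to a neighbor as RTT/2, averaged over samples.

    send_echo() stands in for transmitting an ECHO packet on the line and
    blocking until the reply comes back. Starting the timer when the packet is
    queued (rather than when it is actually sent) would fold queueing delay,
    i.e. the load, into the measurement.
    """
    estimates = []
    for _ in range(samples):
        start = time.monotonic()          # timer started when the ECHO goes out
        send_echo()                       # the other side echoes it back at once
        rtt = time.monotonic() - start
        estimates.append(rtt / 2.0)       # one-way delay is about half the RTT
    return sum(estimates) / len(estimates)

# Example with a stand-in neighbor whose reply takes about 4 ms:
print(measure_line_delay(lambda: time.sleep(0.004)))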

Building Link State Packets


Once the information needed for the exchange has been collected, the next step is
for each router to build a packet containing all the data. The packet starts with the
identity of the sender, followed by a sequence number and age (to be described later),
and a list of neighbors. For each neighbor, the delay to that neighbor is given. An
example subnet is given in Fig. 8(a) with delays shown as labels on the lines. The
corresponding link state packets for all six routers are shown in Fig. 8(b).

Figure 8. (a) A subnet. (b) The link state packets for this subnet.

Building the link state packets is easy. The hard part is determining when to build
them.
One possibility is to build them periodically, that is, at regular intervals.
Another possibility is to build them when some significant event occurs,
such as a line or neighbor going down or coming back up again or
changing its properties appreciably.
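A link state packet as described (sender identity, sequence number, age, and a list of neighbors with their delays) can be sketched as a small data structure. The field layout and the example neighbor delays are illustrative, not taken from a real protocol.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LinkStatePacket:
    sender: str                      # identity of the originating router
    seq: int                         # incremented for each new packet sent
    age: int                         # decremented once per second; 0 means discard
    neighbors: Dict[str, int] = field(default_factory=dict)   # neighbor -> delay

def build_lsp(router, seq, measured_delays, max_age=60):
    """Build this router's link state packet from its measured neighbor delays."""
    return LinkStatePacket(sender=router, seq=seq, age=max_age,
                           neighbors=dict(measured_delays))

# Example: a router A with two neighbors (the delay values are illustrative).
print(build_lsp("A", seq=1, measured_delays={"B": 4, "E": 5}))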

Distributing the Link State Packets


The fundamental idea is to use flooding to distribute the link state packets. To
keep the flood in check, each packet contains a sequence number that is incremented for

each new packet sent. Routers keep track of all the (source router, sequence) pairs they
see. When a new link state packet comes in, it is checked against the list of packets
already seen. If it is new, it is forwarded on all lines except the one it arrived on. If it is
a duplicate, it is discarded. If a packet with a sequence number lower than the highest
one seen so far ever arrives, it is rejected as being obsolete since the router has more
recent data.
This algorithm has a few problems, but they are manageable.
First, if the sequence numbers wrap around, confusion will reign. The
solution here is to use a 32-bit sequence number. With one link state packet
per second, it would take 137 years to wrap around, so this possibility can
be ignored.
Second, if a router ever crashes, it will lose track of its sequence number. If
it starts again at 0, the next packet will be rejected as a duplicate.
Third, if a sequence number is ever corrupted and 65,540 is received
instead of 4 (a 1-bit error), packets 5 through 65,540 will be rejected as
obsolete, since the current sequence number is thought to be 65,540.

The solution to all these problems is to include the age of each packet after the
sequence number and decrement it once per second. When the age hits zero, the
information from that router is discarded. Normally, a new packet comes in, say, every
10 sec, so router information only times out when a router is down (or six consecutive
packets have been lost, an unlikely event). The Age field is also decremented by each
router during the initial flooding process, to make sure no packet can get lost and live
for an indefinite period of time (a packet whose age is zero is discarded).
Some refinements to this algorithm make it more robust. When a link state packet
comes in to a router for flooding, it is not queued for transmission immediately. Instead
it is first put in a holding area to wait a short while. If another link state packet from the
same source comes in before the first packet is transmitted, their sequence numbers are
compared. If they are equal, the duplicate is discarded. If they are different, the older
one is thrown out. To guard against errors on the router-router lines, all link state
packets are acknowledged. When a line goes idle, the holding area is scanned in round-
robin order to select a packet or acknowledgement to send.
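The acceptance test applied to each arriving link state packet (new sequence number: flood it; duplicate or lower than the highest seen: discard; age zero: discard) can be sketched as follows. The per-source bookkeeping structure is our own.

# Sketch of the acceptance rule a router applies to an arriving link state
# packet before flooding it onward on all lines except the arrival line.

highest_seen = {}     # source router -> highest sequence number accepted so far

def accept_lsp(source, seq, age):
    """Return True if the packet is new and should be flooded onward."""
    if age == 0:
        return False                               # dead information, discard
    if source in highest_seen and seq <= highest_seen[source]:
        return False                               # duplicate or obsolete, discard
    highest_seen[source] = seq                     # record the newest sequence
    return True

print(accept_lsp("F", seq=21, age=60))   # True  - first packet seen from F
print(accept_lsp("F", seq=21, age=59))   # False - duplicate
print(accept_lsp("F", seq=20, age=60))   # False - obsolete (lower sequence)
print(accept_lsp("F", seq=22, age=60))   # True  - newer information
# (In a real router the stored entry also carries an age that is decremented
#  once per second; when it hits zero the entry is purged, which is what lets
#  a rebooted router start again at sequence number 0.)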
Computing the New Routes
Once a router has accumulated a full set of link state packets, it can construct the
entire subnet graph because every link is represented. Every link is, in fact, represented
twice, once for each direction. The two values can be averaged or used separately. Now
Dijkstra's algorithm can be run locally to construct the shortest path to all possible
destinations. The results of this algorithm can be installed in the routing tables, and
normal operation resumed.
For a subnet with n routers, each of which has k neighbors, the memory required
to store the input data is proportional to kn. For large subnets, this can be a problem.
Also, the computation time can be an issue. Nevertheless, in many practical situations,
link state routing works well.

Multicasting
Some applications require that widely separated processes work together as a group. Sending a message to such a group is called multicasting, and its routing
algorithm is called multicast routing. Multicasting requires group management. When
a process joins a group, it informs its host of this fact. It is important that routers know
which of their hosts belong to which groups. Either hosts must inform their routers
about changes in group membership, or routers must query their hosts periodically.
Either way, routers learn about which of their hosts are in which groups.
In multicast communication, there is one source and a group of destinations. The
relationship is one-to-many. In this type of communication, the source address is a
unicast address, but the destination address is a group address, which defines one or
more destinations. The group address identifies the members of the group. Figure 9
shows the idea behind multicasting.

Figure 9. Multicasting

A multicast packet starts from the source S1 and goes to all destinations that belong to
group G1. In multicasting, when a router receives a packet, it may forward it through
several of its interfaces.

Figure 10. (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for group 1. (d) A multicast tree for group 2.
To do multicast routing, each router computes a spanning tree covering all other routers.
For example, in Fig. 10(a) we have two groups, 1 and 2. Some routers are attached to
hosts that belong to one or both of these groups, as indicated in the figure. A spanning
tree for the leftmost router is shown in Fig. 10(b).

When a process sends a multicast packet to a group, the first router examines its
spanning tree and prunes it, removing all lines that do not lead to hosts that are
members of the group. In our example, Fig. 10(c) shows the pruned spanning tree for
group 1. Similarly, Fig. 10(d) shows the pruned spanning tree for group 2. Multicast
packets are forwarded only along the appropriate spanning tree.
Various ways of pruning the spanning tree are possible. The simplest one can be
used if link state routing is used and each router is aware of the complete topology,
including which hosts belong to which groups. Then the spanning tree can be pruned,
starting at the end of each path, working toward the root, and removing all routers that
do not belong to the group in question.
With distance vector routing, a different pruning strategy can be followed. The
basic algorithm is reverse path forwarding. However, whenever a router with no hosts
interested in a particular group and no connections to other routers receives a multicast
message for that group, it responds with a PRUNE message, telling the sender not to
send it any more multicasts for that group. When a router with no group members
among its own hosts has received such messages on all its lines, it, too, can respond
with a PRUNE message. In this way, the subnet is recursively pruned.
One potential disadvantage of this algorithm is that it scales poorly to large
networks. Suppose that a network has n groups, each with an average of m members.
For each group, m pruned spanning trees must be stored, for a total of mn trees. When
many large groups exist, considerable storage is needed to store all the trees.
An alternative design uses core-based trees (Ballardie et al., 1993). Here, a
single spanning tree per group is computed, with the root (the core) near the middle of

the group. To send a multicast message, a host sends it to the core, which then does the
multicast along the spanning tree. Although this tree will not be optimal for all sources,
the reduction in storage costs from m trees to one tree per group is a major saving.
One way of doing multicast routing to a particular group:
Compute a spanning tree joining all nodes (there may be more than one such tree).
Delete the links that do not lead to members of the group.
Using this pruned spanning tree, forward the packet to all neighbors in the pruned spanning tree.
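A minimal sketch of the pruning rule described above: starting at the leaves of the spanning tree and working toward the root, remove every router that has no group members and no longer leads to one. The tree representation (child-to-parent links) and the membership set are illustrative.

# Prune a spanning tree for one multicast group: repeatedly remove leaf routers
# that have no hosts in the group, until every remaining leaf is a group member.

def prune(parent, members):
    """parent: dict mapping each non-root router to its parent in the spanning tree.
    members: set of routers with hosts that belong to the group.
    Returns the pruned child -> parent map (the multicast tree).
    """
    tree = dict(parent)
    while True:
        parents_in_use = set(tree.values())
        # Leaves appear as children but never as a parent of a remaining node.
        removable = [r for r in tree if r not in parents_in_use and r not in members]
        if not removable:
            return tree
        for r in removable:
            del tree[r]          # cut the line leading to a router nobody needs

# Illustrative spanning tree rooted at A; routers C and E have group members.
spanning_tree = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "D"}
print(prune(spanning_tree, members={"C", "E"}))   # -> {'B': 'A', 'C': 'A', 'E': 'B'}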

Routing Protocols
During the last few decades, several multicast routing protocols have emerged. Some of
these protocols are extensions of unicast routing protocols; others are totally new.

Figure 11. Taxonomy of common multicast protocols

Multicast Routing When a router receives a multicast packet, the situation is


different from when it receives a unicast packet. A multicast packet may have
destinations in more than one network. Forwarding of a single packet to members of a
group requires a shortest path tree. If we have n groups, we may need n shortest path
trees. We can imagine the complexity of multicast routing. Two approaches have been
used to solve the problem: source-based trees and group-shared trees.

Source-Based Tree. In the source-based tree approach, each router needs to have one
shortest path tree for each group. The shortest path tree for a group defines the next hop

for each network that has loyal member(s) for that group. In Figure 12, we assume that
we have only five groups in the domain: G1, G2, G3, G4, and G5. At the moment G1
has loyal members in four networks, G2 in three, G3 in two, G4 in two, and G5 in two.
We have shown the names of the groups with loyal members on each network. Figure
12 also shows the multicast routing table for router R1. There is one shortest path tree
for each group; therefore there are five shortest path trees for five groups. If router R1
receives a packet with destination address G1, it needs to send a copy of the packet to
the attached network, a copy to router R2, and a copy to router R4 so that all members
of G1 can receive a copy. In this approach, if the number of groups is m, each router
needs to have m shortest path trees, one for each group. We can imagine the complexity
of the routing table if we have hundreds or thousands of groups. However, we will show
how different protocols manage to alleviate the situation.

Figure 12. Source-based tree approach

In the source-based tree approach, each router needs to have one shortest path tree
for each group.
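Router R1's table in Figure 12 can be pictured as one entry per group listing the interfaces over which a copy must be forwarded. The G1 entry follows the text (the attached network, R2, and R4); the entries for the other groups are placeholders.

# Source-based tree approach: each router keeps, per group, the set of next
# hops (interfaces) over which an arriving multicast packet must be copied.
# Entries other than G1 are illustrative placeholders.

r1_multicast_table = {
    "G1": ["attached network", "R2", "R4"],
    "G2": ["R2"],
    "G3": ["R4"],
}

def forward_multicast(table, group, packet):
    """Send one copy of the packet out on every interface listed for the group."""
    for interface in table.get(group, []):
        print(f"copy of {packet} forwarded to {interface}")

forward_multicast(r1_multicast_table, "G1", "packet-to-G1")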

Group-Shared Tree. In the group-shared tree approach, instead of each router having
m shortest path trees, only one designated router, called the center core, or rendezvous
router, takes the responsibility of distributing multicast traffic. The core has m shortest
path trees in its routing table. The rest of the routers in the domain have none. If a router
receives a multicast packet, it encapsulates the packet in a unicast packet and sends it to
the core router. The core router removes the multicast packet from its capsule, and
consults its routing table to route the packet. Figure 13 shows the idea.

Figure 13. Group-shared tree approach

In the group-shared tree approach, only the core router, which has a shortest path
tree for each group, is involved in multicasting.
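A minimal sketch of the division of labour in the group-shared tree approach follows: a non-core router only tunnels (encapsulates) the multicast packet to the core, and the core alone consults a per-group tree. The core address, packet layout, and function names are assumptions for illustration.

```python
# Sketch of group-shared tree forwarding. Only the core (rendezvous) router
# keeps per-group shortest path trees; other routers tunnel multicast packets
# to it inside unicast packets. All addresses and names are illustrative.

CORE_ADDRESS = "10.0.0.1"            # assumed unicast address of the core router

def non_core_forward(multicast_packet):
    # encapsulate the multicast packet as the payload of a unicast packet
    return {"dst": CORE_ADDRESS, "payload": multicast_packet}

def core_forward(unicast_packet, core_table):
    multicast_packet = unicast_packet["payload"]       # decapsulate
    group = multicast_packet["group"]
    for next_hop in core_table.get(group, ()):         # consult the group's tree
        print(f"core sends a copy toward {next_hop}")

pkt = {"group": "G1", "data": "hello"}
core_forward(non_core_forward(pkt), {"G1": ["R2", "R4"]})
```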

Link state multicasting
In unicast link state routing, each router creates a shortest path tree by using
Dijkstra's algorithm. The routing table is a translation of the shortest path tree.
Multicast link state routing is a direct extension of unicast routing and uses a source-
based tree approach. Although unicast routing is quite involved, the extension to
multicast routing is very simple and straightforward.

Multicast link state routing uses the source-based tree approach.

In unicast routing, each node needs to advertise the state of its links. For multicast
routing, a node needs to revise the interpretation of state. A node advertises every group
which has any loyal member on the link. Here the meaning of state is which groups are
active on this link. The information about the groups comes from IGMP. Each router
running IGMP solicits the hosts on the link to find out the membership status.

[ The Internet Group Management Protocol (IGMP) is one of the necessary, but not sufficient, protocols involved in multicasting. IGMP is a companion to the IP protocol.
Group Management: For multicasting in the Internet we need routers that are able to route multicast packets. The routing tables of these routers must be updated by using one of the multicast routing protocols. IGMP is not a multicast routing protocol; it is a protocol that manages group membership. In any network, there are one or more multicast routers that distribute multicast packets to hosts or other routers. The IGMP protocol gives the multicast routers information about the membership status of hosts (routers) connected to the network. A multicast router may receive thousands of multicast packets every day for different groups. If a router has no knowledge about the membership status of the hosts, it must broadcast all these packets. This creates a lot of traffic and consumes bandwidth. A better solution is to keep a list of groups in the network for which there is at least one loyal member. IGMP helps the multicast router create and update this list. IGMP is a group management protocol; it helps a multicast router create and update a list of loyal members related to each router interface. ]
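As a rough picture of the list IGMP helps maintain, the sketch below keeps, per router interface, the set of groups with at least one loyal member. Real IGMP works with query and report messages; here they are reduced to simple function calls, and all names are assumptions.

```python
# Illustrative sketch of the per-interface group membership list that IGMP
# lets a multicast router maintain (not the IGMP message format itself).

membership = {}   # interface -> set of groups with at least one loyal member

def report(interface, group):            # a host on 'interface' joins 'group'
    membership.setdefault(interface, set()).add(group)

def leave(interface, group):             # the last member on 'interface' leaves
    membership.get(interface, set()).discard(group)

def should_forward(interface, group):    # forward only where there is a member
    return group in membership.get(interface, set())

report("m2", "G1")
print(should_forward("m2", "G1"), should_forward("m3", "G1"))   # True False
```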

When a router receives all these LSPs (Link State Packets), it creates n topologies (where n is the number of groups), from which n shortest path trees are made by using Dijkstra's algorithm. So each router has a routing table that represents as many shortest path trees as there are groups. The only problem with this protocol is the time and space needed to create and save the many shortest path trees. The solution is to create the trees only when needed. When a router receives a packet with a multicast destination address, it runs the Dijkstra algorithm to calculate the shortest path tree for that group. The result can be cached in case there are additional packets for that destination.
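The "build the tree only when needed, then cache it" idea can be sketched as follows. The Dijkstra routine is an ordinary single-source shortest-path computation over the link state database; the database format, cache key, and function names are assumptions for the sketch.

```python
import heapq

# Sketch of on-demand shortest path tree computation with caching, as in
# multicast link state routing. 'lsdb' is a link state database of the form
# {node: {neighbour: cost}}; the layout and names are illustrative only.

_tree_cache = {}            # (source, group) -> shortest path tree (parent map)

def dijkstra_tree(lsdb, source):
    """Return a parent map describing the shortest path tree rooted at source."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, cost in lsdb.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v], parent[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    return parent

def tree_for(lsdb, source, group):
    """Compute the tree only for the first packet of a (source, group) pair."""
    key = (source, group)
    if key not in _tree_cache:           # compute on demand ...
        _tree_cache[key] = dijkstra_tree(lsdb, source)
    return _tree_cache[key]              # ... then reuse the cached result
```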

MOSPF. The Multicast Open Shortest Path First (MOSPF) protocol is an extension of the OSPF protocol that uses multicast link state routing to create source-based trees. The protocol requires a new link state update packet to associate the unicast
address of a host with the group address or addresses the host is sponsoring. This packet
is called the group-membership LSA. In this way, we can include in the tree only the
hosts (using their unicast addresses) that belong to a particular group. In other words,
we make a tree that contains all the hosts belonging to a group, but we use the unicast
address of the host in the calculation. For efficiency, the router calculates the shortest
path trees on demand (when it receives the first multicast packet). In addition, the tree
can be saved in cache memory for future use by the same source/group pair. MOSPF is
a data-driven protocol; the first time an MOSPF router sees a datagram with a given
source and group address, the router constructs the Dijkstra shortest path tree.
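The group-membership LSA mentioned above can be pictured as a small record tying a host's unicast address to the group address(es) it is sponsoring. The fields below are a simplified assumption, not the actual OSPF LSA wire format.

```python
from dataclasses import dataclass, field

# Simplified, illustrative view of an MOSPF group-membership LSA; this is a
# sketch of the idea, NOT the real OSPF link state advertisement format.

@dataclass
class GroupMembershipLSA:
    advertising_router: str                   # router that learned membership via IGMP
    host_unicast_address: str                 # unicast address used in the tree calculation
    groups: set = field(default_factory=set)  # group address(es) the host is sponsoring

lsa = GroupMembershipLSA("R1", "192.168.1.10", {"225.1.1.1"})
```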

Distance vector multicasting


Unicast distance vector routing is very simple; extending it to support multicast
routing is complicated. Multicast routing does not allow a router to send its routing
table to its neighbors. The idea is to create a table from scratch by using the information
from the unicast distance vector tables.
Multicast distance vector routing uses source-based trees, but the router never
actually makes a routing table. When a router receives a multicast packet, it forwards
the packet as though it is consulting a routing table. We can say that the shortest path
tree is evanescent. After its use (after a packet is forwarded) the table is destroyed.
To accomplish this, the multicast distance vector algorithm uses a process based on four
decision-making strategies. Each strategy is built on its predecessor. We explain them
one by one and see how each strategy remedies the shortcomings of the previous
one.
Flooding. Flooding is the first strategy that comes to mind. A router receives a
packet and, without even looking at the destination group address, sends it out
from every interface except the one from which it was received. Flooding
accomplishes the first goal of multicasting: every network with active members
receives the packet. However, so will networks without active members. This is a
broadcast, not a multicast. There is another problem: it creates loops. A packet
that has left the router may come back again from another interface or the same
interface and be forwarded again. Some flooding protocols keep a copy of the
packet for a while and discard any duplicates to avoid loops. The next strategy,
reverse path forwarding, corrects this defect.

Flooding broadcasts packets, but creates loops in the systems.
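A minimal sketch of this first strategy, including the duplicate-suppression refinement mentioned above, might look as follows; the interface names and packet-id scheme are assumptions.

```python
# Sketch of flooding with duplicate suppression: remember the identifiers of
# packets already seen, otherwise send the packet out of every interface
# except the one it arrived on. Names are illustrative.

seen_ids = set()

def flood(packet_id, in_interface, interfaces):
    if packet_id in seen_ids:            # duplicate: it would loop, so drop it
        return []
    seen_ids.add(packet_id)
    return [i for i in interfaces if i != in_interface]

print(flood("pkt-1", "m1", ["m1", "m2", "m3"]))   # ['m2', 'm3']
print(flood("pkt-1", "m2", ["m1", "m2", "m3"]))   # []  (already seen)
```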

Reverse Path Forwarding (RPF). RPF is a modified flooding strategy. To prevent loops, only one copy is forwarded; the other copies are dropped. In RPF,
a router forwards only the copy that has traveled the shortest path from the source
to the router. To find this copy, RPF uses the unicast routing table. The router
receives a packet and extracts the source address (a unicast address). It consults
its unicast routing table as though it wants to send a packet to the source address.
The routing table tells the router the next hop. If the multicast packet has just
come from the hop defined in the table, the packet has traveled the shortest path
from the source to the router because the shortest path is reciprocal in unicast
distance vector routing protocols. If the path from A to B is the shortest, then it is
also the shortest from B to A. The router forwards the packet if it has traveled
the shortest path; it discards it otherwise.

This strategy prevents loops because there is always one shortest path from the source
to the router. If a packet leaves the router and comes back again, it has not traveled the
shortest path.
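The RPF test itself boils down to one comparison against the unicast routing table, as in the sketch below; the table layout and interface names are assumptions.

```python
# Sketch of the RPF check: accept a multicast packet only if it arrived on the
# interface the unicast routing table uses to reach the packet's source.
# 'unicast_table' maps destination -> (next_hop, outgoing_interface).

def rpf_accept(unicast_table, source, in_interface):
    entry = unicast_table.get(source)
    if entry is None:
        return False                       # unknown source: discard
    _next_hop, toward_source = entry
    return toward_source == in_interface   # did it come along the shortest path?

# R1 reaches the source through interface m1 (as in Figure 14):
table_r1 = {"source": ("upstream router", "m1")}
print(rpf_accept(table_r1, "source", "m1"))   # True  -> forward copies
print(rpf_accept(table_r1, "source", "m2"))   # False -> discard
```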

Figure 14 shows part of a domain and a source. The shortest path tree as
calculated by routers R1, R2, and R3 is shown by a thick line. When R1 receives a
packet from the source through the interface m1, it consults its routing table and finds
that the shortest path from R1 to the source is through interface m1. The packet is
forwarded. However, if a copy of the packet has arrived through interface m2, it is
discarded because m2 does not define the shortest path from R1 to the source. The
story is the same with R2 and R3. You may wonder what happens if a copy of the packet
travels through R6, R5, and R2 and then enters R3 through interface m1. This interface
is the correct interface for R3. Is this copy of the
packet forwarded? The answer is that this scenario never happens because when the
packet goes from R5 to R2, it will be discarded by R2 and never reaches R3. The
upstream routers toward the source always discard a packet that has not gone through
the shortest path, thus preventing confusion for the downstream routers.

RPF eliminates the loop in the flooding process.

Figure 14. Reverse path forwarding (RPF)

Reverse Path Broadcasting (RPB). RPF guarantees that each network receives
a copy of the multicast packet without formation of loops. However, RPF does
not guarantee that each network receives only one copy; a network may receive
two or more copies. The reason is that RPF is not based on the destination
address (a group address); forwarding is based on the source address. To visualize
the problem, let us look at Figure 15.

Figure 15. Problem with RPF

Net3 in this figure receives two copies of the packet even though each router just
sends out one copy from each interface. There is duplication because a tree has not been
made; instead of a tree we have a graph. Net3 has two parents: routers R2 and R4.
To eliminate duplication, we must define only one parent router for each network.
We must have this restriction: A network can receive a multicast packet from a
particular source only through a designated parent router. Now the policy is clear. For
each source, the router sends the packet only out of those interfaces for which it is the
designated parent. This policy is called reverse path broadcasting (RPB). RPB
guarantees that the packet reaches every network and that every network receives only
one copy. Figure 16 shows the difference between RPF and RPB.

Figure 16. RPF Versus RPB

The reader may ask how the designated parent is determined. The designated parent
router can be the router with the shortest path to the source. Because routers
periodically send updating packets to each other (in RIP), they can easily determine
which router in the neighborhood has the shortest path to the source (when interpreting
the source as the destination). If more than one router qualifies, the router with the
smallest IP address is selected.

RPB creates a shortest path broadcast tree from the source to each destination. It
guarantees that each destination receives one and only one copy of the packet.
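The parent election described above can be sketched as a simple comparison: shortest distance to the source wins, with the smallest IP address as the tie-breaker. The candidate list format is an assumption for the illustration.

```python
# Sketch of designated-parent election for RPB. Each candidate router on the
# network reports its distance to the source (learned from periodic updates);
# the shortest distance wins, ties broken by the smallest IP address.

def ip_key(ip):
    return tuple(int(part) for part in ip.split("."))

def designated_parent(candidates):
    """candidates: list of (router_ip, distance_to_source) pairs."""
    return min(candidates, key=lambda c: (c[1], ip_key(c[0])))[0]

print(designated_parent([("10.0.0.4", 3), ("10.0.0.2", 3), ("10.0.0.7", 5)]))
# -> '10.0.0.2'  (distance tie between .2 and .4 is broken by the smaller address)
```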

Reverse Path Multicasting (RPM). As you may have noticed, RPB does not
multicast the packet; it broadcasts it. This is not efficient. To increase efficiency,
the multicast packet must reach only those networks that have active members for
that particular group. This is called reverse path multicasting (RPM). To
convert broadcasting to multicasting, the protocol uses two procedures, pruning
and grafting. Figure 17 shows the idea of pruning and grafting.

Figure 17. RPF, RPB, and RPM

The designated parent router of each network is responsible for holding the membership
information. This is done through the IGMP protocol. The process starts when a router
connected to a network finds that there is no interest in a multicast packet. The router
sends a prune message to the upstream router so that it can exclude the corresponding
interface. That is, the upstream router can stop sending multicast messages for this
group through that interface. Now if this router receives prune messages from all
downstream routers, it, in turn, sends a prune message to its upstream router. What if a
leaf router (a router at the bottom of the tree) has sent a prune message but suddenly
realizes, through IGMP, that one of its networks is again interested in receiving the
multicast packet? It can send a graft message. The graft message forces the upstream
router to resume sending the multicast messages.

RPM adds pruning and grafting to RPB to create a multicast shortest path tree
that supports dynamic membership changes.
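The prune/graft bookkeeping at one router can be sketched as follows, under the simplifying assumption that the router tracks, per group, which downstream interfaces are still interested; message formats and names are illustrative.

```python
# Sketch of prune/graft handling for RPM at one router. When every downstream
# interface of a group has pruned, a prune is propagated upstream; a later
# graft restores the interface and is propagated upstream if needed.

downstream = {"G1": {"m2", "m3"}}        # interfaces still interested in G1

def on_prune(group, interface):
    downstream.get(group, set()).discard(interface)
    if not downstream.get(group):
        print(f"all downstream interfaces pruned: send prune for {group} upstream")

def on_graft(group, interface):
    was_empty = not downstream.get(group)
    downstream.setdefault(group, set()).add(interface)
    if was_empty:
        print(f"membership restored: send graft for {group} upstream")

on_prune("G1", "m2")     # m3 is still interested: nothing is sent upstream
on_prune("G1", "m3")     # last interface pruned: prune propagated upstream
on_graft("G1", "m3")     # interest returns (learned via IGMP): graft sent upstream
```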
