
How to Do High-Speed Multicast Right!

Gundula Dörries, Lothar Zier
GMD German National Research Center for Information Technology, Institute for Media Communication, D-53754 Sankt Augustin
Email: {Gundula.Doerries, Lothar.Zier}@gmd.de

Abstract
Multicast is an important technique for the distribution of data to groups of receivers. While in the past multicast traffic was restricted to slower transmission rates, newer networking equipment supports multicast in hardware and allows multicast traffic up to line rates of several hundred megabits and even gigabits per second. This evolution creates new opportunities for applications, but also major risks for network operation. This paper presents several practical methods for high-speed multicast based on Gigabit Ethernet, IP routing, and ATM. Important factors are discussed that limit the theoretical transmission rates in practice. Methods are also presented to avoid network problems that may be caused by multicast rates of several hundred megabits per second.

Keywords
High-Speed Networking, Multicast, Asynchronous Transfer Mode, Gigabit Ethernet

Brave New World of high-speed multicast

Multicast is a widespread technology that enables a broad range of applications, such as the distribution of stock quotations and video conferencing, to send data efficiently to groups of receivers. In the past, multicast was accomplished mainly in software, by processor-based forwarding mechanisms and by distributing multicast packets as broadcast in layer 2 networks like Ethernet. Thus, the bandwidth available to multicast applications was restricted to several megabits per second. This has changed with modern network equipment: technologies like Gigabit Ethernet and ATM offer multicast support even at high data rates. As these high-speed technologies become available to end-users, high-speed multicast applications will evolve quickly in most network environments. Typical examples of high-speed multicast are business TV or cooperative workspaces.

The idea behind the 3V project ([1]) was to implement a distributed simulation and visualisation of road traffic. The simulation program runs on a fast central server, which distributes all data to the clients via multicast. On the clients, the data are visualised in real time. The expected amount of data (100-400 Mbit/s) made it obvious that high-speed multicast technologies were needed. The experiences we had during the project made it clear that handling high-speed multicast applications differs significantly from handling conventional network applications. In a Gigabit Ethernet or 622 Mbit/s ATM network with LAN-wide multicast support, people may end up with a multicast disaster: the server sends much less than expected, the production network is nearly down, and yet the client systems do not even seem to receive much of the data, while being heavily overloaded.

Therefore there are two main goals for a good implementation of high-speed multicast:
- enable end systems to send and receive multicast at high data rates,
- restrict the distribution of multicast traffic: at best, only end systems running the application should receive the multicast data, and only a minimal burden should be placed on the backbone.

In this paper, we first give an overview of several technologies for high-speed multicast (Section 2). In Section 3, we describe in detail some reasons for the multicast disaster sketched above and give hints on how to avoid it.

Practical high-speed multicast technologies

Three technologies currently dominate the high-speed networking market: Asynchronous Transfer Mode (ATM; used in LAN and WAN), Gigabit Ethernet (LAN), and Packet over SONET (POS; used in IP routers for WAN). High-speed interfaces for end systems are available for Gigabit Ethernet and ATM (622 Mbit/s). Multicast data distribution in these networks typically happens via IP multicast, which is supported by most Application Program Interfaces (APIs) for IP in end systems. It is also possible to use native ATM connections for multicast distribution.

2.1 IP multicast

IP multicast ([2], [3]) uses special IP addresses (multicast groups) for data distribution. Multicast receivers register with the IGMP protocol at their next IP multicast router. The router uses special IP multicast routing protocols (like DVMRP and PIM) to dynamically build up distribution trees for multicast groups. The distribution trees are mostly sender-based, but some protocols also use core-based distribution, where a sender first sends the packets to a core (which may be a single IP router), from which a distribution tree delivers the packets to all receivers. While this is a good solution for sparse groups with many senders, sender-based trees should be preferred for high-speed multicast. Modern IP routers support multicast in hardware and are able to forward multicast packets at or near line rate, so it is no problem to build a high-speed IP multicast network from scratch. But many existing networks contain older routers and slower links, and the central problem is to restrict the multicast distribution and prevent a flooding of the multicast packets into the whole multicast network. The distribution of the multicast packets may be limited via the Time To Live (TTL) field in the IP header, but a more secure method is to use special multicast groups and access lists, if the IP routers support this feature.

2.2 ATM

Multicast distribution in ATM networks is based on point-to-multipoint connections. The multicast distribution tree realises shortest paths between the sender and each receiver. Only the sender may add new receivers. While several extensions to this basic concept exist, like receiver-initiated point-to-multipoint (in UNI 4.0) and multipoint-to-point connections, they are not implemented or not available in most networks. ATM point-to-multipoint connections are supported in hardware in almost all modern ATM switches, and ATM point-to-multipoint traffic is limited only by available trunk bandwidth and end-system limitations. Multicast distribution in ATM networks may be used by applications via ATM-aware Application Program Interfaces (APIs) or via IP multicast over ATM. Two standardised APIs have extensions for ATM networks: XTI ([4]) for UNIX-based systems and WinSock2 ([5]) for Intel/Windows-based systems. The main advantages of using native ATM for multicast distribution are optimal distribution trees and large data frame sizes (up to 64 Kbytes). But in practice, only very few vendors offer 622 Mbit/s ATM adapters with native ATM support. Also, a pure ATM network is needed between all end systems, and the API programmer needs some ATM knowledge (especially about ATM signalling).

IP multicast may be used in ATM networks via LAN Emulation. In LAN Emulation Version 1.0 (LANEv1, [6]) multicast packets are sent via a single, central Broadcast and Unknown Server (BUS), which distributes the packets via a point-to-multipoint connection to all members of an emulated LAN. High-speed multicast in LANEv1 networks has some shortcomings: the central BUS is a potential bottleneck, the multicast distribution tree is not optimal, and all members of an emulated LAN, including the sender, receive all multicast packets. On the other hand, LANE networks are simple to configure and it is easy to establish an emulated LAN. Also, at least one vendor (Fore/Marconi) supports big frame sizes for LAN Emulation (4528 and 9218 bytes), which allows high data rates. Despite these problems, LANEv1 may be used for high-speed multicast if
- a special emulated LAN is configured for multicast distribution, and only end systems participating in the multicast distribution are members of this emulated LAN,
- a high-performance BUS (e.g. a CISCO C5k or C6k with 622 Mbit/s ATM) is located near the sender(s), serves only this ELAN, and no LANE client is running on this system,
- all clients of the emulated LAN use 622 Mbit/s ATM adapters.

Better methods for high-speed multicast distribution in ATM networks have been defined, but they exist only in a few (partial) implementations (for example, LANEv2 and MARS allow the use of separate point-to-multipoint connections for single IP/Ethernet multicast groups).

2.3 Ethernet

In Ethernet-based networks, IP multicast groups are mapped to special Ethernet (multicast) addresses. In older Ethernet switches, packets with multicast addresses are handled like broadcast and flooded to all ports. Several techniques are available to improve this situation ([2]). The high-speed multicast traffic may be distributed via a separate VLAN using IEEE 802.1Q. Most new Ethernet switches support VLANs, but some Gigabit Ethernet adapters for end systems may lack this feature. Modern Ethernet switches are also able to restrict multicast traffic to those ports where receivers are located. Two techniques exist: with IGMP snooping, the switch works on the IP level and filters the registrations of IP multicast receivers (IGMP messages); the GARP Multicast Registration Protocol (GMRP) allows receivers to register Ethernet multicast groups explicitly. IGMP snooping is transparent to IP multicast hosts and more often implemented. Traditionally, the payload of Ethernet packets is limited to 1500 bytes, but some vendors also support so-called Jumbo Frames (up to 9180 bytes) to achieve higher throughput.

2.4 What can we expect from the technologies?

The theoretical line rates of 622 Mbit/s ATM and Gigabit Ethernet adapters are 622 Mbit/s and 1 Gbit/s. Due to overhead in the underlying protocol layers, the bandwidth available to the application will always be smaller. For ATM, subtracting the overhead caused by SONET/SDH (23.04 Mbit/s) and the ATM cell headers leaves a maximum of 542.53 Mbit/s for AAL5. For packet sizes that are not multiples of 48 bytes, this value decreases further due to padding. Table 1 shows the resulting maximum throughput for some packet sizes over XTI/AAL5. For LANE, we furthermore have to add the LANE, IP, and UDP headers (16 bytes + 20 bytes + 8 bytes) to the overhead. Sending multicast over Gigabit Ethernet via UDP and IP results in an overhead of 8 bytes (UDP) + 20 bytes (IP) + 26 bytes (GE).

Protocol                     Packet size (bytes)    Throughput (Mbit/s)
XTI                          50                     282.6
XTI                          1472                   536.7
XTI                          8192                   541.5
XTI without padding          1432                   539.5
XTI without padding          8152                   542.0
UDP/LANE                     50                     188.4
UDP/LANE                     1472                   519.9
UDP/LANE                     9190                   538.2
UDP/LANE without padding     1436                   523.6
UDP/LANE without padding     9212                   539.5
UDP/GE                       50                     480.8
UDP/GE                       1472                   964.6

Table 1: Theoretical Throughput for various packet sizes
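The entries in Table 1 follow from simple overhead arithmetic and can be reproduced with a short script. The sketch below is our own illustration; it assumes 599.04 Mbit/s of usable cell bandwidth after SONET/SDH overhead, the standard 8-byte AAL5 trailer, padding of each AAL5 PDU to whole 53-byte cells, the header sizes given above, and a nominal 1 Gbit/s for Gigabit Ethernet. The constant and function names are ours.

```python
import math

ATM_CELL_RATE = 599.04       # Mbit/s of cell bandwidth left after SONET/SDH overhead
AAL5_TRAILER = 8             # bytes appended to every AAL5 PDU
LANE_IP_UDP = 16 + 20 + 8    # LANEv1 + IP + UDP headers in bytes
GE_RATE = 1000.0             # Mbit/s, nominal Gigabit Ethernet rate
GE_OVERHEAD = 8 + 20 + 26    # UDP + IP + Ethernet framing in bytes

def aal5_throughput(payload, extra_headers=0):
    """Application throughput in Mbit/s when each packet is one AAL5 PDU."""
    pdu = payload + extra_headers + AAL5_TRAILER
    cells = math.ceil(pdu / 48)            # AAL5 pads the PDU to a whole number of cells
    return ATM_CELL_RATE * payload / (cells * 53)

def ge_throughput(payload):
    """Application throughput in Mbit/s for UDP/IP over Gigabit Ethernet."""
    return GE_RATE * payload / (payload + GE_OVERHEAD)

if __name__ == "__main__":
    for size in (50, 1472, 8192):
        print(f"XTI      {size:5d} bytes: {aal5_throughput(size):6.1f} Mbit/s")
    for size in (50, 1472, 9190):
        print(f"UDP/LANE {size:5d} bytes: {aal5_throughput(size, LANE_IP_UDP):6.1f} Mbit/s")
    for size in (50, 1472):
        print(f"UDP/GE   {size:5d} bytes: {ge_throughput(size):6.1f} Mbit/s")
```

For 1472-byte packets the script yields 536.7 Mbit/s (XTI), 519.9 Mbit/s (UDP/LANE), and 964.6 Mbit/s (UDP/GE), matching Table 1.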

2.5 Our practical environment

However, as our measurements show, these theoretical values cannot always be realised. We used different Sun servers and workstations to set up high-speed multicast with all technologies discussed above. As an example, Figure 1 shows our LANE design set up for the 3V project.
Figure 1: High-Speed Multicast Network in the 3V project. [Diagram, status 10.02.99: ATM and Gigabit Ethernet topology connecting GMD, Uni Cologne (28 km), and DLR (22 km) via Cisco LS1010/8540MSR and Fore ASX-4000 ATM switches; Sun (E450, E250, E4500, Ultra 5/10/60) and SGI end systems attached via Fore HE 622 ATM adapters and Sun Gigabit Ethernet; link speeds of 622 Mbit/s ATM, 2.4 Gbit/s, and 1 Gbit/s GE; LES/BUS on a Cisco C5505, with sender-to-BUS point-to-multipoint distribution.]
To generate controlled IP multicast traffic, we used the MGEN toolkit developed at NRL [7]. MGEN consists of two binaries: mgen on the sender side and drec on the receiver side. mgen's input parameters are the UDP packet size, the packet transmit rate, and the destination IP address. As a result, mgen returns statistics on the average packet transmit rate, the sending duration, and possible errors. drec on the receiver side listens on the specified IP address and calculates statistics on the number of packets received. As MGEN's sources are available (see [7]), we used them as a basis to develop a similar tool that sends packets via the XTI interface provided with the Fore/Marconi ATM driver, thus enabling comparable performance measurements over native ATM. This modified tool first builds up a point-to-multipoint connection to the receiver systems and then sends AAL5 packets just the way mgen sends out UDP packets. The XTI receiver is quite simple: the program accepts the connection, counts the received packets, and discards them.
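For readers who want to reproduce such measurements without the original tools, the core of an mgen/drec-like pair can be written directly against the standard socket API. The following minimal sketch is our own illustration, not part of MGEN; the group address, port, packet size, and rate are arbitrary examples.

```python
import socket, struct, sys, time

GROUP, PORT = "239.1.2.3", 5000    # example administratively scoped group and port
PACKET_SIZE, RATE = 1472, 1000     # bytes per packet, packets per second (examples)

def sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep the TTL small so the traffic cannot leave the local routing domain.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    payload = b"x" * PACKET_SIZE
    interval = 1.0 / RATE
    while True:
        s.sendto(payload, (GROUP, PORT))
        time.sleep(interval)       # crude pacing; a real generator needs a precise timer

def receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group on the default interface (sends an IGMP membership report).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    count = 0
    while True:
        s.recv(65535)
        count += 1
        if count % 10000 == 0:
            print(count, "packets received")

if __name__ == "__main__":
    sender() if sys.argv[1:] == ["send"] else receiver()
```

Saved as, say, mcast.py, the sketch is started with "python mcast.py send" on the sender and without arguments on each receiver.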

3 Hot spots in practical implementations

3.1 Avoid small packets!

Having a look at Table 1, one can become quite optimistic about the bandwidth that will be available to a multicast application: something between 500 and 900 Mbit/s seems to be possible. So what limits the server mentioned in our multicast disaster to throughput rates of 100 Mbit/s or even less?

One possible reason can be found in Figure 2, Figure 3, and Figure 4, which show some of the measurements we did on a Sun E250. For XTI, LANE, and GE we chose a set of packet sizes and, for each of them, varied the sending rate given as input to the tool mgen. We then plotted the achieved sending rate (y-axis) against the specified one (x-axis); each curve corresponds to a fixed packet size. We find regions of linear growth, indicating that the system has no problem processing the packet rate specified in mgen. But for each packet size there is a maximum value for the throughput, indicated by a change into a horizontal line. This limit increases steadily with growing packet size. As the values in Table 1 show, this effect cannot be explained by increased overhead or padding alone. We rather suspect that small packets generate more interrupts on the network adapter, so that per-packet processing limits of the hardware are reached long before the line rate.

Our measurements show a strong dependence between packet size and effective throughput. With a packet size below 100 bytes, none of the technologies generates more than 10-30 Mbit/s. Values of 100-300 Mbit/s can be achieved with packets of about 1500 bytes. Both ATM technologies come very close to the theoretical values in Table 1 if packets of 5000 bytes or more are sent. With Gigabit Ethernet, the maximum UDP packet size is 1472 bytes; for this value we get a throughput of 340 Mbit/s, which is far below the theoretical value of 964.6 Mbit/s, but still higher than the corresponding values for XTI and LANE, as shown in Figure 5. As a result, application designers should keep this limiting effect on the achievable bandwidth in mind. For an application with a bandwidth demand of 300 Mbit/s and more, it may be necessary to rethink the data model to make sure that the data can be sent out effectively in chunks of at least 3000 bytes, for example by aggregating small records as sketched below.
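On the application side, the simplest countermeasure is to aggregate small records into larger datagrams before they are handed to the socket. The following sketch is a hypothetical illustration of such a data-model change (class and parameter names are ours); the threshold must of course stay below the maximum payload of the chosen technology (1472 bytes for plain Gigabit Ethernet, several kilobytes for ATM with large frames).

```python
BATCH_THRESHOLD = 3000   # bytes; below this, per-packet costs dominate (see Figures 2-4)

class BatchingSender:
    """Collect small records and send them as few large UDP datagrams."""

    def __init__(self, sock, dest, threshold=BATCH_THRESHOLD):
        self.sock, self.dest, self.threshold = sock, dest, threshold
        self.buffer = bytearray()

    def send_record(self, record: bytes):
        self.buffer += record
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.sock.sendto(bytes(self.buffer), self.dest)  # one large datagram
            self.buffer.clear()
```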
Figure 2: Achieved Throughput over native ATM (XTI) on a Sun E250 Server for various packet sizes (50-8192 bytes). [Chart: achieved bandwidth (Mbit/s) plotted against the specified send rate (1000 packets/s), one curve per packet size.]
Figure 3: Achieved Throughput over LAN Emulation (LANE) on a Sun E250 Server for various packet sizes (50-9000 bytes). [Chart: achieved bandwidth (Mbit/s) plotted against the specified send rate (1000 packets/s), one curve per packet size.]

Figure 4: Achieved Throughput over Gigabit Ethernet (GE) on a Sun E250 Server for various packet sizes (50-1472 bytes). [Chart: achieved bandwidth (Mbit/s) plotted against the specified send rate (1000 packets/s), one curve per packet size.]
Figure 5: Comparing the Throughput of XTI, LANE, and GE for a packet size of 1472 bytes. [Chart: achieved bandwidth (Mbit/s) plotted against the specified send rate (1000 packets/s), one curve per technology.]

3.2 LANE servers see all packets twice

The multicast data distribution with LANE is not at all optimal. In particular, the server system will see the packets both as outgoing and as incoming. Referring e.g. to Figure 1, if the SUN Ultra 60 (baloo) is the sending system, it first sends the multicast traffic to the BUS (Cisco C5505). The BUS then distributes this traffic to all systems belonging to this ELAN, including baloo. On baloo, the ATM driver has to discard the incoming packets, thereby burdening the CPU. To quantify the additional load on the server, we compared the sender throughput of multicast and unicast over LANE (Figure 6). The effect is obvious: the maximum multicast rate is about 40 Mbit/s below the corresponding unicast value. The difference may be partially caused by IP protocol differences, but additional measurements with Gigabit Ethernet showed that this second effect is negligible. Nevertheless, we should keep in mind that high-speed multicast over LANE is possible, but that the server is under heavier load compared to XTI or Gigabit Ethernet.

Figure 6: Throughput over LAN Emulation: Multicast versus Unicast with a packet size of 1472 bytes. [Chart: achieved bandwidth (Mbit/s) plotted against the specified sending rate (1000 packets/s), one curve for unicast and one for multicast.]

3.3 How much of the data will the client receive?

In the transmission process, losses may occur inside the network, in the receiving system's adapter hardware, and during protocol processing. Where do the data get lost in the multicast disaster described in Section 1? In our high-speed multicast environment, the main effect was losses in the receiving application caused by CPU overload. In our measurements, we could verify that all multicast data reached the clients, but some part of it was lost between the adapter hardware and the application. In fact, it is quite easy to overload client systems with high-speed multicast, ending up with high data losses. As an overall tendency, losses get worse for smaller packets on one hand and higher sending rates on the other (Figure 7, Figure 9). In our test scenario, a Sun ULTRA-60 (the medium system with regard to CPU speed) was set up as a multicast server, and we had one slower client (an ULTRA-5) and a faster one (an E250). The ULTRA-5 lost up to 60% of the data with XTI and up to 90% with UDP/LANE. Even the E250 experienced loss rates of 4% over XTI and 50% with UDP/LANE.

On many end systems, two parameters offer a way to reduce these losses. First, interrupt coalescing can be enabled on the network adapter. Normally, every single packet arriving at the receiver's adapter generates an interrupt to start the normal protocol processing, which minimises packet delay. With interrupt coalescing enabled, this interrupt is suppressed for some time to gather several packets on the adapter, which can then be processed together. This reduces the CPU load and thereby the number of lost packets, as can be seen by comparing Figure 7 (interrupt coalescing disabled) with Figure 8 (interrupt coalescing enabled). On the UDP level, a second tuning method is to increase the size of the UDP receive buffer (e.g. on Suns from the default value of 8192 bytes to 65535 bytes). As our measurements show (Figure 9 and Figure 10), this may help to reduce the losses on the E250, but the ULTRA-5 behaves more or less indifferently. The reason is that an increased buffer size can only smooth out bursts from the network; it does not reduce the overall CPU load on the receiver, as interrupt coalescing does.

In sum, end-system parameter tuning may help to increase client performance, but to avoid high loss rates on the receiver side, the application should additionally offer a control mechanism informing the server in case of client overload. The server can then adapt the throughput. This makes sense especially for scenarios with a fast server and relatively slow receivers, which is not unlikely to be found in multicast applications.
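The receive-buffer tuning mentioned above can also be done per socket instead of system-wide. A minimal sketch, assuming a socket-API receiver and the 65535-byte value used in our measurements (interrupt coalescing, in contrast, is an adapter/driver setting and cannot be changed through the socket API):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Enlarge the UDP receive buffer so short bursts are queued in the kernel
# instead of being dropped before the application can read them.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65535)
# The operating system may cap or adjust the requested value, so read it back.
print("receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```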

Figure 7: Losses with XTI for various packet sizes and sending rates, interrupt coalescing disabled in the receiver. [Chart: loss rate in % on the ULTRA-5 and E250 clients for packet sizes of 1024 and 8192 bytes at ULTRA-60 sending rates of 98-539 Mbit/s.]

Figure 8: Losses with XTI for various packet sizes and sending rates, interrupt coalescing enabled on the receiver. [Chart: loss rate in % on the ULTRA-5 and E250 clients for packet sizes of 1024 and 8192 bytes at ULTRA-60 sending rates of 98-539 Mbit/s.]

Figure 9: Losses with UDP for various packet sizes and sending rates, UDP receive buffer = 8192 bytes. [Chart: loss rate in % on the ULTRA-5 and E250 clients for packet sizes of 1024 and 9190 bytes at ULTRA-60 sending rates of 98-498 Mbit/s.]

Figure 10: Losses with UDP for various packet sizes and sending rates, UDP receive buffer = 65535 bytes. [Chart: loss rate in % on the ULTRA-5 and E250 clients for packet sizes of 1024 and 9190 bytes at ULTRA-60 sending rates of 98-515 Mbit/s.]

3.4 Pitfalls of IP multicast routing

IP multicast distribution trees are built up dynamically by routing protocols. Especially for high-speed multicast, these protocols should construct distribution trees that minimise network load and do not send any multicast packets into networks without receivers. Two widely used multicast routing protocols (DVMRP and PIM dense-mode) use the flood and prune principle to construct their distribution trees ([3]): at regular intervals the multicast packets are flooded into the network, and afterwards the distribution is pruned back to a tree with the multicast receivers at the leaves. While this approach is easy to implement and widely used, it is not suitable for high-speed multicast, because the flooding phase generates high traffic bursts in the whole multicast network. These bursts may cause a network disaster disturbing the other production traffic.
Figure 11: Traffic burst over an ATM connection caused by a flood and prune IP multicast routing protocol. [Chart: burst in cells/s over time (09:22:50 to 09:30:46), showing periodic bursts of up to about 800,000 cells/s.]

We studied the flood and prune behaviour in a PIM-dm environment with a multicast sender generating IP traffic of about 800,000 ATM cells per second. Even though there is no receiver in the connected networks, every 3 minutes a burst of 800,000 cells per second is generated for several seconds before the line is pruned (Figure 11).

IP multicast distribution trees are mostly sender-based, using the shortest path from the sender to each receiver. While this does not necessarily minimise the total network load, it is a good compromise between network load and computational expense. Some protocols use core-based distribution, where a sender first sends the packets to a core (which may be a single IP router), from which a distribution tree delivers the packets to all receivers. While this is a good solution for sparse groups with many senders and small traffic loads, sender-based trees should be preferred for high-speed multicast. Of the available routing protocols, Protocol Independent Multicast sparse mode (PIM-sm) is a good candidate for high-speed multicast. It uses an explicit registration of receivers at a Rendezvous Point (RP) and also switches from core-based distribution to a sender-based tree if the multicast traffic is high.

The range of the multicast distribution is restricted by the Time-To-Live (TTL) value of the multicast packets and by the use of special multicast groups. The application may use a small TTL value, so that multicast packets are only delivered to routers at most TTL hops from the source. But this method has major drawbacks, because the end users must have knowledge about the network topology and there is a high danger of misconfiguration. The use of special multicast groups (local multicast groups) with restricted scope and the use of access lists are more appropriate for high-speed multicast.

3.5 Intelligent queuing mechanisms reduce congestion problems

A central problem with high-speed multicast is network congestion in bottleneck situations. Unlike TCP traffic, there is no feedback mechanism, so the sender does not reduce its traffic. As a consequence, TCP flows running in parallel to the multicast flow only get the bandwidth not occupied by the multicast traffic and may be reduced to zero throughput if the multicast traffic is high. Intelligent queuing techniques like Weighted Fair Queuing (WFQ) may improve the situation and allow a fair sharing of the available resources between all traffic flows ([8]). WFQ is implemented in many modern ATM switches like Marconi's ASX-1000/ASX-4000 and CISCO's LS1010/C8540MSR, and also in IP routers, at least on low- and medium-speed WAN interfaces. We studied the behaviour of a CISCO C8540MSR sending multicast data and a TCP flow over different ATM connections on a 622-Mbit/s ATM link (Figure 12). The capacity of the link is fairly divided between the two traffic flows, and the TCP flow gets the same performance as the multicast flow.
Figure 12: Parallel TCP and Multicast traffic over an ATM connection with WFQ. [Chart: number of received cells over time (0-35 s), one curve for the multicast flow and one for the TCP flow.]

So these queuing techniques are very useful in congestion situations. The practical problem is that WFQ or a similar mechanism has to be supported in every network device where congestion might occur, and that older hardware and many high-speed interfaces of IP routers do not support it.
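WFQ itself is a router and switch feature that schedules packets by virtual finishing times. Its effect on an unresponsive multicast flow and a parallel TCP flow can, however, be illustrated with the simpler deficit round robin algorithm, a common approximation of WFQ. The sketch below is purely conceptual (class names, weights, and quanta are ours) and is not how the switches mentioned above implement it.

```python
from collections import deque

class DrrScheduler:
    """Deficit round robin: per-flow queues served in proportion to their quanta,
    so an unresponsive multicast flow cannot starve a parallel TCP flow."""

    def __init__(self, quanta):
        self.queues = {flow: deque() for flow in quanta}
        self.quantum = dict(quanta)            # bytes of credit added per round
        self.deficit = {flow: 0 for flow in quanta}

    def enqueue(self, flow, packet_size):
        self.queues[flow].append(packet_size)

    def next_round(self):
        """Return the list of (flow, packet_size) served in one scheduling round."""
        sent = []
        for flow, queue in self.queues.items():
            if not queue:
                self.deficit[flow] = 0         # empty flows do not accumulate credit
                continue
            self.deficit[flow] += self.quantum[flow]
            while queue and queue[0] <= self.deficit[flow]:
                size = queue.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
        return sent

# Example: equal quanta give the multicast and the TCP flow equal shares of the link.
sched = DrrScheduler({"multicast": 1500, "tcp": 1500})
for _ in range(100):
    sched.enqueue("multicast", 1472)
    sched.enqueue("tcp", 1460)
print(sched.next_round())
```

With equal quanta, each flow gets roughly the same number of bytes per round, which corresponds to the fair sharing visible in Figure 12.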

Summary

Today, modern hardware allows high-speed multicast with hundreds of megabits per second. In practice, the following methods may be used for multicast distribution:
- APIs for native layer 2 usage (like the ATM extension for the XTI API),
- IP multicast over single layer 2 networks (based on Gigabit Ethernet or ATM/LAN Emulation),
- router-based IP multicast networks.
There is no clear winner; every technology has its advantages and drawbacks. You have to carefully separate your multicast traffic from the other production traffic using techniques like VLANs and/or IP access lists. In congestion situations, techniques like Weighted Fair Queuing (WFQ) may allow other traffic, especially TCP flows, to get a fair share of the bandwidth. With current end systems you only achieve high data rates if you use big packets: do not expect hundreds of megabits with 50-byte data chunks. Avoid flood and prune IP multicast routing protocols like DVMRP or PIM dense-mode and use protocols like PIM sparse-mode in your IP multicast routing cloud. From an application's point of view, it may often be necessary to construct some kind of feedback mechanism between the receivers and the sender; otherwise there may be a perfect multicast distribution in the network and disastrous data loss in the receiving end systems.

The main lessons we learned in our environment are:

While in theory multicast data rates of up to 540 Mbit/s (622 Mbit/s ATM) and 965 Mbit/s (Gigabit Ethernet) are possible, and are realisable at least for ATM with very big packets, in practice data rates of about 200-500 Mbit/s are more realistic for current end systems. Your application may restrict this even further, to 100 Mbit/s or below.

Acknowledgement We would like to thank our colleagues of the High Speed Networking Group at GMD. The German Research Network (DFN) supported this research as part of the Gigabit Testbed West.

References
[1] 3V: Verteilte Verkehrssimulation und Visualisierung, Abschlussbericht, July 2000 (in German). http://www.webdoc.sub.gdwg.de/ebook/ah/1999/dfn/3v.pdf
[2] Dave Kosiur: IP Multicasting, John Wiley & Sons, 1998.
[3] Beau Williamson: Developing IP Multicast Networks, Volume I, CISCO Press, 2000.
[4] The Open Group: Networking Services (XNS), Issue 5.2, X/Open Transport Interface (XTI), January 2000. http://www.opengroup.org/publications/catalog/c808.htm
[5] Windows Sockets 2 Protocol Specific Annex, Revision 2.0.3, May 10, 1996. http://www.sockets.com/winsock2.htm
[6] ATM Forum: LAN Emulation over ATM Version 1.0, AF-LANE-0021.000, January 1995. ftp://ftp.atmforum.com/pub/approved-specs/af-lane-0021.000.pdf
[7] Naval Research Laboratory (NRL): Multi-Generator (MGEN) Toolset, Version 3.1. ftp://manimac.itd.nrl.navy.mil/Pub/MGEN/
[8] S. Keshav: An Engineering Approach to Computer Networking, Addison Wesley, 1997.

Vitae
Gundula Dörries received a diploma in physics from the University of Düsseldorf in 1994. After working for Hewlett-Packard as a systems engineer, she joined the High Speed Networking group (HSN; now NetMedia) in the Institute for Media Communication (IMK) of the German National Research Center for Information Technology (GMD) in 1999. Her main interests are multimedia networking and QoS issues.

Lothar Zier received his diploma in computer science from the University of Bonn in 1991. He has been working as a scientist in the networking department at GMD since then. His research topics are IP- and ATM-based networking. At present he is working in the NetMedia group at IMK.

