
An Implementation of Single System Image Server Cluster

Deepak Agarwal, Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, Allahabad, UP, India. E-mail: it01826@mnnit.ac.in

Abstract

This paper reports an implementation of a Single System Image Server Cluster (SSISC). The servers are configured with the same MAC and IP addresses and are connected through a layer-two switch. The implementation was done by modifying iptables and recompiling the Linux kernel, and was tested for performance using WebStone, a highly configurable client-server benchmark for HTTP servers. The implementation removes the possibility of a single point of failure in the web cluster, and it automatically detects newly added and failed servers, which improves reliability. The tests indicate that requests are distributed to the servers in the cluster according to the chosen distribution policy.

1. Introduction

A common technique for building a high-capacity web server is to use a cluster of server machines with a centralized dispatcher.

The dispatcher receives all incoming requests and distributes them to one of the servers in the cluster according to some dispatch policy. The dispatcher serves as the front-end device of the cluster and presents a single host name and IP address to the clients. A major limitation of such an implementation is that the dispatcher is a single point of failure and can become overloaded. I have implemented a single system image server cluster and explore a new method of building a server cluster using duplicated MAC and IP addresses, without requiring a centralized dispatcher or any configuration changes to routers. All the machines in the server cluster are configured with identical MAC and IP addresses and are connected through a simple layer-two switch. The Linux kernel of each server is modified and recompiled so that the servers can operate behind the layer-two switch, and the servers' iptables [15] are modified to filter outgoing reset (RST) packets [6]. A policy is required to balance the load among the servers; however, developing a good decision-making function is difficult.

The mod_backhand module for the Apache web server has been installed on the servers to distribute requests according to the selected policy.

The cluster was tested with two dispatch policies, random and round-robin distribution of requests, using the mod_backhand [13] load-balancing module together with the Apache web server. It may seem that an intelligent decision could be made given the load of each machine, its power relative to the rest of the cluster, and its available memory. However, this information is updated once per second, while resources are allocated hundreds of times per second. WebStone [12] is a benchmark that tests the performance of HTTP across server platforms and different HTTP implementations; it is used here to measure the maximum and average response times for connecting to the server. The implementation is simple, removes any single point of failure, and reduces cost. Since the possibility of a single point of failure bringing down the whole system is removed, reliability is improved. This paper is organized in four sections. Section two briefly reviews related work, section three describes the implementation, and section four reports the results of the performance evaluation for random and load-based distribution of client requests.

2. Related Work

Load balancing of the server machines in a cluster is necessary to achieve good response time and better utilization of the cluster's resources. There are two ways in which content can be distributed. Individual cluster machines can contain different content, in which case request distribution is largely by URL. Alternatively, cluster machines can contain identical (mirrored) content, and distribution is by round-robin or by a server metric such as least loaded. Distributing content by URL is difficult to manage and requires a URL-aware (layer 7) dispatcher.

Distributing content by mirroring is simpler to manage, more scalable, and does not require a URL-aware dispatcher. Round-robin load balancing attempts to distribute requests evenly across the machines. However, for requests of varying workload it is likely that machine utilization will be uneven; that is, requests will be sent to machines that are not the best possible target. Load level can be measured by the current number of active TCP connections. A Single System Image server cluster based on a single host name can be implemented with a round-robin DNS [1] server. Figure 1 shows the configuration for this method of clustering. In the figure, C is the machine that will handle the current request.

Figure 1: Configuration for DNS dispatching

In this case, the dispatcher is an authoritative DNS server that responds with the IP address of the next (round-robin) server machine in the cluster. The DNS method is simple to implement, but has two shortcomings. First, caching of IP addresses in the clients and in secondary DNS servers results in unbalanced loading of the server machines. Second, if a server machine goes down, there is no automatic way for the DNS server to recognize this and stop serving the IP address of the down machine. Extensions to standard round-robin DNS can address the problem of detecting down machines by using periodic probe messages.

NAT-based dispatching uses a configuration similar to that of round-robin DNS. A NAT [2] dispatcher has a single host name and IP address and distributes requests to the server machines. NAT changes the destination IP address of incoming request packets (to map them to a server machine) and changes the source IP address of outgoing response packets (to map them back to the IP address of the NAT device). NAT can be implemented directly in the router and thus does not require a separate dispatcher; however, this increases the processing load on the router and may introduce a performance bottleneck in the router. There are two existing methods that do not require a dispatcher device: Distributed Packet Rewriting (DPR) and ONE-IP. In DPR [3], each server machine in a cluster has knowledge of the load of all other server machines. The IP addresses of all server machines are published, allowing any of the machines to receive requests. An incoming request to any one server machine can be handed off (using TCP connection splicing) to a less loaded server machine. The hand-off of TCP connections is a complex problem to solve and requires the original machine (the one that handed off the connection) to remain in the connection until it terminates. In ONE-IP [4], a broadcast mechanism is used to send an incoming request to all machines in the cluster. Figure 2 shows the configuration for ONE-IP. All machines have different Ethernet MAC addresses and different primary IP addresses, but the same secondary IP address. A filter is used in each machine so that only one machine forwards the IP packet to the upper layer (e.g., to TCP).

For this mechanism to work, the router must change the MAC destination address of incoming request packets to the broadcast (all 1s) address. This scheme requires a permanent ARP entry at the router, which associates the virtual IP address of the cluster with the Ethernet broadcast address.

Figure 2: Configuration for ONE-IP

2.1 The Clone Cluster Method

To overcome the need for router configuration in ONE-IP, all machines in the cluster are configured with the same IP and MAC address. This design is called the Clone Cluster [5] method, since all machines in the cluster are perfect clones. All machines are attached to a shared-medium Ethernet LAN using a simple layer-two Enterasys VH-2402S2 switch [8], so that all packets are seen by all machines. The Linux kernel of each machine is modified and recompiled; this modification is required so that machines with the same MAC address can be used behind the switch. The Clone Cluster configuration (Figure 3) is the same as ONE-IP, but no changes to the router are required. The method automatically detects newly added servers, and the failure of any server does not affect the rest of the system, so reliability is improved. A connection is established with a random machine in the Clone Cluster.

The architecture of a Clone Cluster is based on a fully distributed design that does not use a front-end web switch. The Clone Cluster presents a single system image to the outside world through the use of a single IP address. The main difference from dispatcher-based approaches is that this address is not assigned to a centralized front-end node that receives all incoming client requests and distributes them to a target server. In a Clone Cluster, the IP address is shared by all the server nodes, so that each node receives all inbound packets. The name of the web site is mapped by the DNS into this common IP address, so the decision on client request routing is not delegated to one entity but is fully distributed among the nodes of the cluster. All inbound packets reach each server; only one server must accept the packets for a given client request, and the others must refuse them. The primary IP address of each machine in the cluster is unique and is used for load balancing. The secondary IP address of each machine is the same, and the web site name is mapped by the DNS onto this secondary address. All cluster hosts are assigned the same MAC address. To ensure uniqueness, this MAC address is derived from the cluster's secondary IP address: for a secondary IP address of 1.2.3.4, the unicast MAC address is set to 02-BF-01-02-03-04. The cluster's MAC address is automatically modified by setting a registry entry and then reloading the adapter's driver; the operating system does not have to be restarted.
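To make the derivation concrete, the mapping from the secondary IP address to the cluster MAC address can be sketched as the following C fragment (illustrative only; the function name is not from the implementation):

    #include <stdio.h>
    #include <stdint.h>

    /* Derive the cluster's unicast MAC address from its secondary IP address,
     * following the 02-BF-w-x-y-z convention described above. */
    static void cluster_mac_from_ip(const uint8_t ip[4], uint8_t mac[6])
    {
        mac[0] = 0x02;      /* locally administered, unicast */
        mac[1] = 0xBF;
        mac[2] = ip[0];     /* w.x.y.z becomes the last four MAC octets */
        mac[3] = ip[1];
        mac[4] = ip[2];
        mac[5] = ip[3];
    }

    int main(void)
    {
        const uint8_t ip[4] = {1, 2, 3, 4};   /* secondary IP 1.2.3.4 */
        uint8_t mac[6];
        cluster_mac_from_ip(ip, mac);
        printf("%02X-%02X-%02X-%02X-%02X-%02X\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);   /* 02-BF-01-02-03-04 */
        return 0;
    }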
Figure 3: Configuration for Clone Cluster

3. Implementation

3.1 Server Configuration with a Layer-Two Switch

The cluster machines are attached to a layer-two switch [8]. The use of a common MAC address would normally create a conflict, since layer-two switches expect to see unique source MAC addresses on their ports. To avoid this problem, the source MAC address in the Ethernet header of every outgoing packet is rewritten to the machine's original unique MAC address, referred to here as the virtual MAC address of that machine; the kernel of each machine is recompiled and installed with this change. This technique prevents the switch from learning the cluster's actual (duplicated) MAC address, and as a result, incoming packets for the cluster are delivered to all switch ports. An incoming packet to a server carries the duplicated MAC address as the destination address of its Ethernet header, while an outgoing packet carries the machine's virtual MAC address as the source address. The switch updates its MAC-to-port table from this source address, so it never learns the cluster's duplicated MAC address. The client needs the cluster's duplicated MAC address for communication and generates an ARP [6][9] request to learn it. The structure of the ARP packet is shown in Figure 4.


The ARP request packet is simple: its Ethernet header contains the client's MAC address as the source address and the broadcast MAC address as the destination address, and its ARP header contains the client's MAC and IP addresses as the sender addresses, the cluster's IP address as the target address, and an empty field for the cluster's MAC address. The ARP reply packet from the cluster requires modification to prevent the switch from learning the cluster's actual MAC address. Its ARP header contains the cluster's IP address and duplicated MAC address as the sender addresses and the client's MAC and IP addresses as the target addresses, while its Ethernet header contains the client's MAC address as the destination address and the cluster machine's virtual MAC address as the source address. The difference between the source MAC address in the Ethernet header and in the ARP header thus prevents the switch from learning the cluster's duplicated MAC address, which is carried only in the ARP header, while at the same time delivering the duplicated MAC address to the client. The switch learns only the virtual MAC address of each machine.

Figure 4: Format of ARP request or reply packet
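For reference, the layout of Figure 4 can be written as the following C-style view of an Ethernet-encapsulated ARP packet (a sketch following RFC 826; the field names are illustrative and not taken from the implementation):

    #include <stdint.h>

    /* Ethernet frame carrying an ARP request or reply (RFC 826). */
    struct eth_arp_packet {
        /* Ethernet header */
        uint8_t  eth_dst[6];     /* broadcast in requests; client MAC in replies */
        uint8_t  eth_src[6];     /* cluster reply: per-machine virtual MAC       */
        uint16_t eth_type;       /* 0x0806 = ARP                                 */
        /* ARP header */
        uint16_t hw_type;        /* 1 = Ethernet                                 */
        uint16_t proto_type;     /* 0x0800 = IPv4                                */
        uint8_t  hw_len;         /* 6                                            */
        uint8_t  proto_len;      /* 4                                            */
        uint16_t opcode;         /* 1 = request, 2 = reply                       */
        uint8_t  sender_mac[6];  /* cluster reply: duplicated cluster MAC        */
        uint8_t  sender_ip[4];
        uint8_t  target_mac[6];  /* empty (zero) in a request                    */
        uint8_t  target_ip[4];
    } __attribute__((packed));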

3.2 Compiling the Linux Kernel

The Linux kernel is the heart of the operating system. It is the software that manages the programs currently running on the system and handles the interactions of those processes with the hardware. The kernel decides how much CPU time and which resources are allotted to the various programs.

The Linux source code is downloaded from www.kernel.org and uncompressed in the /usr/src directory; the uncompressed directory linux-2.6.10 is renamed to linux. The kernel source is then modified to change the source MAC address in the Ethernet header of outgoing packets: the file eth.c in /usr/src/linux/net/ethernet is modified so that the Ethernet header of any outgoing packet is built with the virtual source MAC address. This prevents the switch from learning the cluster's actual MAC address. The configuration [10] for the kernel is created from /usr/src/linux. Inside each category there are options with a box next to them; the possible choices are Y (yes), N (no), and M (module). Yes means that support for that option is compiled directly into the kernel, and No excludes the option. Choosing Module means that support for the option is built as a separate file that can be loaded into memory when required. The kernel keeps growing as support for more hardware and features is added, so using modules helps reduce its size. Once the kernel is configured, the configuration settings are saved and the kernel is compiled and installed.
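The intent of the change can be pictured with the following simplified sketch of the 2.6-era eth_header() routine in net/ethernet/eth.c; the virtual_mac value and the way it is configured are assumptions, and the actual patch applied in this work is not reproduced here:

    /* Simplified sketch of the modified Ethernet header construction in
     * net/ethernet/eth.c.  virtual_mac holds this machine's unique address;
     * how it is set (boot parameter, /proc entry, hard-coded) is an assumption. */
    static unsigned char virtual_mac[ETH_ALEN] = { 0x00, 0x02, 0x55, 0x4a, 0x45, 0x62 };

    int eth_header(struct sk_buff *skb, struct net_device *dev, unsigned short type,
                   void *daddr, void *saddr, unsigned len)
    {
        struct ethhdr *eth = (struct ethhdr *)skb_push(skb, ETH_HLEN);

        eth->h_proto = htons(type);

        /* The unmodified kernel copies dev->dev_addr here, which is the duplicated
         * cluster MAC.  Using the per-machine virtual MAC instead keeps the switch
         * from ever learning the duplicated address from outgoing frames. */
        memcpy(eth->h_source, virtual_mac, ETH_ALEN);

        if (daddr) {
            memcpy(eth->h_dest, daddr, ETH_ALEN);
            return ETH_HLEN;
        }
        return -ETH_HLEN;
    }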

3.3 Implementing Duplicate Addresses

All standard Ethernet adapters contain a unique burned-in MAC address, which can be overridden by a locally administered address. Linux shell commands [14] can be used to assign a locally administered address. The Ethernet cards on the different machines are configured with the same MAC address, and the same secondary IP address is attached to these interfaces. Before a host attaches a new IP address to its Ethernet card, it checks that no other host on the same LAN is using that IP address; if a duplicate IP address is found, both machines are informed and warnings are issued. When two machines have the same primary IP address, ARP detects this inconsistency and issues a warning. A gratuitous ARP packet is sent during initialization: a gratuitous ARP is an ARP packet whose destination is the broadcast address and whose source is the IP address of the machine itself, and its basic purpose is to announce the IP address of the new machine on the LAN. If another machine with the same IP address already exists on the LAN, it learns of the new machine using the same IP address and issues a warning. To avoid detection of the duplicate IP address in the Clone Cluster, the ifup file in the /sbin directory is modified; this modification prevents any action on detection of a duplicate IP address.
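The same assignment can also be performed programmatically. The following minimal userspace sketch uses the standard SIOCSIFHWADDR ioctl; the interface name and MAC value are placeholders, and this is not the script used in the implementation:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_arp.h>

    int main(void)
    {
        /* Placeholder locally administered cluster MAC (02-BF-01-02-03-04). */
        const unsigned char mac[6] = { 0x02, 0xBF, 0x01, 0x02, 0x03, 0x04 };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface name */
        ifr.ifr_hwaddr.sa_family = ARPHRD_ETHER;
        memcpy(ifr.ifr_hwaddr.sa_data, mac, 6);

        /* The interface normally has to be brought down before the hardware
         * address can be changed. */
        if (ioctl(fd, SIOCSIFHWADDR, &ifr) < 0) { perror("SIOCSIFHWADDR"); close(fd); return 1; }

        close(fd);
        return 0;
    }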

3.4 TCP Connection

The transport layer provides a flow of data between two hosts for the application layer. The TCP/IP protocol suite contains two very different transport protocols: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). TCP [11] is a connection-oriented protocol, and the three-way TCP handshake (Figure 5) completes connection establishment. The requesting end (normally called the client) sends a SYN segment specifying the port number of the server that the client wants to connect to, together with the client's initial sequence number (ISN, 1415531521 in this example); this is segment 1. The server responds with its own SYN segment containing the server's initial sequence number and acknowledges the client's SYN by acknowledging the client's ISN plus one (a SYN consumes one sequence number); this is segment 2. The client must then acknowledge this SYN from the server by acknowledging the server's ISN plus one (segment 3).

Figure 5: TCP three-way handshake

A reset (RST) [6][7][11] is sent by TCP whenever a segment arrives that does not appear correct for the referenced connection. In the Clone Cluster method, the client sends a SYN packet carrying its initial sequence number (ISN). All the server machines in the cluster receive this SYN packet, and each responds with its own SYN segment containing its own initial sequence number, acknowledging the client's ISN plus one. The client acknowledges the first SYN it receives from the cluster and ignores the rest; that is, the client establishes the connection with the server from which it receives the first response. From then on, all the servers receive the packets issued by the client for this connection. The server involved in the connection receives each packet and processes it.

A server that is not involved in the connection, however, sees each such packet as unknown and issues a RST packet. The client receives this RST packet from the non-communicating server and, since the servers in the cluster are exact clones, the client assumes the packet was issued by the communicating server; the connection is therefore reset. To avoid this scenario, a rule is inserted in /etc/sysconfig/iptables [15] to drop any outgoing RST packet from the serving port.
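For illustration, a rule of roughly the following form achieves this (the exact command is an assumption and not quoted from the implementation's configuration): iptables -A OUTPUT -p tcp --sport 80 --tcp-flags RST RST -j DROP. It drops any outgoing TCP segment leaving the HTTP port with the RST flag set; storing it in /etc/sysconfig/iptables makes it persistent across reboots.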

4. Load Balancing Method

Load balancing of the server machines in a cluster is necessary to achieve small response times and to make the best use of the cluster's resources. As noted in section 2, content can either be distributed by URL, with individual machines holding different content, or mirrored, with distribution by round-robin or by a server metric such as least loaded; distributing content by mirroring is simpler to manage, more scalable, and does not require a URL-aware dispatcher. mod_backhand [13] is a module for the Apache web server that provides several facilities for load balancing HTTP requests over a cluster of machines: allocation of inbound requests to peer machines within the cluster, either via an HTTP redirect or by internally proxying the request; collection and distribution of resource statistics for the machines in the cluster, including memory utilization, CPU utilization, system load, and more; and an infrastructure for making allocation decisions based on cluster-wide resource utilization information as well as information in the request itself. Candidacy functions decide which servers are candidates to serve a request, in order of preference. The mod_backhand distribution comes with several built-in candidacy functions, two of which are byLoad and byRandom. byLoad [bias] reorders the list of candidate servers from least loaded to most loaded; the bias (a floating-point number) is used to prefer serving the request locally over proxying it and can be used to approximate the effort involved in proxying a request: it is the amount of load added to all other servers' loads before sorting takes place. byRandom (pseudo-)randomly reorders the list of servers given as input.
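As an indicative example only (the directive values here are assumptions, not taken from the implementation's configuration files), a candidacy function is normally enabled per directory in the Apache configuration with Backhand directives, for example "Backhand byRandom" or "Backhand byLoad" inside a <Directory> block; requests for that directory are then allocated according to the resulting candidate list.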

4.1 Evaluation of the Clone Cluster

Testbed for experimental evaluation. The experimental testbed, shown in Figure 6, consisted of two 1.5 GHz machines with 128 MB of RAM and 20 GB hard disks, running the Red Hat Fedora 9.0 operating system and the Apache 1.3.31 web server. Each machine is connected to a 100 Mbps shared-medium Ethernet LAN. A 1.0 GHz Pentium 3 machine with 64 MB of RAM is used to load the Clone Cluster and measure its performance; this load generator machine runs the WebStone benchmark.

WebStone [12] is a benchmark that tests the performance of HTTP across server platforms and different HTTP implementations. It is used to measure the maximum and average response times for connecting to the server. WebStone is executed simultaneously on one or more clients resident on the server's network. Each client can launch a number of child processes (called Webchildren), which request information from the server based on a given file load. WebStone is configurable and allows performance measurement of the server in terms of average and maximum connect time, average and maximum response time, data throughput rate, number of pages retrieved, and number of files retrieved. The benchmark sends HTTP requests to the server and then processes the performance data when done. The Webchildren are controlled by the Webmaster, which controls the operation of the benchmark run and can be run on one of the clients or on a separate machine. WebStone is a distributed, multi-process benchmark: as each of the Webchildren finishes its run, the Webmaster collects the data from each client and coalesces it into a report. During the run the Webchildren are autonomous of each other and of the Webmaster.
Figure 6: Clone Cluster with two clients and one server

4.2 Evaluation Experiments

Two experiments were defined to evaluate the performance of the Clone Cluster under the load balancing policies described above. Baseline experiment: this experiment baselines the performance of a single server machine using the WebStone benchmark; response time and server throughput were measured. Two-machine cluster experiment: this experiment measures the improvement (over a single machine) in response time and server throughput for a two-machine Clone Cluster. The experimental set-up was the same as for the baseline experiment.


4.3 Evaluation Results

Figure 7 shows the results for the single-machine (baseline) experiment, and Figures 8 and 9 show the results for the two-machine cluster experiments. The measurements are taken from the WebStone run log files. Figure 7 shows that the server throughput and average response time for one machine plateau at about 150 clients; beyond this point, further increases in load increase the average response time without a corresponding gain in server throughput, showing that the saturation point of the server has been reached. Figure 8 shows the server throughput and average response time for the two-machine cluster when client requests are distributed by load. Similarly, Figure 9 shows the server throughput and average response time for the two-machine cluster under random distribution of client requests.


The two-machine cluster removes the saturation point observed for the single machine.

Figure 10 shows the average response time versus the number of clients for the one-machine and two-machine clusters, with client requests distributed randomly and by load. It shows the saturation point of the one-machine cluster at 150 clients; this saturation point is passed by the two-machine cluster. The byLoad and byRandom policies show similar behavior. Figure 11 shows the server throughput versus the number of clients. The initial performance of the one-machine cluster is better than that of the two-machine cluster up to its saturation point. The two-machine cluster shows consistent performance.

Figure 7: Performance analysis of the baseline experiment

Figure 10: Average response time vs. number of clients

Figure 8: Performance analysis of the two-machine experiment (by-load policy)

Figure 11: Server throughput vs. number of clients

Summary

The Clone Cluster method has been designed, implemented, and evaluated for building Single System Image web sites. The Clone Cluster is similar to ONE-IP [4] in its use of a duplicated secondary IP address at the individual machines.

Figure 9: Performance analysis of the two-machine experiment (by-random policy)


Unlike ONE-IP, however, the Clone Cluster does not require configuration changes to an IP router; it uses a duplicated MAC address behind a simple layer-two switch. ONE-IP and the Clone Cluster are comparable in the cost of implementing a web site. The Clone Cluster is highly reliable and robust because of its ability to automatically detect failed and newly added machines. The inherent random distribution is not bad for real web traffic. Future work should include evaluation of the Clone Cluster for CPU-intensive applications such as server-side scripts.

Acknowledgements

I would like to acknowledge Prof. B. D. Chaudhary for his able guidance. I would also like to thank Rachit Srivastava, Abhinav Yadav, Gaurav Gangwar, Chetan Channa, and Anindita Dutta, who helped me throughout the project.

References
1. T. Brisco, "DNS Support for Load Balancing", RFC 1794, April 1995, http://www.rfc-editor.org/rfc/rfc1794.txt
2. K. Egevang and P. Francis, "The IP Network Address Translator (NAT)", RFC 1631, May 1994, http://www.rfc-editor.org/rfc/rfc1631.txt
3. A. Bestavros, M. Crovella, J. Liu and D. Martin, "Distributed Packet Rewriting and its Application to Scalable Server Architectures", Proceedings of the Sixth International Conference on Network Protocols, 1998.
4. O. Damani, P. Emerald Chung, Y. Huang, C. Kintala, and Y. Wang, "ONE-IP: Techniques for Hosting a Service on a Cluster of Machines", Journal of Computer Networks and ISDN Systems, Vol. 29, No. 8-13, pp. 1019-1027, 1997.
5. Sujit Vaidya and Kenneth J. Christensen, "A Single System Image Server Cluster using Duplicated MAC and IP Addresses", Proceedings of the 26th Annual IEEE Conference on Local Computer Networks, 2001.
6. W. Stevens, TCP/IP Illustrated, Vol. 1, Addison Wesley Longman, 1998.
7. W. Stevens, TCP/IP Illustrated, Vol. 2, Addison Wesley Longman, 1998.
8. Two-layer switch, http://www.enterasys.com/products/switching/VH-2402S2/
9. "An Ethernet Address Resolution Protocol", RFC 826, November 1982, http://www.rfc-editor.org/rfc/rfc826.txt
10. Compiling the Linux Kernel, http://www.freeos.com/articles/2589/
11. "Transmission Control Protocol", RFC 793, September 1981, http://www.rfc-editor.org/rfc/rfc793.txt
12. WebStone, The Benchmark for Web Servers, http://www.mindcraft.com/benchmarks/webstone/
13. mod_backhand, Theo Schlossnagle, April 2001, http://www.cnds.jhu.edu/~jesus/, http://www.backhand.org/mod_backhand/
14. MAC spoofing under Linux, http://learnlinux.tsf.org.za/courses/build/netadmin/ch05.html
15. IPTABLES, http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/ch-iptables.html

Appendix: Client-side TCP connection Ethereal dump

The servers in the cluster and the client machine are connected through a layer-two switch. The client request is received by both servers, but a connection is established with only one of them. The path of the TCP request is shown in Figure 12.

Figure 12: TCP request

The client machine and the two servers are connected as shown in Figure 12. The Ethereal dump captured on the client machine (172.31.105.110) for traffic to and from the servers (172.31.105.234) is shown below. The client makes an ARP request in Frame 1 and receives ARP replies from both servers in Frames 2 and 4; the source MAC addresses in their Ethernet headers differ, so the two ARP replies come from different servers. The client sends the first SYN packet in Frame 3 and receives the first SYN/ACK reply from the server with virtual MAC 00:02:55:4a:45:62; the connection is established with this machine. The SYN/ACK from the second server, with virtual MAC address 00:02:55:4a:41:65, is received in Frame 7 and is discarded. The client completes the TCP handshake, and data is transferred from the server with virtual MAC 00:02:55:4a:45:62.
Frame 1 (42 bytes on wire, 42 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: ff:ff:ff:ff:ff:ff
  Address Resolution Protocol (request)

Frame 2 (60 bytes on wire, 60 bytes captured)
  Ethernet II, Src: 00:02:55:4a:45:62, Dst: 00:02:55:4a:47:36
  Address Resolution Protocol (reply)

Frame 3 (62 bytes on wire, 62 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971471612, Ack: 0, Len: 0

Frame 4 (60 bytes on wire, 60 bytes captured)
  Ethernet II, Src: 00:02:55:4a:41:65, Dst: 00:02:55:4a:47:36
  Address Resolution Protocol (reply)

Frame 5 (62 bytes on wire, 62 bytes captured)

  Ethernet II, Src: 00:02:55:4a:45:62, Dst: 00:02:55:4a:47:36
  Internet Protocol, Src Addr: 172.31.105.234 (172.31.105.234), Dst Addr: 172.31.105.110 (172.31.105.110)
  Transmission Control Protocol, Src Port: http (80), Dst Port: 2331 (2331), Seq: 1855182594, Ack: 3971471613

Frame 6 (54 bytes on wire, 54 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971471613, Ack: 1855182595

Frame 7 (62 bytes on wire, 62 bytes captured)
  Ethernet II, Src: 00:02:55:4a:41:65, Dst: 00:02:55:4a:47:36
  Internet Protocol, Src Addr: 172.31.105.234 (172.31.105.234), Dst Addr: 172.31.105.110 (172.31.105.110)
  Transmission Control Protocol, Src Port: http (80), Dst Port: 2331 (2331), Seq: 1744890722, Ack: 3971471613

Frame 8 (54 bytes on wire, 54 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971471613, Ack: 1855182595

Frame 9 (500 bytes on wire, 500 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971471613, Ack: 1855182595
  Hypertext Transfer Protocol

Frame 10 (60 bytes on wire, 60 bytes captured)
  Ethernet II, Src: 00:02:55:4a:45:62, Dst: 00:02:55:4a:47:36
  Internet Protocol, Src Addr: 172.31.105.234 (172.31.105.234), Dst Addr: 172.31.105.110 (172.31.105.110)
  Transmission Control Protocol, Src Port: http (80), Dst Port: 2331 (2331), Seq: 1855182595, Ack: 3971472059

Frame 11 (318 bytes on wire, 318 bytes captured)
  Ethernet II, Src: 00:02:55:4a:45:62, Dst: 00:02:55:4a:47:36
  Internet Protocol, Src Addr: 172.31.105.234 (172.31.105.234), Dst Addr: 172.31.105.110 (172.31.105.110)
  Transmission Control Protocol, Src Port: http (80), Dst Port: 2331 (2331), Seq: 1855182595, Ack: 3971472059
  Hypertext Transfer Protocol

Frame 12 (54 bytes on wire, 54 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971472059, Ack: 1855182859

Frame 13 (409 bytes on wire, 409 bytes captured)

  Ethernet II, Src: 00:02:55:4a:45:62, Dst: 00:02:55:4a:47:36
  Internet Protocol, Src Addr: 172.31.105.234 (172.31.105.234), Dst Addr: 172.31.105.110 (172.31.105.110)
  Transmission Control Protocol, Src Port: http (80), Dst Port: 2331 (2331), Seq: 1855182859, Ack: 3971472060
  Hypertext Transfer Protocol

Frame 14 (54 bytes on wire, 54 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Internet Protocol, Src Addr: 172.31.105.110 (172.31.105.110), Dst Addr: 172.31.105.234 (172.31.105.234)
  Transmission Control Protocol, Src Port: 2331 (2331), Dst Port: http (80), Seq: 3971472060, Ack: 1855182859

Frame 15 (60 bytes on wire, 60 bytes captured)
  Ethernet II, Src: 00:02:55:4a:41:65, Dst: 00:02:55:4a:47:36
  Address Resolution Protocol (request)

Frame 16 (42 bytes on wire, 42 bytes captured)
  Ethernet II, Src: 00:02:55:4a:47:36, Dst: aa:aa:aa:aa:aa:aa
  Address Resolution Protocol (reply)
