
Using iPerf to Troubleshoot Speed/Throughput Issues

December 29, 2011


Posted by Andrew Tyler in Customer Service, SoftLayer, Technology, Tips and Tricks
Two of the most common network characteristics we look at when investigating network-related concerns in the NOC are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data and, to your shock, you see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. The traceroute and MTR look fine, so where's the performance and bandwidth you're paying for?
This issue is all too common, and it usually has nothing to do with the network. The culprits are none other than TCP and the laws of physics.
In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery, it doesn't send more until it receives an acknowledgement from the remote host that all of the data was received. This is called the "TCP window." Signals travel at close to the speed of light, and typically most hosts are fairly close together, so this "windowing" happens so fast we don't even notice it. But the speed of light is constant, so the farther apart the two hosts are, the longer it takes for the sender to receive each acknowledgement, and the lower the overall throughput. The amount of data a connection must keep "in flight" to fill the pipe, which is the link's bandwidth multiplied by the round-trip time, is called the "Bandwidth Delay Product," or BDP.
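To put rough numbers on this, here is a back-of-the-envelope sketch; the 1 Gbit/s bandwidth and 70 ms round-trip time are assumed figures for a long-haul path, not measurements taken from this article:

# Hypothetical example: BDP = bandwidth x RTT
# 1,000,000,000 bit/s x 0.070 s = 70,000,000 bits, i.e. roughly 8.3 MBytes in flight
echo "1000000000 * 0.070 / 8 / 1048576" | bc -l

# Conversely, a single flow is limited to roughly window / RTT:
# a 16 KByte window over a 70 ms path tops out near 1.9 Mbit/s, however fast the link is
echo "16384 * 8 / 0.070 / 1000000" | bc -l

That window-divided-by-RTT ceiling is why a long-haul transfer can crawl even when both ends sit on gigabit ports.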
We can overcome the effects of BDP to some degree by keeping more data in flight at a time. We do this by adjusting the "TCP window," telling TCP to send more data per flow than the default parameters allow. Each OS is different and the default values vary, but almost all operating systems allow tweaking of the TCP stack and/or the use of parallel data streams. So what is iPerf, and how does it fit into all of this?
What is iPerf?
iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD, or Windows and is installed on two endpoints. One side runs in 'server' mode, listening for requests; the other runs in 'client' mode, sending data. When activated, it tries to push as much data down your pipe as it can, spitting out transfer statistics as it goes. What's so cool about iPerf is that you can test any number of TCP window settings in real time, even using parallel streams. There's even a Java-based GUI that runs on top of it called JPerf (JPerf is beyond the scope of this article, but I recommend looking into it). What's even cooler is that because iPerf runs entirely in memory, there are no files to clean up.
How do I use iPerf?
iPerf can be quickly downloaded from SourceForge and installed. It uses port 5001 by default, and the bandwidth it reports is measured from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once it's installed, simply bring up the command line on both hosts and run these commands.
On the server side:
iperf -s
On the client side:
iperf -c [server_ip]
The output on the client side will look like this:
#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 10.0 sec 10.0 MBytes 8.39 Mbits/sec
There are a lot of things we can do to make this output better with more meaningful data. For
example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), and we want to
display transfer data every 2 seconds instead of the default of 10 (-i 2), and we want to test on
port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:
Client Side:
#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 6.00 MBytes 25.2 Mbits/sec
[ 3] 2.0- 4.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 4.0- 6.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 6.0- 8.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 8.0-10.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 10.0-12.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 12.0-14.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 14.0-16.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 16.0-18.0 sec 6.88 MBytes 28.8 Mbits/sec
[ 3] 18.0-20.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec
Server Side:
#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 6.05 MBytes 25.4 Mbits/sec
[ 4] 2.0- 4.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 4.0- 6.0 sec 6.94 MBytes 29.1 Mbits/sec
[ 4] 6.0- 8.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 8.0-10.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 10.0-12.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 12.0-14.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 14.0-16.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 16.0-18.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 18.0-20.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec
There are many, many other parameters you can set that are beyond the scope of this article, but for
our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP window
options and parallel streams. To set a new TCP window you use the -w switch and you can set the
parallel streams by using -P.
Increased TCP window commands:
Server side:
#iperf -s -w 1024k -i 2 -p 8000
Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
And here are the iperf results from two SoftLayer file servers, one in Washington, D.C., acting as the client and the other in Seattle acting as the server:
Client Side:
# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 3] 2.0- 4.0 sec 28.5 MBytes 120 Mbits/sec
[ 3] 4.0- 6.0 sec 28.4 MBytes 119 Mbits/sec
[ 3] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 3] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 10.0-12.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 16.0-18.0 sec 27.9 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 0.0-20.0 sec 283 MBytes 118 Mbits/sec
Server Side:
#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 4] 2.0- 4.0 sec 28.6 MBytes 120 Mbits/sec
[ 4] 4.0- 6.0 sec 28.3 MBytes 119 Mbits/sec
[ 4] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 4] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 10.0-12.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 4] 16.0-18.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 0.0-20.0 sec 283 MBytes 118 Mbits/sec
We can see here that by increasing the TCP window from the default value to 1 MB (1024k), we roughly quadrupled throughput over our baseline, from about 29 Mbit/s to about 118 Mbit/s. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams we can fill the pipe close to its maximum usable capacity.
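As a rough rule of thumb (ignoring congestion, protocol overhead, and link saturation), each parallel stream carries its own TCP window, so the aggregate is approximately the number of streams times the single-stream rate. Using the ~118 Mbit/s per-flow figure from the test above:

# Approximate aggregate = streams x single-stream rate
echo "7 * 118" | bc        # about 826 Mbit/s with seven parallel 1 MB-window flows

That back-of-the-envelope figure lines up with the [SUM] lines in the test below.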
Parallel Stream Command:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
Client Side:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 7] 0.0- 2.0 sec 25.6 MBytes 107 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 5] 0.0- 2.0 sec 25.8 MBytes 108 Mbits/sec
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 746 Mbits/sec

(output omitted for brevity on server & client)

[ 7] 18.0-20.0 sec 28.2 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[ 9] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 837 Mbits/sec
[SUM] 0.0-20.0 sec 1.93 GBytes 826 Mbits/sec
Server Side:
#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 5] 0.0- 2.0 sec 25.7 MBytes 108 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 10] 0.0- 2.0 sec 25.9 MBytes 108 Mbits/sec
[ 7] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 747 Mbits/sec

[ 4] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.3 MBytes 119 Mbits/sec
[ 7] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 10] 18.0-20.0 sec 28.1 MBytes 118 Mbits/sec
[ 9] 18.0-20.0 sec 28.0 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 838 Mbits/sec
[SUM] 0.0-20.1 sec 1.93 GBytes 825 Mbits/sec
As you can see from the tests above, we were able to increase throughput from about 29 Mbit/s with a single stream and the default TCP window to roughly 825 Mbit/s using a larger window and parallel streams. On a gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. The bottom line is that I was able to prove out the network and verify that bandwidth capacity was not an issue. From that conclusion, I could focus on tweaking TCP to get the most out of my network.
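As a rough illustration of what that TCP tweaking can look like on a Linux host, here is a minimal sketch; the 16 MB maximums are illustrative values chosen for the example, not SoftLayer recommendations, and they do not persist across a reboot:

# Show the current socket buffer limits (tcp_rmem/tcp_wmem list min, default and max in bytes)
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Temporarily raise the maximums so a larger window (e.g. iperf -w) can actually take effect
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

After adjusting, re-run the same iperf baseline and compare; if the change helps, it can be made permanent in /etc/sysctl.conf.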
I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve; push any harder and you'll begin to saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that it's become invaluable to me, and I think it will become invaluable to you as well!
-Andrew
Comments
Billy Bong Says:
December 29th, 2011 at 10:50am
Nice thanks. I was using NetPCS but it only runs on Windows.
Kathleenisobel Says:
January 5th, 2012 at 1:59pm
We were using iperf for a while. We switched to pathtest - it's still command line and still free, but more customizable - TCP, UDP and ICMP and results have been consistent. www.testmypath.com
rajib Says:
October 4th, 2012 at 9:39pm
Hi,
how you calculate window size to achieve a particular throughput for a stream?
in iperf log single user peak throughput represents which portion?
in parallel tcp stream single user peak throughput means what??
how can i achive single user peak throughput 25 Mbps for multiple tcp streams in perf....
can you please let me know the answers of the above questions....
i need it urgently...
Thanks for your help..
Rajib
Chris Says:
July 30th, 2013 at 9:01pm
You're using your units wrong!
You keep referring to MB/s, when the iperf output is in Mbit/s. One is 8 times the other -- you're not
getting 824MB/s on a gigabit network, that's for sure!
khazard Says:
July 31st, 2013 at 12:00pm
You're right about that, Chris. Thanks for pointing out that typo! The code output reflected the correct
units but the summary of the results was incorrect. We've edited the content to show that the
numbers are megabits per second rather than megabytes per second.
Jnyanesh Says:
September 3rd, 2013 at 3:56am
We are testing Fast Ethernet circuit using iperf, mainly looking for throughput of TCP & UDP.
We are using laptop both side. UDP have no issue for both side.
TCP result is strange. Because I made A side as server other as client, the result was far better (like
70~80M). Then I closed iperf, restarted both laptops.
This time I made B end as server other as client. The result is less than 5M.
Anyone have any idea? please help
selvakumar Says:
October 14th, 2013 at 5:18am
hi
plz help me m dion my project in linux xenomai and rtnet so i need to find iprf and jperf i dnt knpw
how to work with that so can u help me plz....
Me Says:
December 3rd, 2013 at 6:39pm
928Mb/s here on Gbit LAN.
JPerf client on windows, iperf server on linux.
TCP, 4 streams, 2Mbit TCP Windows size, 16 sec transmit time
CAT5 FTP cable (foiled twisted pair), not even CAT5e!
So 824Mb/s and 90% are NOT the practical maximums ;)
Stranger Says:
December 10th, 2013 at 5:45pm
Could anyone tell me what I am doing wrong here?
Is anyone experiencing the same results as me?
first run:
machine1: iperf -s
machine2: iperf -c 10.0.0.20
provides 304Mbit/sec saying default window size 64KByte
second run:
machine1: iperf -s -w 64K
machine2: iperf -c 10.0.0.20 -w 64K
provides 3.80Gbit/sec with provided 64KByte window size...
Proof: http://puu.sh/5IJy4.jpg
Srao Says:
January 16th, 2014 at 2:47pm
I am seeing the same behavior as @Stranger
First run after rebooting the machines: 9.41 Gbits per second on a 10Gb link. The two machines are
connected by one optical cable sitting side by side.
Subsequent runs yield ~300Mbps.
I did not see any difference in TCP handshake on wireshark.
1st run 9.4 Gbps 99.4% CPU utilization
2nd run 300 Mbps 4% CPU utilization.
So, why is this different?
BTW both machines are running ubuntu 12.04 and identical blade server hardware with massive
CPU/memory resources and iperf is the only "application" running.
