
InfiniBand (abbreviated IB) is a computer-networking communications standard used in high-performance computing that features very
high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or
switched interconnect between servers and storage systems, as well as an interconnect between storage systems.[1]
As of 2014 it was the most commonly used interconnect in supercomputers. Mellanox and Intel manufacture InfiniBand host bus adapters and network
switches, and in February 2016 it was reported[2] that Oracle Corporation had engineered its own InfiniBand switch units and server adapter chips for
use in its own product lines and by third parties. Mellanox IB cards are available for Solaris, RHEL, SLES, Windows, HP-UX, VMware
ESX,[3] and AIX.[4] InfiniBand is designed to be scalable and uses a switched fabric network topology.
As an interconnect, IB competes with Ethernet, Fibre Channel, and proprietary technologies[5] such as Intel Omni-Path.
The technology is promoted by the InfiniBand Trade Association.

Performance
Characteristics                                     | SDR        | DDR  | QDR  | FDR10   | FDR        | EDR      | HDR     | NDR        | XDR    |
Signaling rate (Gbit/s)                             | 2.5        | 5    | 10   | 10.3125 | 14.0625[6] | 25.78125 | 50      | 100        | 250    |
Theoretical effective throughput per 1x (Gbit/s)[7] | 2          | 4    | 8    | 10      | 13.64      | 25       | 50      |            |        |
Speeds for 4x links (Gbit/s)                        | 8          | 16   | 32   | 40      | 54.54      | 100      | 200     |            |        |
Speeds for 8x links (Gbit/s)                        | 16         | 32   | 64   | 80      | 109.08     | 200      | 400     |            |        |
Speeds for 12x links (Gbit/s)                       | 24         | 48   | 96   | 120     | 163.64     | 300      | 600     |            |        |
Encoding (bits)                                     | 8/10       | 8/10 | 8/10 | 64/66   | 64/66      | 64/66    | 64/66   |            |        |
Adapter latency (microseconds)[8]                   | 5          | 2.5  | 1.3  | 0.7     | 0.7        | 0.5      |         |            |        |
Year[9]                                             | 2001, 2003 | 2005 | 2007 | 2011    | 2011       | 2014[7]  | 2017[7] | after 2020 | future |

Links can be aggregated: most systems use a 4x aggregate. 8x and 12x links are typically used for cluster and supercomputer interconnects and for
inter-switch connections.
InfiniBand also provides RDMA capabilities for low CPU overhead.
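The effective figures in the table follow from signaling rate x line-code efficiency x lane count. The short C program below is purely illustrative (it is not part of any InfiniBand software) and simply reproduces a few of the per-lane and aggregate numbers quoted above, e.g. FDR at 14.0625 Gbit/s with 64/66 encoding gives about 13.64 Gbit/s per lane and 54.54 Gbit/s over a 4x link.

    #include <stdio.h>

    int main(void) {
        /* Signaling rate per lane and line-code efficiency, taken from the table above. */
        struct { const char *name; double signal_gbps; double enc; } gen[] = {
            { "SDR",  2.5,      8.0 / 10.0 },   /* 8/10 encoding  */
            { "DDR",  5.0,      8.0 / 10.0 },
            { "QDR", 10.0,      8.0 / 10.0 },
            { "FDR", 14.0625,  64.0 / 66.0 },   /* 64/66 encoding */
            { "EDR", 25.78125, 64.0 / 66.0 },
        };
        const int lanes[] = { 1, 4, 8, 12 };

        for (size_t i = 0; i < sizeof gen / sizeof gen[0]; i++) {
            printf("%-3s:", gen[i].name);
            for (size_t j = 0; j < sizeof lanes / sizeof lanes[0]; j++)
                printf("  %2dx = %7.2f Gbit/s",
                       lanes[j], gen[i].signal_gbps * gen[i].enc * lanes[j]);
            printf("\n");
        }
        return 0;
    }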

Topology
InfiniBand uses a switched fabric topology, in contrast to the shared-medium topology of early Ethernet. All transmissions begin or end at a channel
adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also
exchange information for security or quality of service (QoS).
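On the host side, the channel adapters are visible to software through the verbs interface described below. The following minimal sketch assumes the libibverbs library from the OpenFabrics stack is installed (compile with -libverbs); it only lists the adapters present on the node, and the device names it prints (e.g. mlx5_0) depend entirely on the hardware.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        /* Every InfiniBand endpoint on this host appears as a channel adapter. */
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }
        printf("%d channel adapter(s) found\n", num);
        for (int i = 0; i < num; i++)
            printf("  %s\n", ibv_get_device_name(list[i]));
        ibv_free_device_list(list);
        return 0;
    }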

Messages
InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message. A message can be:

- a direct memory access read from, or write to, a remote node (RDMA)
- a channel send or receive
- a transaction-based operation (that can be reversed)
- a multicast transmission
- an atomic operation
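To make these message types concrete, the sketch below expresses an RDMA write using the de facto verbs API discussed in the next section. It is only a sketch: the function name post_rdma_write is invented here, and it assumes a queue pair qp that is already connected, a locally registered memory region mr, and a remote address and rkey exchanged out of band; error handling and completion polling are omitted.

    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    /* Post a one-sided RDMA write: the local buffer behind 'mr' is written into
     * the remote buffer identified by 'remote_addr'/'rkey'. Returns 0 on success. */
    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uint64_t)(uintptr_t)mr->addr,   /* local source buffer */
            .length = (uint32_t)mr->length,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&wr, 0, sizeof wr);
        wr.wr_id      = 1;
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.opcode     = IBV_WR_RDMA_WRITE;   /* other message types use e.g.
                                                IBV_WR_SEND, IBV_WR_RDMA_READ,
                                                or the atomic opcodes */
        wr.send_flags = IBV_SEND_SIGNALED;   /* request a completion entry */
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);
    }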

API
InfiniBand has no standard API. The standard only lists a set of verbs such as ibv_open_device or ibv_post_send, which are abstract
representations of functions or methods that must exist. The syntax of these functions is left to the vendors. The de facto standard software stack is
developed by the OpenFabrics Alliance. It is released under a choice of two licenses, GPL2 or BSD, for GNU/Linux and FreeBSD, and as WinOF under a
BSD license for Windows. It has been adopted by most of the InfiniBand vendors for GNU/Linux, FreeBSD, and Windows.
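As a minimal sketch of how the verbs are used in practice (again assuming libibverbs from the OpenFabrics stack; compile with -libverbs), the program below opens the first adapter with ibv_open_device, allocates a protection domain, and registers a 4 KB buffer so that it could later be the target of the operations listed above. Buffer size and access flags are illustrative choices, not requirements of the standard.

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        /* Open the first adapter and allocate a protection domain for it. */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;

        /* Register a 4 KB buffer so the HCA may read and write it via RDMA. */
        void *buf = malloc(4096);
        struct ibv_mr *mr = (pd && buf) ? ibv_reg_mr(pd, buf, 4096,
                                                     IBV_ACCESS_LOCAL_WRITE |
                                                     IBV_ACCESS_REMOTE_WRITE) : NULL;
        if (mr)
            printf("registered 4096 bytes, lkey=0x%x rkey=0x%x\n",
                   mr->lkey, mr->rkey);

        /* Release resources in reverse order of creation. */
        if (mr)  ibv_dereg_mr(mr);
        free(buf);
        if (pd)  ibv_dealloc_pd(pd);
        if (ctx) ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }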
