
Report: InfiniBand
Tomasz Palapacz

1. BACKGROUND
In the year 2000, a new high-speed interconnect, InfiniBand, was announced with the aim of replacing all other internal and external system interconnects. Today, InfiniBand is widely deployed and provides high-speed interconnects (from 5 Gbit/s up to 120 Gbit/s) (Mellanox, 2010). InfiniBand supports interconnections within or among servers (server-to-server connectivity), as well as among storage systems (server-to-storage and storage-to-storage connectivity) (Voltaire, 2010).
Major corporations are now introducing InfiniBand-based solutions. To illustrate, Sun Microsystems announced that Sun's Utility Computing Grid will be based on InfiniBand, and IBM has presented an InfiniBand-based solution for the IBM BladeCenter (IBM Press Release, 2009). Furthermore, IBM supports InfiniBand for its General Parallel File System (GPFS) based clusters. The user community can now take advantage of InfiniBand features such as high bandwidth, extremely low latency, simple implementation and low cost to build solid cluster interconnects or SAN solutions (IBTA, 2010).
InfiniBand systems have become increasingly popular in various application domains, mainly due to their high performance-to-cost ratio. Of the current Top 500 supercomputers, 414 systems are clusters (Top500.org, 2010).
2. AIM AND OBJECTIVES
Aim:
The aim of this report is to assess the InfiniBand technology and the functionality of its components.
Objectives:
- To provide a wider understanding of InfiniBand technology and evaluate how InfiniBand can be used in different computing environments.
- To analyse the major architectural areas of InfiniBand: fabric (switches and routers), components, software and management.
- To evaluate the benefits and limitations of InfiniBand.
- To compare and contrast InfiniBand with contemporary technologies.
- To discuss the effectiveness of InfiniBand in current technology.


3. INTRODUCTION
Initially, InfiniBand was considered as a replacement for the PCI bus. PCI, PCI Express and InfiniBand technologies all share the same general goal of improving I/O bandwidth, but each technology attacks a different problem. Two of these technologies, namely PCI and PCI Express, are local buses. InfiniBand, in contrast, enables a fabric that offers substantially more features and capabilities than a local bus interconnect (Shanley and Winkles, 2003).
The InfiniBand development process targeted high bandwidth and high expandability for future computing systems, together with innovative features like Remote Direct Memory Access (RDMA) (Shanley and Winkles, 2003). This, along with relatively low prices due to wide adoption, makes InfiniBand very attractive to High Performance Computing (HPC) vendors. In HPC environments, InfiniBand has been the interconnect of choice for several years, mainly due to its low latency (Elliot, 2009). Over the past few years it has made a remarkable comeback and has been backed by large companies, which use it to develop strategic products such as grid solutions, blade servers, storage and cluster interconnects.

4. HISTORY OF INFINIBAND
InfiniBand came alive in 1999 as a proposal to solve future communication problems related to the limited transaction rate of the PCI port (Richmond, 2001). High-speed communication over fibre optics existed in 1999, but it was an expensive solution.
"InfiniBand is a high speed serial point-to-point link through copper. Instead of trying to drive lots of wires in a shared bus in parallel, a single wire at much higher speed is used." (Futral, 2001)
The InfiniBand Trade Association was created as a merger of two organisations, Next Generation I/O and Future I/O (see Figure 1).


Fig. 1. InfiniBand Evolution
InfiniBand rapidly became a widely accepted medium for inter-node networks. The specification was finished in June 2001, and from 2002 onwards a number of vendors started to offer products based on the InfiniBand standard (Gupta, 2002).
Today InfiniBand enjoys broad industry support via the InfiniBand Trade Association (IBTA) and its business partners. IBTA membership has expanded to more than 30 companies, including Dell, Hewlett-Packard, IBM, Intel, Mellanox and Sun Microsystems (IBTA, 2010). (A full list of major vendors is included in Appendix A.)
The use of InfiniBand in enterprise data centres has recently become more significant. In 2008, Oracle Corporation released its HP Exadata Oracle Database Machine, which utilizes InfiniBand as the backend interconnect for all I/O and interconnect traffic. The updated version of Exadata now uses Sun computing hardware and continues to use an InfiniBand infrastructure (Oracle, 2008).


In 2009, IBM announced its DB2 pureScale offering, a shared-disk clustering scheme that uses a cluster of IBM System servers communicating with each other over an InfiniBand interconnect (IBM Press Release, 2009).
In 2010, scale-out high-performance network storage systems such as IBM SONAS, Isilon, DataDirect and Terascala adopted InfiniBand as their primary storage interconnect (Kerner, 2010).

5. INFINIBAND ARCHITECTURE AND COMPONENTS
InfiniBand is a high-performance, switch-based interconnect architecture that is designed to operate within a server (component-to-component communication, replacing existing bus technologies), as well as an external interconnect solution (frame-to-frame communication for server or storage components). It offers a single interconnect for clustering, communication and storage purposes (IBTA, 2010).
5.1 Input/Output Architecture
The shared bus architecture is the most common I/O interconnect today, although it has numerous drawbacks. Clusters and networks require systems with high-speed, fault-tolerant interconnects that cannot be properly supported with a bus architecture (Elliot, 2009). Thus, all bus architectures require network interface modules to enable scalable network topologies. To keep pace with modern systems, an I/O architecture must provide a high-speed connection with the ability to scale. The table below (Figure 2) provides a simple feature comparison between a switched fabric architecture and a shared bus architecture (Voltaire, 2010).

Feature                  Fabric    Bus
Pin count                Low       High
Number of end points     Many      Few
Max signal length        Miles     Inches
Reliability              Yes       No
Scalable                 Yes       No
Fault tolerant           Yes       No

Figure 2. Fabric architecture versus bus architecture


5.2 InfiniBand Switched-Fabric Architecture
The InfiniBand architecture is a point-to-point, switched fabric architecture that connects various end points/nodes (Figure 3). Each end point can be a storage controller, a network interface card (NIC) or an interface to a host system.

Figure 3. InfiniBand Fabric Architecture (IBTA, 2010)

The switched fabric architecture provides scalability, which is accomplished by adding switches to the fabric and connecting more end nodes through them. Unlike a shared bus architecture, the total bandwidth of the system increases as additional switches are added to the network (Elliot, 2009).
5.2.1 Remote Direct Memory Access (RDMA)
The RDMA feature of InfiniBand, used through the Message Passing Interface (MPI), allows database servers in the cluster to read from and write to each other's memory directly (Elliot, 2009).
By incorporating RDMA technology, InfiniBand typically achieves a low latency of 3 to 5 microseconds, with some manufacturers claiming latencies as low as 1 to 2 microseconds (Mellanox, 2010). In contrast, Ethernet latencies typically range from 20 to 80 microseconds (Mellanox, 2007). These features make InfiniBand especially useful as a computing cluster interconnect, since tightly coupled cluster applications require low latencies for optimum performance (Hoskins, 2005; Fey, 2010).
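The cited sources give no code, so the following is only a minimal sketch of how an application might exercise RDMA-style transfers through MPI's one-sided interface, which MPI libraries commonly map onto InfiniBand RDMA when the fabric is available; the rank roles and buffer size are illustrative assumptions, not details from the report.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal sketch: rank 0 writes a value directly into rank 1's exposed
     * memory window using MPI_Put. Over InfiniBand, MPI implementations
     * commonly service such one-sided operations with RDMA writes. */
    int main(int argc, char **argv)
    {
        int rank, remote = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank exposes one int; only rank 1's copy is targeted here. */
        MPI_Win_create(&remote, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);              /* open the access epoch */
        if (rank == 0) {
            int value = 42;
            /* Write 'value' into rank 1's window at displacement 0. */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);              /* complete all transfers */

        if (rank == 1)
            printf("rank 1 received %d via one-sided put\n", remote);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Run with two ranks (e.g. mpirun -np 2); the point is that the transfer completes without the target rank issuing a matching receive, which is what lets the interconnect's RDMA engine do the work.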





5.3 Components of InfiniBand
An InfiniBand interconnect consists of many hardware and software components, including:
- InfiniBand switches and routers
- The subnet manager
- Host channel adapters (HCA)
- Target channel adapters (TCA)
- Cabling and links
- Operating system
InfiniBand was designed to work with conventional switches as a pure server interconnect, and with multi-fabric server switches that combine the server interconnect function with Ethernet and Fibre Channel gateways, as illustrated in the figure below (Figure 4).

Figure 4. An example of a network design using InfiniBand (IBTA, 2010)

The most basic InfiniBand infrastructure consists of host nodes or servers equipped with HCAs and subnet manager software. More expansive networks include multiple switches.
With a multi-fabric switch, InfiniBand architecture is used to connect servers to the switch fabric, Fibre Channel is used to interconnect from the switch to the storage units, and Ethernet is used to connect from the switch to the user base (through more traditional switches) and to the local area network (IBTA, 2010; Mellanox, 2007).


5.3.1 Switches
Switches are the fundamental component of an InfiniBand fabric. A switch contains more than one InfiniBand port and forwards packets from one of its ports to another (Mellanox, 2010). A switch provides scalability to an InfiniBand infrastructure by allowing a number of HCAs, TCAs and other IB switches to connect to it. The switch handles network traffic by checking the local link header of each data packet received and forwarding the packet to the proper destination (IBTA, 2010).

Pictured: a Sun Microsystems InfiniBand switch and a Voltaire InfiniBand switch.
5.3.2 Routers
InfiniBand routers forward packets from one subnet to another without consuming or generating packets. Unlike a switch, a router reads the Global Route Header and forwards the packet based on its IPv6-style network layer address (IBTA, 2010).
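For illustration only (and not taken from the cited sources), the C struct below sketches the fields of the 40-byte Global Route Header that a router inspects; the bit-level packing of the first word is simplified into a single 32-bit field rather than shown in exact wire form.

    #include <stdint.h>

    /* Simplified view of the InfiniBand Global Route Header (GRH).
     * Its layout deliberately mirrors an IPv6 header, which is why the
     * routing decision is based on an IPv6-style 128-bit address (GID). */
    struct ib_grh_sketch {
        uint32_t version_tclass_flow;  /* 4-bit version, 8-bit traffic class,
                                          20-bit flow label (packed) */
        uint16_t payload_length;       /* bytes following the GRH */
        uint8_t  next_header;          /* what follows the GRH */
        uint8_t  hop_limit;            /* decremented by each router */
        uint8_t  source_gid[16];       /* 128-bit source GID */
        uint8_t  dest_gid[16];         /* 128-bit destination GID used
                                          by the router for forwarding */
    };

A switch never needs these fields; it forwards on the local link header alone, which is exactly the distinction the paragraph above draws.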
5.3.3 Subnet Manager
The subnet manager is the leading "object" of the InfiniBand fabric. Each InfiniBand subnet has at least one subnet manager.
"Each subnet manager resides on a port of a channel adapter, router, or switch and can be implemented either in hardware or software. The subnet managers are also responsible for negotiating and matching data rates for a point-to-point channel between two nodes." (Mellanox, 2010)
This can be needed if, for example, one node has a 4x connection and sends data to a storage system with only a 1x connection. The subnet manager then sets up a 1x channel without dropping packets or impeding any higher-speed traffic (IBTA, 2010).



5.3.4 Channel Adapters
A channel adapter is the physical device that connects two InfiniBand devices. A channel adapter can be either a host channel adapter (HCA) or a target channel adapter (TCA).

Pictured: an InfiniBand channel adapter.
5.3.4.1 Host Channel Adapters (HCA)
InfiniBand host adapters, called Host Channel Adapters (HCAs), contain a protocol processing engine that implements a hardware queue to accept commands for each communication endpoint on the adapter. A host channel adapter connected to the host processor through a standard Peripheral Component Interconnect (PCI), PCI Extended (PCI-X), or PCI Express bus provides the host interface. Each HCA can have more than one InfiniBand port (Mellanox, 2010; IBTA, 2010).
5.3.4.2 Target Channel Adapters (TCA)
A TCA is a specialized channel adapter (CA). A TCA would be used as a gateway in a data storage device and generally does not have the full functionality and resources of an HCA. A TCA enables I/O devices to be located within the network, independent of a host computer. The TCA also includes an I/O controller that is specific to its particular device's protocol, such as SCSI, Fibre Channel, or Ethernet (Mellanox, 2010; IBTA, 2010).
5.3.5 Operating Systems
While the majority of existing InfiniBand clusters operate on the Linux platform, drivers and HCA stacks are also available for Microsoft Windows, Solaris and other operating systems from various InfiniBand hardware and software vendors (Mellanox, 2010).
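On Linux these driver stacks are normally exposed to applications through the verbs library (libibverbs). As a hedged illustration, not a detail drawn from the report's sources, the short C program below simply enumerates the channel adapters visible to the host; it assumes libibverbs is installed and at least one HCA is present.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* List the channel adapters registered with the verbs stack.
     * Compile with: gcc list_hcas.c -libverbs */
    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (!devices) {
            perror("ibv_get_device_list");
            return 1;
        }

        printf("found %d InfiniBand device(s)\n", num_devices);
        for (int i = 0; i < num_devices; i++)
            printf("  %s\n", ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }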
5.3.6 Cables and Transfer Rates
The terminology for InfiniBand is set by the IBTA and currently consists of six standard link speeds:
- SDR: Single Data Rate
- DDR: Double Data Rate
- QDR: Quad Data Rate
- FDR: Fourteen Data Rate
- EDR: Enhanced Data Rate
- HDR: High Data Rate
The links in InfiniBand are point-to-point (switched) and bidirectional, using a 2.5 Gb/s signalling rate per lane at SDR. The basic 1x cable is capable of a data rate of 2.5 Gb/s in each direction simultaneously. The other bundle sizes are 4x (10 Gb/s), 8x (20 Gb/s) and 12x (30 Gb/s) (Elliot, 2009; Fey, 2010).
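Since these figures are raw signalling rates, a quick worked calculation helps separate them from usable bandwidth. The C snippet below is only an illustrative sketch: it applies the 8b/10b encoding overhead used by SDR/DDR/QDR links (later generations such as FDR use a different encoding, which is not modelled here).

    #include <stdio.h>

    /* Raw vs. effective (payload) rates for the classic InfiniBand widths.
     * SDR/DDR/QDR lanes use 8b/10b encoding, so only 8/10 of the signalling
     * rate carries data. Values are per direction, in Gbit/s. */
    int main(void)
    {
        const double lane_gbps[] = {2.5, 5.0, 10.0};   /* SDR, DDR, QDR */
        const char  *name[]      = {"SDR", "DDR", "QDR"};
        const int    widths[]    = {1, 4, 8, 12};       /* 1x, 4x, 8x, 12x */

        for (int s = 0; s < 3; s++)
            for (int w = 0; w < 4; w++) {
                double raw       = lane_gbps[s] * widths[w];
                double effective = raw * 8.0 / 10.0;    /* strip 8b/10b overhead */
                printf("%s %2dx: %5.1f Gb/s raw, %5.1f Gb/s data\n",
                       name[s], widths[w], raw, effective);
            }
        return 0;
    }

For example, a 4x SDR link signals at 10 Gb/s but delivers about 8 Gb/s of data per direction, and a 12x SDR link delivers about 24 Gb/s of its 30 Gb/s signalling rate.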

Figure 5. InfiniBand Roadmap (IBTA, 2010)
"Different InfiniBand cables are required for the different performance levels of InfiniBand." (Abts, 2010)
As illustrated in the figure above (Figure 5), higher-bandwidth solutions require cables with more pairs of wire (4x, 8x, 12x) (Voltaire, 2010). The speed designation is based on the number of send and receive pairs in each interface. For example, a 1X InfiniBand cable has one pair of send and one pair of receive wires, while a 12X connection has 12 send and 12 receive pairs (see Figure 6) (Mellanox, 2007; IBTA, 2010).


Figure 6. InfiniBand Link Types (IBTA, 2010)
Most InfiniBand deployments have been based on 4X SDR products; however, the adoption of 4X DDR and QDR products is increasing rapidly. To accommodate various physical designs, the InfiniBand specification defines both copper and fibre optic links. Current operating distances for copper cable are up to 17 metres at 4X SDR and about 10 metres at 4X DDR. Fibre optic links can reach up to 300 metres at SDR and about 100 metres at 4X DDR (Mellanox, 2010).
It is expected that 1X links will provide connections among front-end web or file servers, while 4X links connect second-tier application servers and 12X links are used in mission-critical transaction processing and database management (Abts, 2010).
6. USAGE OF INFINIBAND
Many of today's servers require three different network adapters to operate efficiently and effectively:
- A Gigabit Ethernet card for the LAN
- A Fibre Channel card for the SAN
- A dedicated server-to-server clustering card (either proprietary or another GigE card)
- In some instances, an additional dedicated GigE card so the cluster nodes can connect to a backup network
(Voltaire, 2010)
In a blade server environment, providing each blade system with three cards introduces several issues, including increased power consumption, additional space requirements within the server, higher costs, greater complexity and overheating. A blade server with an InfiniBand backplane eliminates the three cards and the issues associated with them. The backplane further enables RDMA capabilities that allow the blade servers to act as one unit (if necessary) (Hoskins, 2005).
By enabling higher performance within a server (replacing PCI buses with InfiniBand fabrics), InfiniBand brings bandwidth capacities normally reserved for CPU-to-CPU and CPU-to-memory communication outside of the server. It allows physically separated systems to communicate with each other as if they were a single, large symmetric multiprocessing system (Mellanox, 2007; Hoskins, 2005).
6.1 Clustering
InfiniBand's strongest use case is high-performance computing clusters, in any application requiring parallel processing and maximum performance.
A cluster is simply a group of servers connected by load-balancing switches working in parallel to serve a particular application (Voltaire, 2010). InfiniBand simplifies application cluster connections by joining the network interconnect with a managed architecture. InfiniBand's switched architecture provides native cluster connectivity, thus supporting scalability and reliability inside and "out of the box". Devices can be added and multiple paths can be utilized by adding switches to the fabric. High-priority transactions between devices can be processed ahead of lower-priority items through QoS mechanisms built into InfiniBand (Abts, 2010).
A high-performance computational cluster is only as fast as the slowest link between its fundamental workstations, servers, switches and storage units. There are different ways to connect these resources together, but the bottleneck problem must be considered (Elliot, 2009).
6.1.1 Distance Factor
While InfiniBand provides many benefits within the data centre, its distance limitations require close proximity for system and storage connectivity, making it unsuitable for deployment beyond a single site (Meghanathan et al., 2011; Fey, 2010). With the advent of InfiniBand switch extension devices, it is possible to overcome these distance limitations and expand network and storage services to create global data centres (Abts, 2010). InfiniBand extension solutions not only allow InfiniBand to be reliably transported and extended to any point on the globe at full line rate, but also support InfiniBand transport over just about any network technology at line-rate performance (Mellanox, 2010).
6.2 Storage Area Networks
InfiniBand has found some application in network-attached storage (NAS), storage area networks (SANs) and clustered storage systems. Fibre Channel is the dominant interconnect for SANs, although alternatives such as InfiniBand potentially offer price and performance benefits (Meghanathan et al., 2011).
Storage area networks are groups of complex storage systems connected together through managed switches to allow very large amounts of data to be accessed from multiple servers. Today, storage area networks are built using Fibre Channel switches, hubs and servers, which are attached through Fibre Channel host bus adapters (Abts, 2010).
The fabric topology of InfiniBand simplifies communication between storage and server (Fey, 2010). InfiniBand interfaces enable storage systems to attach directly to the existing InfiniBand fabric switches already in use by the cluster, simplifying the network and providing significant cost savings compared to Fibre Channel or gateway-based solutions (Abts, 2010; Mellanox, 2007).
6.3 Virtualisation
InfiniBand is also increasingly used alongside virtualization technologies. Lightly used servers assist others as needed to complete intensive processing jobs instead of delaying those tasks until resources become available; as a result, a smaller number of servers can do a better job, since they are utilized more efficiently (IBTA, 2010).

7. SUMMARY
By providing low latency, high bandwidth and extremely low CPU overhead with an excellent price/performance ratio, InfiniBand has become the most deployed high-speed interconnect, replacing proprietary or low-performance solutions. The InfiniBand Architecture is designed to provide the scalability needed for tens of thousands of nodes and multiple CPU cores per server platform, and to provide efficient utilization of compute processing resources.
As of 2009, InfiniBand has become a popular interconnect for high-performance computing, and its adoption, as seen in the TOP500 supercomputers list, is growing faster than that of Ethernet.

According to the latest TOP500 list of the most powerful systems in the world, the number of InfiniBand-connected systems grew 18% year-over-year (IBTA, 2010), up to 215 systems in November 2010 (Fig. 6) from 182 systems in November 2009.

Fig. 6. Interconnect family share (Source: Top500.org)
According to the latest update of the Top500 list, InfiniBand represents more than 42% of all systems on the TOP500 (Fig. 7). Figure 8 shows that InfiniBand has the highest performance share of interconnect fabrics in the Top500 supercomputers.

Fig. 7. Interconnect chart (number of systems). Fig. 8. Interconnect family (performance). (Source: Top500.org)
Furthermore, this type of interconnect is used in the majority of the Top100 with a 61% share, the Top200 with 58%, and the Top300 with 51% (Top500.org, 2010).


8. CONCLUSION
InfiniBand technology is ideally positioned to solve key problems for supercomputing and for smaller-scale HPC configurations. Developed as a standards-based protocol, InfiniBand was designed to provide high bandwidth and low latency for clusters. With today's virtualized server environments, InfiniBand can greatly reduce the I/O bottleneck and support application scalability.
The decision to use Ethernet or InfiniBand should be based on interconnect performance requirements and cost considerations. InfiniBand leads in performance in both bandwidth and latency, and in general costs less than the 10 Gigabit Ethernet products that have started coming to market. The IBTA has set an aggressive roadmap to roll out EDR (enhanced data rate) and FDR (fourteen data rate) InfiniBand products, which makes InfiniBand a strong competitor to other technologies.





















9. REFERENCES

1. Abts, D., (2010). High Performance Networks: From Supercomputing to Cloud Computing, Morgan & Claypool.
2. Elliot, R., (2009). Practical deployment and management of InfiniBand. [online] Available: http://hyperlinesystems.com/info/cabling_0209_ib/ [accessed 02 February 2011]
3. Fey, D., (2010). Grid-Computing, Springer.
4. Gupta, M., (2002). Storage Area Network Fundamentals, Cisco Press.
5. Hoskins, J., (2005). Exploring IBM Server & Storage Technology: A Layman's Guide to the IBM eServer and TotalStorage Families, Maximum Press.
6. IBM Press Release, (2009). IBM pureScale Technology Redefines Transaction Processing Economics. [online] Available: http://www-03.ibm.com/press/us/en/pressrelease/28593.wss [accessed 29 March 2011]
7. IBTA, (2010). About InfiniBand. [online] Available: http://www.infinibandta.org/content/pages.php?pg=about_us_infiniband [accessed 28 November 2010]
8. IBTA, (2010). Advantages of InfiniBand. [online] Available: http://www.infinibandta.org/content/pages.php?pg=about_us_advantages [accessed 28 November 2010]
9. IBTA, (2010). InfiniBand Market Momentum. [online] Available: http://www.infinibandta.org/content/pages.php?pg=about_us_market [accessed 28 November 2010]
10. IBTA, (2010). InfiniBand Trade Association (IBTA) Announces Updated InfiniBand Roadmap, Projecting Data Speeds of 104Gb/s per 4X Port in 2011. [online] Available: http://www.infinibandta.org/content/pages.php?pg=press_room_item&rec_id=679 [accessed 28 November 2010]
11. IBTA, (2010). Membership Roster. [online] Available: http://www.infinibandta.org/content/pages.php?pg=about_us_roster [accessed 28 November 2010]
12. IBTA, (2010). Number of Systems Connected by InfiniBand Grows 18 Percent Year-Over-Year in the TOP500 List of the World's Most Powerful Computers. [online] Available: http://www.infinibandta.org/content/pages.php?pg=press_room_item&rec_id=717 [accessed 28 November 2010]
13. Kerner, S. M., (2010). InfiniBand Moving to Ethernet. [online] Available: http://www.enterprisenetworkingplanet.com/news/article.php/3879506 [accessed 11 April 2011]
14. Meghanathan, N., Kaushik, B. K. & Nagamalai, D., (2011). Advances in Computer Science and Information Technology: First International Conference on Computer Science and Information Technology, CCSIT 2011, Bangalore, India, January 2-4, 2011. Proceedings, Springer.
15. Mellanox, (2007). InfiniBand for Storage Applications. [online] Available: http://mellanox.com/pdf/whitepapers/WP_2007_Storage_1.0_WP.pdf [accessed 28 November 2010]
16. Mellanox, (2008). The Case for InfiniBand over Ethernet. [online] Available: http://mellanox.com/pdf/whitepapers/WP_The_Case_for_InfiniBand_over_Ethernet.pdf [accessed 28 November 2010]
17. Mellanox, (2010). InfiniBand FAQ. [online] Available: http://mellanox.com/pdf/whitepapers/InfiniBandFAQ_FQ_100.pdf [accessed 28 November 2010]
18. Mellanox, (2010). Introduction to InfiniBand for End Users: Industry-Standard Value and Performance for High Performance Computing and the Enterprise. [online] Available: http://mellanox.com/pdf/whitepapers/Intro_to_IB_for_End_Users.pdf [accessed 28 November 2010]
19. Oracle, (2008). Exadata Release v2. [online] Available: http://www.oracle.com/us/products/database/exadata/index.html [accessed 26 March 2011]
20. Özsu, M. T. & Valduriez, P., (2011). Principles of Distributed Database Systems. 3rd ed., Springer.
21. Palma, J. M. L. M., Daydé, M., Marques, O. & Lopes, J. C., (2011). High Performance Computing for Computational Science, VECPAR 2010: 9th International Conference, Berkeley, CA, USA, June 22-25, 2010, Revised Selected Papers, Springer.
22. Richmond, R., (2001). InfiniBand: Next Generation I/O. [online] Available: http://sysopt.earthweb.com/articles/infiniband/index.html [accessed 01 February 2011]
23. Shanley, T., Winkles, J., et al., (2003). InfiniBand Network Architecture. Addison-Wesley.
24. Voltaire, (2010). Large Scale Clustering with Voltaire InfiniBand HyperScale Technology. [online] Available: http://www.voltaire.com/ResourceCenter/Download?pdf=WTP_PR_hyperscale_WEB-Final.pdf&type=whitepapers&leadsource=Web%20Document%20Download&programname=WHITEPAPER_Hyperscale [accessed 28 November 2010]
25. Voltaire, (2010). Products: InfiniBand. [online] Available: http://www.voltaire.com/Products/InfiniBand [accessed 28 November 2010]
