
Beowulf Clusters

Shane Almeida
April 30, 2007

1 Introduction

NASA increasingly relies on “the availability of advanced computing capability” to support its ever expanding missions [9]. Traditionally, NASA utilized high-end supercomputers or scientific workstations to meet its need for resources capable of handling “science, engineering, data assimilation, space-borne flight and instrumentation control, data archiving and dissemination, signal processing, and simulation.” However, the “high cost, decreasing number of supercomputer vendors, variability of architecture types across vendors and between successive generations of a given vendor, and often inadequate software environments” limited the availability and attractiveness of such machines.

In the 1990s, economies of scale, Moore’s law, and the birth of free software combined to enable the viability of a new approach to high-performance parallel computing. The Beowulf Project, a research effort at the Center of Excellence in Space Data and Information Sciences (CESDIS) at NASA’s Goddard Space Flight Center, introduced commodity-based cluster systems to the world of high-performance computing and defined a new class of computer systems: Beowulf clusters.

2 Clustered Workstations

The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center was designed as one of the most advanced supercomputing environments for a “national client base of scientists working primarily in the field of computational fluid dynamics and related disciplines” [1]. One of the stated goals of the NAS program was “to act as a pathfinder in advanced, large-scale, computer systems capability through systematic incorporation of state-of-the-art improvements in computer hardware and software technologies” [1]. In support of this goal, NAS announced the formation of the NAS Distributed Computing Team (DCT) in 1993. As “interest in tightly coupled workstation clusters as replacements for large systems” had grown, the task of the DCT was to investigate the “possibilities of doing compute intensive tasks on distributed workstations systems” [1].

At the time, there was significant interest from aerospace firms in distributed computing, partly due to the “realities” of making massively parallel processor systems function. Many in the industry believed that clusters could be a viable alternative to supercomputers because of three major drivers [1]:

• The rate of increase in workstation CPU performance was outstripping that of MPPs and traditional supercomputers

• Large memory workstations were available at reasonable prices

• Many corporations had a large installed base of workstations that could be used for distributed computing

The Distributed Computing Team was assembled with the mission to “establish a prototype computing environment across large groups of workstations that allows efficient computing for batch and batch/parallel jobs while not co-opting those systems from their primary role” [1]. The resulting environment, the NAS Distributed Computing Facility (DCF), was a loosely-coupled cluster consisting
of over 320 NAS workstations. The aggregate theoretical computational capacity of the cluster was 2.3 gigaflops, which was approximately 30% of the performance of an entire Cray C90 supercomputer. A conservative estimate of the actual achievable performance was around 700 megaflops. The conclusion of the Distributed Computing Team was that “cluster computing is a legitimate and viable computing environment for compute-intensive work” [1].

3 Commodity Computing

The birth of commodity-based cluster systems can be attributed to “a complex sequence of incremental changes in the complicated nonlinear tradeoff space” [9]. These changes enabled researchers to chart a new direction in parallel computing based entirely on commodity components.

One impetus for the shift was a significant reduction in the number of vendors producing high-performance computers [9]. By the end of the 1990s, only four companies were still producing high-end systems, and those that remained focused primarily on the workstation and server markets and did not consider high-end systems to be their main product line.

At the same time, the performance and capacity of mass-market commodity off-the-shelf (M2COTS) systems began to approach the capabilities of scientific workstations. The processors in both personal computers and workstations were “within the same performance regime” by 1998 [9]. This convergence can be partially attributed to a rapid increase in the floating point performance of personal computers: while workstations improved by a factor of five over three generations, personal computers improved by a factor of eighteen over the same period [9]. This much faster rate of improvement enabled personal computers to approach workstations in raw processing power. In addition, both types of systems began to utilize the same memory and interface standards. Even EIDE-based secondary storage in personal computers started to approach the capacity and performance of the SCSI storage systems used in workstations.

Besides performance, one of the major differences between personal computers and workstations was the software environment [9]. Scientific workstations typically used sophisticated Unix-derived operating systems, while personal computers were limited to DOS. In 1992, the birth of free Unix-like operating systems such as Linux and the BSD derivatives gave personal computer users access to the same software environments used on workstations. In fact, these free operating systems were quite often “equal to or even superior to any of the vendor offered Unix-like environments,” which led to the personal computer replacing Unix workstations as the “platform of choice” in many academic institutions [9].

The combination of these factors resulted in clusters of personal computers becoming viable alternatives to vendor-supplied parallel computers for many science and engineering applications [9]. Expanding on the concepts of Networks of Workstations (NOW) and Clusters of Workstations (COW) introduced by the Numerical Aerodynamic Simulation (NAS) Systems Division [1], Donald Becker and Thomas Sterling began to develop the idea of commodity-based cluster systems. Designed as a cost-effective alternative to large supercomputers, commodity clusters promised performance-to-cost ratios that far exceeded those of traditional supercomputers and scientific workstations.

4 The Beowulf Project

Based on the initial sketches by Becker and Sterling, the Beowulf Project was founded in 1994 at CESDIS under the sponsorship of the High Performance Computing and Communications/Earth and Space Sciences (HPCC/ESS) project. During that time, NASA began to focus on a new paradigm for its missions: “better, faster, cheaper.” The Beowulf Project fit this new paradigm nicely.

4.1 Beowulf Architecture

Driven by a “set of requirements for high performance scientific workstations in the Earth and space sciences community and the opportunity of low cost
computing made available through the PC related mass market of commodity subsystems” [10], the Beowulf Project developed the Beowulf parallel workstation in 1995. The initial configuration consisted of [10]:

• 16 motherboards with Intel x86 processors or equivalent

• 256 Mbytes of DRAM, 16 Mbytes per processor board

• 16 hard disk drives and controllers, one per processor board

• 2 Ethernets (10BASE-T or 10BASE2) and controllers, 2 per processor

• 2 high resolution monitors with video controllers and 1 keyboard

The first prototype coupled 100 MHz Intel DX4 processors with 500 megabyte disk drives, for a combined 8 gigabytes of storage. Linux was used on each of the nodes. The total cost of the system was less than $50,000. Initial benchmarks showed the system sustained 60 megaflops on a compressible fluid dynamics test, which actually compared favorably to an Intel Paragon supercomputer of equivalent size [10]. The fact that a “pile of PCs” achieved performance comparable to a commercial supercomputer validated the concept of cluster computing based on commodity components and defined a new class of computers.

4.2 Beowulf-class Systems

As defined by the Beowulf Project, a Beowulf-class system is a “combination of hardware, software, and usage which while not all encompassing, yields a domain of computing that is scalable, exhibits excellent price-performance, provides a sophisticated and robust environment, and has broad applicability” [9]. The focus of Beowulf-class systems is science and engineering applications.

To reduce costs, Beowulf-class systems exclusively employ commodity technology that has been targeted at the mass market. The benefits of mass production and distribution drive down the prices of components such as processors, motherboards, and networking equipment. The clusters consist of computation elements called nodes. A node consists of a single motherboard and memory bus, but may contain one or more processors. Nodes are equipped with one or more network interfaces and are networked together.

In terms of software, Beowulf-class systems employ Unix-like operating systems, with a preference given to distributions with available source code and little or no cost. An emphasis is placed on source code availability because it “enables custom modifications to facilitate parallel computations” [9]. Linux and BSD-derived operating systems have been used in Beowulf-class systems. On top of the operating system, Beowulf-class systems use message passing execution models. In some cases sockets are used directly, but more frequently standard communication libraries such as MPI or PVM are used [7, 9]. Although shared memory models exist [2], most Beowulf-class systems use distributed memory.
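To make the message-passing model concrete, the sketch below shows the basic pattern in C with MPI: each process owns its own memory, and data moves between nodes only through explicit send and receive calls. This is a minimal illustration, not code from the Beowulf Project or its references; the buffer contents and process count are arbitrary.

    /* Minimal message-passing sketch (C with MPI): rank 0 sends a block of
     * data to rank 1, which receives it.  Build with an MPI compiler wrapper
     * (e.g. mpicc) and launch with two or more processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double buf[4] = {1.0, 2.0, 3.0, 4.0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identity */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes in the job */

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        } else if (rank == 0) {
            /* Each node owns its memory; data moves only by explicit messages. */
            MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status st;
            MPI_Recv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &st);
            printf("rank 1 received %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
        }

        MPI_Finalize();
        return 0;
    }

PVM and direct socket code express the same pattern with different interfaces; in each case the cluster’s job launcher (for example, mpirun) starts one such process per node.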
From the beginning, the community surrounding Beowulf-class systems has stressed that designs and improvements should be shared with the community. To this end, the Beowulf Project started Grendel, a continuously evolving set of software tools [6]. Grendel included kernels, utilities, and add-on packages such as development environments and development libraries.

One advantage of a Beowulf-class system is that no single vendor owns the rights to the product, because many vendors compete in the commodity market using the same standards [6]. This approach to high-performance computing therefore allows users to track technology advances: the best and most recent technology can be readily added to a cluster as it is introduced to the market. The user-controlled nature of Beowulf-class systems also allows “just in place configuration,” which means application-tailored configurations are possible [6].

4.3 Network

The capacity of the interprocessor communication network limits the degree to which multiple processors can be combined to perform a single
application [11], which means that the network is a major constraint in a Beowulf-class system. When the Beowulf Project at Goddard began, the highest performance network equipment available on the commodity market was 10BASE2 and 10BASE-T [10].

Recognizing that the network was a limiting factor in performance, the original Beowulf attempted to increase communication bandwidth by routing packets over multiple, independent channels. A special device driver was written for Linux that allowed the kernel to split network traffic between two or more network interfaces. This resulted in a “bonded” dual network that was completely transparent to the user [10]. The network driver blindly dispatched packets over the available channels in an alternating fashion. Even in the absence of a sophisticated load balancing algorithm, the bonded network achieved 1.7 times the sustained bandwidth of a single channel [11].
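The driver itself is not reproduced in [10] or [11]; the toy C sketch below only illustrates the dispatch policy just described — each outgoing packet is handed to the next channel in turn, with no attention paid to packet size or to how busy that channel already is. The channel count and packet sizes are invented for the example.

    /* Toy illustration of the round-robin dispatch policy described above
     * (not the Linux bonding driver): each outgoing packet is handed to the
     * next channel in turn, regardless of packet size or channel load. */
    #include <stdio.h>

    #define CHANNELS 2

    static int  next_channel = 0;             /* rotates 0, 1, 0, 1, ... */
    static long bytes_sent[CHANNELS] = {0};

    /* Pretend to transmit one packet; return the channel that carried it. */
    static int send_packet(int packet_bytes)
    {
        int ch = next_channel;
        next_channel = (next_channel + 1) % CHANNELS;
        bytes_sent[ch] += packet_bytes;       /* a real driver would queue the frame here */
        return ch;
    }

    int main(void)
    {
        int sizes[] = {1500, 1500, 64, 1500, 512, 1500};   /* invented packet sizes */
        int n = sizeof(sizes) / sizeof(sizes[0]);

        for (int i = 0; i < n; i++)
            printf("packet %d (%4d bytes) -> channel %d\n", i, sizes[i], send_packet(sizes[i]));
        for (int c = 0; c < CHANNELS; c++)
            printf("channel %d carried %ld bytes\n", c, bytes_sent[c]);
        return 0;
    }

That indifference to per-channel load, together with per-packet processing overhead, is one plausible reason the measured gain was 1.7 times rather than a full 2 times.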
However, even with the bonded network, the network was “inadequate under certain loads” [10]. The first report published by the Beowulf Project concluded that “higher bandwidth networks are required” [10], but high-performance, high-reliability, inexpensive, and linearly-scalable networks did not exist and were unlikely to become available “any time soon” [7]. Because of its low cost, Fast Ethernet was seen as an acceptable compromise despite high latency and only modest bandwidth and scalability [7].

4.3.1 Balancing

Large data sets are frequently used in scientific computing, and the working set size often exceeded the storage capacity of conventional scientific workstations of the time. Because the data usually required repeated examination, repeated access to shared file servers over common local area networks was necessary to reach pieces of the data [8]. Because Beowulf-class systems aggregate disk capacity, it became possible to stage large working sets entirely within the cluster and, therefore, eliminate much of the overhead of access to a remote file server.

One important consideration in a Beowulf-class system is the balance between disk throughput and internal network bandwidth. Unfortunately, even with bonded channels, the original Beowulf was unable to balance bandwidth with the performance of the system. The first system was “characterized with a parallel disk bandwidth that significantly exceeds the interprocessor communications bandwidth” [8], resulting in a network-imposed bottleneck in the system.
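A rough back-of-the-envelope check makes the notion of balance concrete: compare the aggregate streaming bandwidth of the node-local disks with the aggregate bandwidth of the interconnect and see which is smaller. The figures in the sketch below are illustrative assumptions, not measurements reported in [8].

    /* Back-of-the-envelope balance check: which is smaller, aggregate disk
     * bandwidth or aggregate network bandwidth?  All figures are illustrative
     * assumptions, not values reported in [8]. */
    #include <stdio.h>

    int main(void)
    {
        int    nodes             = 16;
        double disk_mb_per_s     = 3.0;   /* assumed per-node streaming disk bandwidth (MB/s) */
        double net_mbit_per_s    = 10.0;  /* assumed per-channel Ethernet bandwidth (Mbit/s)  */
        int    channels_per_node = 2;     /* bonded dual network */

        double disk_total = nodes * disk_mb_per_s;                          /* MB/s */
        double net_total  = nodes * channels_per_node * net_mbit_per_s / 8; /* MB/s */

        printf("aggregate disk bandwidth:    %6.1f MB/s\n", disk_total);
        printf("aggregate network bandwidth: %6.1f MB/s\n", net_total);
        printf("%s\n", net_total < disk_total
               ? "network-limited (unbalanced, like the original Beowulf)"
               : "disk-limited or balanced");
        return 0;
    }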
To address the problem, a second cluster, the Beowulf Demonstration system, was constructed using Fast Ethernet instead of 10BASE2 or 10BASE-T [8]. In performance tests, the new system provided enough bandwidth to match the demands of the disk throughput. While its predecessor was unbalanced, the Fast Ethernet-based Beowulf cluster provided a balanced architecture for scientific computing [8]. In fact, the new system was not only better than the original, but it performed in “a new regime” [8]. The authors concluded that “[parallel combination of dual 100 Mbps Ethernet channels] is both necessary and sufficient to achieve interprocessor communications rates comparable to those of the disk array” [8].

4.3.2 Topologies

Although channel-bonded Fast Ethernet had been shown to provide balance, the technology was only beginning to become cost effective in 1996. In addition, the 10BASE2- and 10BASE-T-based networks used in the original Beowulf were “multidrop networks” in which a single network channel connected all nodes. This type of topology meant that only one packet could be transmitted at a time. Even with channel bonding, the number of simultaneous transmissions was limited to a small number because of cost and physical limitations (a single personal computer can only support a few network interface cards).

Because a key factor in the viability of Beowulf-class systems is the achievable interprocessor network bandwidth, the Beowulf Project also studied alternative network topologies as a path to higher performance [5]. Instead of using dual Ethernet channels working in parallel as a virtual channel, the network was segmented into eight separate channels, with each node connected to two channels. Two routing techniques were used: software-based routing and switch-based routing, as shown in Figures 1 and 2 respectively.

Figure 1: Software-Routed Network [5]

Figure 2: Switch-Routed Network [5]

Although these types of networks quadruple the theoretical maximum aggregate bandwidth, there are some constraints on the effective bandwidth. For example, in the software-routed network, nodes can only communicate directly with six neighbors. In the original “multidrop” network, each node was a direct neighbor of the remaining fifteen nodes. To communicate with the nine remote nodes in the software-routed network, an intermediate node must act as a router. Remote communication consumes twice the aggregate bandwidth of local communication because packets must be replicated on two segments [5]. In addition, latency is added because of the overhead of software routing.

Despite the overhead, performance tests showed that a segmented network topology outperformed the original bonded network. For local communication (i.e., within a segment), performance improved by almost a factor of four because of reduced contention on the smaller segments [5]. The remote communication tests showed an improvement of a factor of two over the original network. The fact that performance did not degrade despite the overhead of routing was an important result of the study and proved the importance of alternative network topologies.

The software-routed network yielded performance near that of the switch-based system. However, in situations with light traffic, the software latency became a dominant factor and caused a more significant divergence in performance [5]. With high traffic in the switched topology, the channel-bonded network connections actually reduced throughput on the network: the additional load caused by packet replication resulted in worse performance compared to a standard single-channel network. These results demonstrate that users of Beowulf-class systems must match the type of application with an appropriate network topology and that high theoretical bandwidth does not always result in high effective bandwidth.

4.3.3 Scaling
Although Beowulf-class systems were being installed at universities, laboratories, and industrial sites around the world by 1998, clusters consisting of hundreds or thousands of processors had not been explored. Systems with a few dozen processors were approaching supercomputing performance, but the future of Beowulf-class systems hinged on architectural scalability that would enable large-scale applications to run on such clusters [7].

The Beowulf Project investigated new interconnection techniques to work within the Fast Ethernet constraint. Because switches were the “de facto standard connection component for small Beowulf systems” [7], connecting multiple switches together would allow clusters to grow beyond the port limitations of a single switch. A tree structure similar to Figure 3 was proposed to allow L × P nodes to be connected, where P is the number of ports in the second-level switches and L is both the number of second-level switches and the number of ports on the root switch.

Figure 3: Tree of Switches [7]

Unfortunately, the per-node bandwidth through the top switch in such a system is u/P, where u is the uplink bandwidth. To improve the per-node bandwidth, the inter-switch links could be replaced with higher-bandwidth technology, such as gigabit Ethernet; a number of switches available at the time already included one or two gigabit uplink ports [7]. This approach would certainly improve per-node bandwidth through the root, but it would not offer a path to scalable systems. Because the performance of the nodes themselves continues to increase with newer technology, the network would once again become saturated despite the improved inter-switch bandwidth [7].
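The arithmetic behind that limit is easy to make explicit. In the sketch below the port counts and link speeds are assumptions chosen only for illustration, not the configuration studied in [7]:

    /* Sketch of the scaling arithmetic for the tree of switches in Figure 3.
     * Switch sizes and link speeds here are illustrative assumptions, not the
     * configuration studied in [7]. */
    #include <stdio.h>

    int main(void)
    {
        int    P = 16;               /* ports per second-level switch (nodes per switch) */
        int    L = 16;               /* second-level switches = ports on the root switch */
        double node_link = 100.0;    /* Fast Ethernet link to each node (Mbit/s)         */
        double uplink    = 1000.0;   /* assumed gigabit uplink from each switch to root  */

        int    nodes         = L * P;       /* total nodes the tree can connect          */
        double root_per_node = uplink / P;  /* bandwidth per node through the root, u/P  */

        printf("nodes connected:         %d\n", nodes);
        printf("per-node link bandwidth: %.0f Mbit/s\n", node_link);
        printf("per-node root bandwidth: %.1f Mbit/s\n", root_per_node);
        /* Even with a gigabit uplink, traffic that has to cross the root gets
         * only u/P per node -- below the local link speed -- and the gap
         * widens as P grows, which is why the tree alone does not scale. */
        return 0;
    }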
The second approach investigated was a network that combined point-to-point connections and commodity switches. The novelty of this technique was that the nodes served both as routers and computational engines [7]. Using the nodes as routers eliminates the expense of dedicated hardware routers and also overcomes some limitations of the Ethernet spanning tree algorithm [7].

The drawback of node routing is that additional latency is introduced by the software overhead involved. In addition, the computational performance of the nodes can be affected by the routing operations. However, the ability to route between nodes “opens an enormous class of network topologies for consideration” [7]. One such topology is shown in Figure 4. Each of the five groups consists of four nodes, for a total of twenty nodes. A single group is shown in Figure 5. Each node has two network interfaces and connects to two other groups.

Figure 4: Routed Network Configuration [7]

Figure 5: Detail of One Group [7]

Each of the groups in Figure 4 is actually a small Beowulf-class system. With such a configuration, “bandwidth-limited applications may be able to scale up from smaller Beowulf systems with only moderate degradation in performance” and allow scalability “up to thousands of processors with currently available commodity components” [7]. In performance tests, the software-routed network did not greatly underperform a more expensive switched network, although there was a noticeable degradation of performance. Ultimately, the authors conclude, a routed network may be the necessary choice because of cost and the ability to scale beyond the port limitations of switching equipment [7].

5 Heterogeny

Although Beowulf-class systems use discrete processing nodes, the vast majority of installed systems by 2002 still consisted of homogeneous components: typically a single processor type and at most two different operating systems [4]. Two projects, in particular, examined heterogeny in Beowulf-class systems.

5.1 Stone SouperComputer

In 1996, William Hargrove and Forrest Hoffman were attempting to map national ecoregions at Oak Ridge National Laboratory. Their approach divided the country into millions of square cells and tracked up to twenty-five variables per cell. Although a single workstation could manage the data for a few states at a time, the problem could not be divided and assigned to separate workstations in such a manner because the environmental data had to be compared and processed simultaneously [3]. The computations required a parallel-processing supercomputer.

Because of the success of the original Beowulf parallel workstation and its two successors, Hyglac at the California Institute of Technology and Loki at Los Alamos National Laboratory, Hargrove and Hoffman decided to construct a Beowulf-class system for their research. Their funding request for 64 Pentium II-based personal computers, however, was denied. Faced with a complete lack of funding, the researchers turned to donated surplus personal computers to construct a Beowulf-class system [3].

Built entirely from discarded or donated personal computers¹, the Stone SouperComputer at Oak Ridge National Laboratory became one of the first examples of a heterogeneous configuration [3, 4]. The Stone SouperComputer consisted of 75 Intel 486 nodes, 53 Pentium nodes, and five Alpha nodes, with the Alpha nodes running Digital UNIX and the remaining Intel-based nodes using Linux [3].

¹ Because Oak Ridge incurred no cost for the components, the researchers jokingly argued that, if they could achieve any performance at all from the system, its performance-to-price ratio would be infinite.

One of the problems faced by the Oak Ridge researchers was balancing the processing workload among the nodes [3]. With a naïve approach to workload distribution that evenly divided computational responsibility among the nodes, the faster machines would be idle for long periods of time as they waited for the slower machines to finish processing; the effective performance of every node would drop to that of the slowest node. Instead, the researchers developed an algorithm that allows a faster node to receive additional work after completing its task [3]. With this load balancing in place, the faster machines do the majority of the work, but the slower machines still contribute to system performance.
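The Oak Ridge code is not reproduced in [3], but the idea — hand a node a new piece of work as soon as it finishes the last one, so faster nodes naturally take on more — is the classic self-scheduling, or master/worker, pattern. The MPI sketch below is a generic illustration of that pattern, with an assumed task count and a dummy work function, not the ecoregion code itself.

    /* Self-scheduling (master/worker) sketch of the load-balancing idea
     * described above: a worker gets another task as soon as it returns a
     * result, so faster nodes automatically end up processing more tasks.
     * Generic illustration, not the Oak Ridge implementation; the task count
     * and the dummy work function are placeholders. */
    #include <mpi.h>
    #include <stdio.h>

    #define NTASKS   64
    #define TAG_WORK 1
    #define TAG_STOP 2

    static double do_work(int task)      /* stand-in for the real computation */
    {
        double x = 0.0;
        for (long i = 0; i < 100000L * (task % 7 + 1); i++)
            x += 1.0 / (double)(i + 1);
        return x;
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                 /* master: deal out tasks on demand */
            int next = 0, active = 0;
            double result;
            MPI_Status st;

            /* prime each worker with one task, or stop it if none are left */
            for (int w = 1; w < size; w++) {
                if (next < NTASKS) {
                    MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                    next++; active++;
                } else {
                    MPI_Send(&next, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
                }
            }
            /* whichever worker reports a result first is handed the next task */
            while (active > 0) {
                MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (next < NTASKS) {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                    next++;
                } else {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                    active--;
                }
            }
        } else {                         /* worker: loop until told to stop */
            for (;;) {
                int task;
                MPI_Status st;
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP) break;
                double result = do_work(task);
                MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }

Because the master replies to whichever node reports first, a slow 486 and a faster Pentium are both kept busy the whole time, which matches the behavior described above.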
5.2 Wiglaf

The success of the Stone SouperComputer proved the viability of low levels of heterogeny in Beowulf-class systems, but large-scale heterogeny was still untested. The Wiglaf system, a Beowulf-class system at Brandeis University, began as a project to “keep some old HP workstations usable” but soon evolved into a modestly-sized heterogeneous cluster that enabled research into the viability of diversity in hardware and software environments beyond the scope of the
Stone SouperComputer [4].

Wiglaf used a number of unique hardware architectures across approximately forty nodes [4]. The majority consisted of a mix of x86-based machines, PowerPC systems, and MIPS-based SGI workstations; the remaining nodes were made up of Motorola 68040, Sparc, Alpha, and HP-PA machines. In total, eight unique processor architectures in a variety of speed classes were used.

Because of the architectural differences among the nodes, the Wiglaf cluster at Brandeis used six different operating systems [4]. Although the differences between the Linux variants used by most of the nodes were minimal, the differences between Linux and Irix (used on the SGI workstations) were “quite deep” [4]. Different versions of the C standard library, for example, caused software incompatibility. In addition, the systems sometimes responded differently to basic commands, and some lacked procfs, the pseudo-file system available in some Unix-like operating systems that grants user-level programs access to kernel process information. Subtle differences such as these complicated system management and the gathering of diagnostic information.

Ultimately, these obstacles were overcome. Techniques for handling multiple operating systems and architectures were developed and refined and, “with only minor software alterations,” “machines of eight different processor architectures and six different operating systems... [worked] together as a single Beowulf cluster” [4].

The heterogeneous Wiglaf cluster at Brandeis also enabled an investigation of theories about “useful machine lifespan” [4]. In the case of Wiglaf, the prevailing theory was that older machines, such as SGI workstations or first-generation Pentiums, would offer no advantage to a cluster that already contained high-powered Athlons because of the significant differences in floating point performance between the machines. However, tests showed that even adding a single 100 MHz SGI Indy workstation to a group of Athlons resulted in a noticeable decrease in the run-time of some test calculations [4]. Because newer processors will continue to be added to existing homogeneous systems, the study concluded that “all major Beowulf installations will become heterogeneous” [4].

6 Conclusions

Cluster computing was a natural consequence of several trends in computing in the 1990s. Powerful and inexpensive mass-market microprocessors combined with powerful and freely-available operating systems enhanced the usefulness of personal computers for science and engineering. High-speed, low-cost networking enabled multiple nodes to be connected, resulting in aggregate performance that rivaled more traditional scientific workstations and supercomputers.

Beowulf-class computing “wrests high-level computing away from the privileged few and makes low-cost parallel-processing systems available to those with modest resources. Research groups, high schools, colleges or small businesses can build or buy their own Beowulf clusters, realizing the promise of a supercomputer in every basement” [3].

Clusters in general and Beowulf-class systems in particular continue to be used throughout the world. High-performance networks, increasingly powerful processors, and an abundance of freely available parallel software ensure that clustered computing remains a dominant force in high-performance computing.

References

[1] K. Castagnera, D. Cheng, R. Fatoohi, E. Hook, B. Kramer, C. Manning, J. Musch, C. Niggley, W. Saphir, D. Sheppard, M. Smith, I. Stockdale, S. Welch, R. Williams, and D. Yip. Clustered workstations and their potential role as high speed compute processors, 1994.

[2] J. A. Crawford and C. M. Mobarry. Hrunting: A distributed shared memory system for the BEOWULF Parallel Workstation. In Proceedings of Aerospace Conference, 1998.

[3] W. W. Hargrove, F. M. Hoffman, and T. Sterling.
The do-it-yourself supercomputer. Scientific American, 265(2):72–79, August 2001.

[4] A. Macks. Heterogeny in a Beowulf. In HPCS ’02: Proceedings of the 16th Annual International Symposium on High Performance Computing Systems and Applications, page 42, Washington, DC, USA, 2002. IEEE Computer Society.

[5] C. Reschke, T. Sterling, D. Ridge, D. Savarese, D. J. Becker, and P. Merkey. A design study of alternative network topologies for the Beowulf Parallel Workstation. In HPDC ’96: Proceedings of the High Performance Distributed Computing (HPDC ’96), pages 626–636, 1996.

[6] D. Ridge, D. Becker, P. Merkey, and T. Sterling. Beowulf: Harnessing the power of parallelism in a pile-of-PCs, 1997.

[7] J. K. Salmon, T. Sterling, and C. Stein. Scaling of Beowulf-class distributed systems. In Supercomputing ’98: Proceedings of the 1998 ACM/IEEE Conference on Supercomputing (CDROM), pages 368–369, Washington, DC, USA, 1998. IEEE Computer Society.

[8] T. Sterling, D. Becker, D. Savarese, M. Berry, and C. Reschke. Achieving a balanced low-cost architecture for mass storage management through multiple fast Ethernet channels on the Beowulf Parallel Workstation.

[9] T. Sterling, D. Becker, M. Warren, T. Cwik, J. Salmon, and B. Nitzberg. An assessment of Beowulf-class computing for NASA requirements: Initial findings from the first NASA workshop on Beowulf-class clustered computing, 1998.

[10] T. Sterling, D. Savarese, D. J. Becker, J. E. Dorband, U. A. Ranawake, and C. V. Packer. BEOWULF: A parallel workstation for scientific computation. In Proceedings of the 24th International Conference on Parallel Processing, pages I:11–14, Oconomowoc, WI, 1995.

[11] T. Sterling, D. Saverese, D. J. Becker, B. Fryxell, and K. Olson. Communication overhead for space science applications on the Beowulf Parallel Workstation. In HPDC, pages 23–, 1995.
