
On Death, Taxes, and the Convergence of Peer-to-Peer and Grid Computing


Ian Foster 1,2 and Adriana Iamnitchi 1

1 Department of Computer Science, University of Chicago, Chicago, IL 60615
2 Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439

1 Introduction

It has been reported [25] that life holds but two certainties, death and taxes. And indeed, it does appear that any society—and in the context of this article, any large-scale distributed system—must address both death (failure) and the establishment and maintenance of infrastructure (which we assert is a major motivation for taxes, so as to justify our title!).

Two supposedly new approaches to distributed computing have emerged in the past few years, both claiming to address the problem of organizing large-scale computational societies: peer-to-peer (P2P) [15, 36, 49] and Grid computing [21]. Both approaches have seen rapid evolution, widespread deployment, successful application, considerable hype, and a certain amount of (sometimes warranted) criticism. The two technologies appear to have the same final objective—the pooling and coordinated use of large sets of distributed resources—but are based in different communities and, at least in their current designs, focus on different requirements.

In this article, we take some first steps toward comparing and contrasting P2P and Grid computing. Basing our discussion whenever possible on the characteristics of deployed systems, rather than the unverified claims abundant in the literature, we review their target communities, resources, scale, applications, and technologies. On the basis of this review, we draw some initial conclusions concerning their interrelationship and future evolution. In brief, we argue that (1) both are concerned with the same general problem, namely, the organization of resource sharing within virtual communities; (2) both take the same general approach to solving this problem, namely the creation of overlay structures that coexist with, but need not correspond in structure to, underlying organizational structures; (3) each has made genuine technical advances, but each also has—in current instantiations—crucial limitations, which we characterize (simplistically, but, we believe, usefully) as "Grid computing addresses infrastructure but not yet failure, whereas P2P addresses failure but not yet infrastructure"; and (4) the complementary nature of the strengths and weaknesses of the two approaches suggests that the interests of the two communities are likely to grow closer over time.

2 Defining Terms

The popularity of both Grid and P2P has led to a number of (often contradictory) definitions. We assume here that Grids are sharing environments implemented via the deployment of a persistent, standards-based service infrastructure that supports the creation of, and resource sharing within, distributed communities. Resources can be computers, storage space, sensors, software applications, and data, all connected through the Internet and a middleware software layer that provides basic services for security, monitoring, resource management, and so forth. Resources owned by various administrative organizations are shared under locally defined policies that specify what is shared, who is allowed to share, and under what conditions. We call a set of individuals and/or institutions defined by such sharing rules a virtual organization (VO) [24].
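To make this definition concrete, the following minimal sketch (ours, purely illustrative; the class, field, and participant names are invented and belong to no actual Grid middleware) shows one way a site-local sharing rule, that is, what is shared, who may share it, and under what conditions, might be expressed, and how a VO can be read off as the set of participants bound by such rules.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class SharingPolicy:
    """One site-local rule: what is shared, with whom, under what conditions."""
    resource: str                    # what is shared, e.g. a cluster or dataset
    allowed_members: Set[str]        # who is allowed to share it
    max_cpu_hours_per_day: float     # an example condition
    allowed_hours: range = range(18, 24)   # another condition: off-peak only

    def permits(self, requester: str, cpu_hours: float, hour_of_day: int) -> bool:
        return (requester in self.allowed_members
                and cpu_hours <= self.max_cpu_hours_per_day
                and hour_of_day in self.allowed_hours)

# A virtual organization is then the set of individuals/institutions that the
# participating sites' rules bind together (all names here are hypothetical).
policies = [
    SharingPolicy("cluster-uc", {"alice@hep.example", "bob@cs.example"}, 100.0),
    SharingPolicy("storage-anl", {"alice@hep.example"}, 10.0, range(0, 24)),
]
vo_members = set().union(*(p.allowed_members for p in policies))
print(sorted(vo_members))
print(policies[0].permits("alice@hep.example", cpu_hours=8.0, hour_of_day=20))  # True
```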

We define P2P as a class of applications that takes advantage of resources—storage, cycles, content, human presence—available at the edges of the Internet [49]. Because accessing these decentralized resources means operating in an environment of unstable connectivity and unpredictable IP addresses, P2P design requirements commonly include independence from DNS and significant or total autonomy from central servers. Their implementations frequently involve the creation of overlay networks [53] with a structure independent of that of the underlying Internet. We prefer this definition to the alternative "decentralized, self-organizing distributed systems, in which all or most communication is symmetric," because it encompasses large-scale deployed (albeit centralized) "P2P" systems (such as Napster and SETI@home) where much experience has been gained.

3 Comparing Grids and P2P

Current Grids provide many services to moderate-sized communities [19] and emphasize the integration of substantial resources to deliver nontrivial qualities of service within an environment of at least limited trust. For example, NASA's Information Power Grid links supercomputers at four NASA laboratories [26]. In contrast, current P2P systems deal with many more participants (e.g., Limewire [31] reports hundreds of thousands in Gnutella) but offer limited and specialized services, have been less concerned with qualities of service, and have made few if any assumptions about trust.

These characterizations and examples might not suggest commonality of interest. Nevertheless, we shall argue that in fact the two environments are concerned with the same general problem, namely, resource sharing within VOs that may not overlap with any existing organization. Clearly, the two types of system have both conceptual and concrete distinctions, which we shall identify and illuminate, focusing as noted above on characteristics of deployed systems. We shall show that the distinctions seem to be the result of different target communities and thus different evolutionary paths. Grids have incrementally scaled the deployment of relatively sophisticated services and applications, connecting small numbers of sites into collaborations engaged in complex scientific applications. As system scale increases, Grid developers are now facing and addressing problems relating to autonomic configuration and management. P2P communities developed rapidly around unsophisticated, but popular, services such as file sharing and are now seeking to expand to more sophisticated applications as well as continuing to innovate in the area of large-scale autonomic system management. We expect the definition of persistent and multipurpose infrastructure to emerge as an important theme.

3.1 Target Communities and Incentives

The development and deployment of Grid technologies were motivated initially by the requirements of professional communities needing to access remote resources, federate datasets, and/or pool computers for large-scale simulations and data analyses. Although Grid technologies were initially developed to address the needs of scientific collaborations, commercial interest is growing. Participants in contemporary Grids thus form part of established communities that are prepared to devote effort to the creation and operation of required infrastructure and within which exist some degree of trust, accountability, and opportunities for sanctions in response to inappropriate behavior. At the same time, the dynamic nature of VO existence and membership and the often-limited engagement of VO participants circumscribe the ability to impose solutions at individual sites (where local VO participants may have only limited authority) and VO-specific administration.

In contrast, P2P has been popularized by grass-roots, mass-culture (music) file-sharing and highly parallel computing applications [4, 5] that scale in some instances to hundreds of thousands of nodes. The "communities" that underlie these applications comprise diverse and anonymous individuals with little incentive to act cooperatively. Thus, for example, we find that in file-sharing applications there are few providers and many consumers [2]; the operators of SETI@home [4] devote significant effort to detecting deliberately submitted incorrect results; and people tend to intentionally misreport their resources [46]. Thus, rule-enforcing mechanisms as well as incentives for good behavior must be provided by the system (more in Section 3.5).
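One common response to the result-checking problem just mentioned is redundant execution: the same work unit is sent to several independently chosen participants and the majority answer is accepted. The sketch below is our own simplified illustration of that idea, not SETI@home's actual mechanism; all function and parameter names are assumptions.

```python
import random
from collections import Counter

def verify_by_redundancy(work_unit, workers, replication=3):
    """Send the same work unit to several workers, accept the majority answer,
    and flag any worker whose result disagrees with it."""
    chosen = random.sample(workers, replication)
    results = [w(work_unit) for w in chosen]      # each worker returns its answer
    answer, votes = Counter(results).most_common(1)[0]
    suspects = [w for w, r in zip(chosen, results) if r != answer]
    return (answer if votes > replication // 2 else None), suspects

# Toy workers: three honest participants and one that always submits a bad result.
honest = lambda x: x * x
cheater = lambda x: 0
answer, suspects = verify_by_redundancy(7, [honest, honest, honest, cheater])
print(answer, len(suspects))   # 49, plus one flagged worker whenever the cheater is sampled
```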

The need for other participation models based, for example, on payment, contracts, markets, and licensing is recognized in both systems [28, 30, 54] but is not yet standard practice.

3.2 Resources

In general, Grid systems integrate resources that are more powerful, more diverse, and better connected than the typical P2P resource. A Grid resource might be a cluster, storage system, database, or scientific instrument of considerable value that is administered in an organized fashion according to some well-defined policy. This explicit administration enhances the resource's ability to deliver desired qualities of service and can facilitate, for example, software upgrades, but it can also increase the cost of integrating the resource into a Grid. Diversity in architecture and policy makes the publication of resource properties important [16]. Explicit administration, higher cost of membership, and the stronger community links within scientific VOs mean that resource availability tends to be higher and more uniform. In contrast, P2P systems often deal with intermittent participation and highly variable behavior: for example, in the case of Mojo Nation it is reported [54] that average connection time was only 28% and highly skewed (one sixth of the nodes were always connected).

Grids integrate not only "high-end" resources: desktop systems with variable availability [34] form a major component of many contemporary Grids [12]. However, the ensemble of all such resources within a Grid is not treated as an undifferentiated swarm of global scope. Rather, such resources are aggregated within administrative domains via technologies such as Condor [32, 33] to create local resource pools that are integrated into larger Grids via the same Grid protocols as other computational resources.

Resources in Grids, traditionally from research and educational organizations, tend to be more powerful than the home computers that arguably represent the majority of P2P resources (e.g., 71% of SETI@home systems are home computers [4]). The difference in capabilities between home and work computers is illustrated by the average CPU time per work unit in SETI@home: home computers are 30% slower than work computers (13:45 vs. 10:16 hours per work unit).

3.3 Applications

We see considerable variation in the range and scope of scientific Grid applications, depending on the interest and scale of the community in question. As three (real, not demonstration) examples, we mention the HotPage portal, providing remote access to supercomputer hardware and software [51]; the numerical solution of the long-open "nug30" quadratic optimization problem using hundreds of computers at many sites [7]; and the NEESgrid system that integrates earthquake engineering facilities into a national laboratory [38].

In contrast, P2P systems tend to be vertically integrated solutions to specialized resource-sharing problems: currently deployed systems share either compute cycles or files. Diversification comes from differing design goals, such as scalability [41, 44, 50, 55], anonymity [13], or availability [13, 29].

One significant point of differentiation between applications on deployed Grid and P2P systems is that the former tend to be far more data intensive. For example, a recent analysis of Sloan Digital Sky Survey data [6] involved, on average, 660 MB of input data per CPU hour; the Compact Muon Solenoid [35] data analysis pipeline involves from 60 MB to 72 GB of input data per CPU hour. In contrast, SETI@home moves at least four orders of magnitude less data: a mere 21.25 KB of data per CPU hour. The reason is presumably, in part at least, better network connectivity, which also allows for more flexibility in Grid application design: in addition to loosely coupled applications [1, 10, 12], Grids have been used for numerical simulation [3, 43] and branch-and-bound-based optimization problems [7].
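As a quick back-of-the-envelope check of the gap just quoted (our own arithmetic on the figures cited above):

```python
import math

sloan = 660 * 1024**2    # 660 MB of input data per CPU hour (Sloan Digital Sky Survey analysis)
seti = 21.25 * 1024      # 21.25 KB of data per CPU hour (SETI@home)

ratio = sloan / seti
print(f"ratio: {ratio:,.0f}x")                           # roughly 31,800x
print(f"orders of magnitude: {math.log10(ratio):.1f}")   # about 4.5, i.e. "at least four"
```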

3.4 Scale and Failure

We can measure "scale" in terms of at least two different dimensions: number of participating entities and amount of activity. We discussed above the necessity of dealing with failure—seen as intermittent participation in collaboration—as imposed by resource and community characteristics. Dealing effectively and automatically with failure is both a consequence of and a prerequisite for scaling up in both dimensions.

The community orientation of scientific Grid communities means that they often involve only modest numbers of participants, whether institutions (tens), pooled computers (thousands), or simultaneous users (hundreds). For example, the high energy physics collaboration that shares and analyzes data from the D0 Experiment [17] spans 73 institutions in 18 countries, with thousands of scientists involved, of which hundreds access its resources (data and computers) simultaneously. The amount of activity, on the other hand, can be large. For example, during the first half of 2002, about 300 D0 users submitted 2.7 million requests and retrieved 824 TB of data. A consequence of these community characteristics is that early Grid implementations did not address scalability and self-management as priorities. Thus, while the design of core Grid protocols (as instantiated within the Globus Toolkit [20]) does not preclude scalability, actual deployments often employ centralized components. For example, we find central repositories for shared data, centralized resource management components (such as the Condor Matchmaker [39]), and centralized (and/or hierarchical) information directories. This situation is changing, with much work proceeding on such topics as reliable and scalable management of large job pools, distributed scheduling [40], replica location [11], and discovery [16]. Overall, though, scalable autonomic management remains a goal, not an accomplishment, for Grid computing.

Far larger P2P communities exist: millions of simultaneous nodes in the case of file-sharing systems [31, 47] and several million total nodes in SETI@home. The amount of activity is also significant, albeit, surprisingly, not always larger than in the relatively smaller-scale Grids: 1-2 TB per day in file-sharing systems as of the end of 2001 [47], amounting to less than half the data transferred in D0. This large scale has emerged from (and later motivated work on) robust self-management of large numbers of nodes.

Over time, P2P systems have evolved from first-generation centralized structures (e.g., the Napster index, SETI@home) to second-generation flooding-based systems (e.g., Gnutella file retrieval) and then third-generation systems based on distributed hash tables. First- and second-generation P2P collaborations have been characterized at the level of both individual nodes (behavior, resources [2, 46]) and network properties (topological properties [42], scale [31], traffic [47]), revealing not only general resilience but also unexpected emergent properties. Third-generation systems have been characterized primarily via simulation studies [41, 50] rather than large-scale deployments. Scalable autonomic management clearly has been achieved to a significant extent in P2P, albeit within specific narrow domains.
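To make the generational contrast concrete, we sketch the two query models below. First, a minimal illustration of the second-generation approach: a Gnutella-style search that floods a query to overlay neighbors up to a hop limit (TTL). This is our own toy rendering, not the actual Gnutella protocol; the names and the overlay are invented.

```python
def flood_search(start, holds_item, neighbors, ttl=4):
    """Breadth-first flood of a query over an overlay graph.
    `neighbors` maps each node to its overlay neighbors;
    `holds_item(node)` is True if that node can answer the query."""
    hits, visited, frontier = [], {start}, [start]
    for _ in range(ttl):                 # one round per overlay hop
        nxt = []
        for node in frontier:
            for peer in neighbors[node]:
                if peer not in visited:
                    visited.add(peer)
                    if holds_item(peer):
                        hits.append(peer)
                    nxt.append(peer)
        frontier = nxt
    return hits                          # many nodes see the query wherever the item lives

# Toy overlay whose structure is independent of the underlying network; "d" holds the file.
overlay = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(flood_search("a", lambda n: n == "d", overlay, ttl=2))   # ['d']
```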

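Second, a correspondingly minimal sketch of the third-generation idea: a distributed hash table maps keys and nodes into the same identifier space, so the node responsible for a key can be computed rather than discovered by flooding. Real DHTs such as CAN, Pastry, Chord, and Tapestry [41, 44, 50, 55] route to that node in a logarithmic number of overlay hops; here we simplify to a single sorted lookup, and all names are our own.

```python
import hashlib
from bisect import bisect_right

def ident(name, bits=16):
    """Hash a node name or key into a small circular identifier space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def successor(key, ring):
    """The node responsible for `key`: the first node clockwise from its identifier."""
    ids = sorted(ring)                    # ring maps identifier -> node name
    i = bisect_right(ids, ident(key))
    return ring[ids[i % len(ids)]]        # wrap around the ring

nodes = ["peer-1", "peer-2", "peer-3", "peer-4"]
ring = {ident(n): n for n in nodes}
print(successor("some-file.mp3", ring))   # deterministic answer, computable from any node
```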
3.5 Services and Infrastructure

The technologies used to develop Grid and P2P applications differ both in the specific services provided and in the emphasis placed on persistent, multipurpose infrastructure.

Much work has been expended within the Grid community on both technical and organizational issues associated with creating and operating persistent, multipurpose infrastructure services for authentication [23], authorization [37, 52], discovery [16], resource access, data movement, and so forth. (Perhaps because of the relatively self-contained nature of early Grid communities, less effort has been devoted to managing participation in the absence of trust, via accounting, reputation management, and so forth, although these issues are increasingly discussed.) We use the term persistent to indicate that services are operated by participants over extended periods as critical and often highly available infrastructure elements, like DNS servers; and multipurpose to indicate that the same services are used for multiple purposes (e.g., the same monitoring and discovery service [16] is used by a wide range of higher-level functions, such as computation scheduling, data replication, and fault detection). These services operate as overlays on resources and services maintained by participating institutions. Gatewaying from these overlay structures to local mechanisms and policies is a significant concern.

Many Grid communities use the open source Globus Toolkit [20] as a technology base. Significant effort has been channeled toward the standardization of protocols and interfaces to enable interoperability between different Grid deployments. The Open Grid Services Architecture (OGSA) [22] is such an effort: it integrates Grid and Web services technologies to define a service-oriented architecture within which all services adhere to a set of standard interfaces and behaviors (some required, others optional) for such purposes as service creation, registry, discovery, lifecycle, service data query, notification, and reliable invocation.

P2P systems have tended to focus on the integration of simple resources (individual computers) via protocols designed to provide specific vertically integrated functionality. Thus, for example, Gnutella defines its own protocols for search and network maintenance. Such protocols do, of course, define an infrastructure, but in general (at least for second- and third-generation systems) the persistence properties of such infrastructures are not specifically engineered but are rather emergent properties. Over time, experience with these emergent properties has revealed the need for new services, such as anonymity and censorship resistance [48], incentives for fair sharing and reputation management [14], and result checking [45]—important issues that have not tended to arise to date in Grid computing, because of different underlying trust assumptions.

JXTA [27], XtremWeb [18], and BOINC [9] have been proposed as standard service infrastructure for P2P systems but, to date, have seen little adoption and no interoperability. Thus, for example, a user participating in Gnutella, KaZaA, SETI@home, and FightAIDSatHome must run four independent applications, each coded from scratch and running its own protocols over its own overlay networks. This lack of interest in standard infrastructure may perhaps derive from the simple nature of current P2P applications and the fact that a typical user does not encounter a substantial administrative burden even when running multiple applications. As functionality requirements increase, so presumably will the benefits of standard tooling and infrastructure.

While P2P and Grid service requirements overlap in many regards, there are also important distinctions. First, some services are specific to particular regimes: for example, mechanisms that make up for the inherent lack of incentives for cooperation in P2P. Second, functionality requirements can conflict; for example, Grids might require accountability and P2P systems anonymity. Third, common services may start from different hypotheses, as in the case of trust.

4 Future Directions

Grid and P2P are both concerned with the pooling and coordinated use of resources within distributed communities and are constructed as overlay structures that operate largely independently of institutional relationships. Yet despite these commonalities, there are also major differences in communities, incentives, applications, technologies, resources, and achieved scale.

Nevertheless, we argue that the vision that motivates both Grid and P2P computing—that of a worldwide computer within which access to resources and services can be negotiated as and when needed—will come to pass only if we are successful in developing a technology that combines elements of what we today call both P2P and Grid computing. This technology will address failure (death) at a fundamental level, using scalable self-configuring protocols such as those emerging from P2P research. It will also provide persistent and multipurpose infrastructure (at some cost, justified because amortized over many uses and users), which like DNS and routing tables will be supported in an organized and distributed fashion and will exploit heterogeneity (whether naturally occurring or artificially imposed) in its environment to achieve goals of robustness, performance, and trust. Diverse discovery, negotiation, and maintenance protocols constructed on some common base will be used to deliver a wide spectrum of services and qualities of service.

The Grid and P2P communities are approaching this nirvana from different directions. Over time, the scale of Grid systems is increasing as barriers to participation are lowered and as commercial deployments enable communities based on purely monetary transactions. For example, the International Virtual Data Grid Laboratory [8] is deploying to scores of sites and many thousands of resources both nationally and internationally, and interest is growing in utility computing models [28] that establish sharing relationships based on commercial transactions rather than common interest. Both trends lead to a greater need for scalability, trust negotiation, self-configuration, automatic problem determination, and fault tolerance [11, 16]—areas where P2P has much to offer. OGSA definition work, proceeding within the Global Grid Forum, and an OGSA-based Globus Toolkit 3 are stimulating much work on service definition and implementation.

Meanwhile, developers of P2P systems are becoming increasingly ambitious in their applications and services, as a result of both natural evolution and more powerful and connected resources. We expect that the developers of such systems are going to become increasingly interested in standard infrastructure and tools for service description, discovery, and access, as well as in standardized service definitions and implementations able to support different mixes of logical and physical organizations.

This analysis suggests to us that the Grid and P2P communities have more in common than is perhaps generally recognized and that a broader recognition of key commonalities will tend to accelerate progress in both disciplines—which is why we wrote this article.

Acknowledgments

We are grateful to H. Casanova, K. Ranganathan, and M. Ripeanu for comments. Our work was supported in part by the NSF GriPhyN project.

References

1. Abramson, D., Sosic, R., Giddy, J. and Hall, B. Nimrod: A Tool for Performing Parameterised Simulations Using Distributed Workstations. Proc. 4th IEEE Symp. on High Performance Distributed Computing, 1995.
2. Adar, E. and Huberman, B.A. Free Riding on Gnutella. First Monday, 5 (10). 2000.
3. Allen, G., Dramlitsch, T., Foster, I., Goodale, T., Karonis, N., Ripeanu, M., Seidel, E. and Toonen, B. Supporting Efficient Execution in Heterogeneous Distributed Computing Environments with Cactus and Globus. SC'2001, 2001, ACM Press.
4. Anderson, D.P., Cobb, J., Korpella, E., Lebofsky, M. and Werthimer, D. SETI@home: An Experiment in Public-Resource Computing. Communications of the ACM, 45 (11). 56-61. 2002.
5. Anderson, D.P. and Kubiatowicz, J. The Worldwide Computer. Scientific American (3). 2002.
6. Annis, J., Zhao, Y., Voeckler, J., Wilde, M., Kent, S. and Foster, I. Applying Chimera Virtual Data Concepts to Cluster Finding in the Sloan Sky Survey. SC'2002, 2002.
7. Anstreicher, K., Brixius, N., Goux, J.-P. and Linderoth, J.T. Solving Large Quadratic Assignment Problems on Computational Grids. Mathematical Programming, 91 (3). 563-588. 2002.
8. Avery, P., Foster, I., Gardner, R., Newman, H. and Szalay, A. An International Virtual-Data Grid Laboratory for Data Intensive Science, 2001. www.griphyn.org.
9. Berkeley Open Infrastructure for Network Computing. http://boinc.berkeley.edu, 2002.
10. Casanova, H., Obertelli, G., Berman, F. and Wolski, R. The AppLeS Parameter Sweep Template: User-Level Middleware for the Grid. Proc. SC'2000, 2000.
11. Chervenak, A., Deelman, E., Foster, I., Guy, L., Hoschek, W., Iamnitchi, A., Kesselman, C., Kunszt, P., Ripeanu, M., Schwartzkopf, B., Stockinger, H., Stockinger, K. and Tierney, B. Giggle: A Framework for Constructing Scalable Replica Location Services. SC'02, 2002.
12. Chien, A., Calder, B., Elbert, S. and Bhatia, K. Entropia: Architecture and Performance of an Enterprise Desktop Grid System. Journal of Parallel and Distributed Computing, to appear.
13. Clarke, I., Sandberg, O., Wiley, B. and Hong, T.W. Freenet: A Distributed Anonymous Information Storage and Retrieval System. International Workshop on Designing Privacy Enhancing Technologies, Berkeley, CA, USA, 2000, Springer-Verlag.
14. Cornelli, F., Damiani, E., Capitani, S.D., Paraboschi, S. and Samarati, P. Choosing Reputable Servents in a P2P Network. International World Wide Web Conference, Honolulu, Hawaii, USA, 2002, ACM Press, 376-386.
15. Crowcroft, J. and Pratt, I. Peer to Peer: Peering into the Future. IFIP-TC6 Networks 2002 Conference, Pisa, Italy, 2002, Springer-Verlag.

16. Czajkowski, K., Fitzgerald, S., Foster, I. and Kesselman, C. Grid Information Services for Distributed Resource Sharing. 10th IEEE International Symposium on High Performance Distributed Computing, 2001, IEEE Press, 181-184.
17. DZero Experiment. www-d0.fnal.gov.
18. Fedak, G., Germain, C., Néri, V. and Cappello, F. XtremWeb: A Generic Global Computing System. Workshop on Global Computing on Personal Devices (CCGRID2001), Berlin, Germany, 2001, IEEE Press.
19. Foster, I. The Grid: A New Infrastructure for 21st Century Science. Physics Today, 55 (2). 42-47. 2002.
20. Foster, I. and Kesselman, C. Globus: A Toolkit-Based Grid Architecture. In Foster, I. and Kesselman, C. eds. The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 1999, 259-278.
21. Foster, I. and Kesselman, C. (eds.). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999.
22. Foster, I., Kesselman, C., Nick, J. and Tuecke, S. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration, Globus Project, 2002. www.globus.org/research/papers/ogsa.pdf.
23. Foster, I., Kesselman, C., Tsudik, G. and Tuecke, S. A Security Architecture for Computational Grids. ACM Conference on Computers and Security, 1998, 83-91.
24. Foster, I., Kesselman, C. and Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of High Performance Computing Applications, 15 (3). 200-222. 2001.
25. Franklin, B. Letter to Jean-Baptiste Leroy, 1789.
26. Johnston, W.E., Gannon, D. and Nitzberg, B. Grids as Production Computing Environments: The Engineering Aspects of NASA's Information Power Grid. 8th IEEE Symposium on High Performance Distributed Computing, 1999, IEEE Press.
27. JXTA. www.jxta.org.
28. Kenyon, C. and Cheliotis, G. Architecture Requirements for Commercializing Grid Resources. 11th IEEE International Symposium on High Performance Distributed Computing, 2002.
29. Kubiatowicz, J., Bindel, D., Chen, Y., Czerwinski, S., Eaton, P., Geels, D., Gummadi, R., Rhea, S., Weatherspoon, H., Weimer, W., Wells, C. and Zhao, B. OceanStore: An Architecture for Global-Scale Persistent Storage. 9th Intl. Conf. on Architectural Support for Programming Languages and Operating Systems, 2000.
30. Lai, C., Medvinsky, G. and Neuman, B.C. Endorsements, Licensing, and Insurance for Distributed System Services. Proc. 2nd ACM Conference on Computer and Communication Security, 1994.
31. Limewire. www.limewire.com.
32. Litzkow, M., Livny, M. and Mutka, M. Condor - A Hunter of Idle Workstations. Proc. 8th Intl Conf. on Distributed Computing Systems, 1988, 104-111.
33. Livny, M. High-Throughput Resource Management. In Foster, I. and Kesselman, C. eds. The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 1999, 311-337.
34. Mutka, M. and Livny, M. The Available Capacity of a Privately Owned Workstation Environment. Performance Evaluation, 12 (4). 269-284. 1991.
35. Negra, M.D. CMS Collaboration, CERN, 1994. http://cmsinfo.cern.ch/Welcome.html.
36. Oram, A. (ed.). Peer-to-Peer: Harnessing the Power of Disruptive Technologies. O'Reilly, 2001.
37. Pearlman, L., Welch, V., Foster, I., Kesselman, C. and Tuecke, S. A Community Authorization Service for Group Collaboration. IEEE 3rd International Workshop on Policies for Distributed Systems and Networks, 2002.
38. Prudhomme, T., Kesselman, C., Finholt, T., Foster, I., Parsons, D., Abrams, D., Bardet, J.-P., Pennington, R., Towns, J., Butler, R., Futrelle, J., Zaluzec, N. and Hardin, J. NEESgrid: A Distributed Virtual Laboratory for Advanced Earthquake Experimentation and Simulation: Scoping Study, NEESgrid, 2001. www.neesgrid.org.
39. Raman, R., Livny, M. and Solomon, M. Matchmaking: Distributed Resource Management for High Throughput Computing. IEEE International Symposium on High Performance Distributed Computing, 1998, IEEE Press.
40. Ranganathan, K. and Foster, I. Decoupling Computation and Data Scheduling in Distributed Data Intensive Applications. International Symposium on High Performance Distributed Computing, Edinburgh, UK, 2002.
41. Ratnasamy, S., Francis, P., Handley, M., Karp, R. and Shenker, S. A Scalable Content-Addressable Network. SIGCOMM Conference, 2001, ACM.
42. Ripeanu, M., Foster, I. and Iamnitchi, A. Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design. Internet Computing, 6 (1). 50-57. 2002.
43. Ripeanu, M., Iamnitchi, A. and Foster, I. Performance Predictions for a Numerical Relativity Package in Grid Environments. International Journal of High Performance Computing Applications, 15 (4). 2001.
44. Rowstron, A.I.T. and Druschel, P. Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems. Middleware, 2001, 329-350.
45. Sarmenta, L.F.G. Sabotage-Tolerance Mechanisms for Volunteer Computing Systems. Future Generation Computer Systems, 18 (4). 561-572. 2002.
46. Saroiu, S., Gummadi, P.K. and Gribble, S.D. A Measurement Study of Peer-to-Peer File Sharing Systems. Proceedings of Multimedia Computing and Networking (MMCN), San Jose, CA, USA, 2002.
47. Sen, S. and Wang, J. Analyzing Peer-to-Peer Traffic Across Large Networks. Internet Measurement Workshop, Marseille, France, 2002.
48. Serjantov, A. Anonymizing Censorship Resistant Systems. 1st International Workshop on Peer-to-Peer Systems (IPTPS'02), Cambridge, MA, 2002, Springer-Verlag.
49. Shirky, C. What Is P2P... and What Isn't? www.openp2p.com/pub/a/p2p/2000/11/24/shirky1-whatisp2p.html, 2000.
50. Stoica, I., Morris, R., Karger, D., Kaashoek, F. and Balakrishnan, H. Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications. SIGCOMM Conference, San Diego, CA, USA, 2001, ACM Press.
51. Thomas, M.P., Mock, S. and Boisseau, J. Development of Web Toolkits for Computational Science Portals: The NPACI HotPage. Ninth IEEE International Symposium on High Performance Distributed Computing, 2000.
52. Thompson, M., Johnston, W., Mudumbai, S., Hoo, G., Jackson, K. and Essiari, A. Certificate-based Access Control for Widely Distributed Resources. 8th Usenix Security Symposium, 1999.
53. Touch, J. Overlay Networks. Computer Networks, 3 (2-3). 115-116. 2001.
54. Wilcox-O'Hearn, B. Experiences Deploying a Large-Scale Emergent Network. 1st International Workshop on Peer-to-Peer Systems (IPTPS'02), Cambridge, MA, 2002, Springer-Verlag.
55. Zhao, B.Y., Kubiatowicz, J.D. and Joseph, A.D. Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing, UC Berkeley, 2001.
