
http://weblogs.foxite.com/andykramek/archive/2008/09/29/6935.aspx

Introduction to Client Server Architecture


I am often surprised to find that many developers today still do not really understand what is meant by a "Client Server" architecture or what the difference between "tiers" and "layers" is. So I thought I would post the following explanation which, if not universally accepted, has served me well over the past 10 years.

Evolution of Client/Server Systems


Computer system architecture has evolved along with the capabilities of the hardware used to run applications. The simplest (and earliest) of all was the "Mainframe Architecture", in which all operations and functionality are contained within the central (or "host") computer. Users interacted with the host through 'dumb' terminals which transmitted instructions, by capturing keystrokes, to the host and displayed the results of those instructions for the user. Such applications were typically character based and, despite the relatively large computing power of the mainframe hosts, were often relatively slow and cumbersome to use because of the need to transmit every keystroke back to the host.

The introduction and widespread acceptance of the PC, with its own native computing power and graphical user interface, made it possible for applications to become more sophisticated, and the expansion of networked systems led to the second major type of system architecture, "File Sharing". In this architecture the PC (or "workstation") downloads files from a dedicated "file server" and then runs the application (including the data) locally. This works well when the shared usage is low, update contention is low, and the volume of data to be transferred is low. However, it rapidly became clear that file sharing choked as networks grew larger and the applications running on them grew more complex, requiring ever larger amounts of data to be transmitted back and forth.

The problems associated with handling large, data-centric applications over file-sharing networks led directly to the development of the Client/Server architecture in the early 1980s. In this approach the file server is replaced by a database server (the "Server") which, instead of merely transmitting and saving files to its connected workstations (the "Clients"), receives and actually executes requests for data, returning only the result sets to the client. By providing a query response rather than a total file transfer, this architecture significantly decreases network traffic. This allowed for the development of applications in which multiple users could update data through GUI front ends connected to a single shared database. Typically either Structured Query Language (SQL) or Remote Procedure Calls (RPCs) are used to communicate between the client and server. There are several variants of the basic Client/Server architecture, as described below.

The Two Tier Architecture


In a two tier architecture the workload is divided between the server (which hosts the database) and the client (which hosts the User Interface). In reality these are normally located on separate physical machines but there is no absolute requirement for this to be the case. Providing that the tiers are logically separated they can be hosted (e.g. for development and testing) on the same computer (Figure 1).

Figure 1: Basic Two-Tier Architecture

The distribution of application logic and processing in this model was, and is, problematic. If the client is 'smart' and hosts the main application processing, then there are issues associated with distributing, installing and maintaining the application because each client needs its own local copy of the software. If the client is 'dumb', the application logic and processing must be implemented in the database, and it then becomes totally dependent on the specific DBMS being used. In either scenario, each client must also have a log-in to the database and the necessary rights to carry out whatever functions are required by the application.

However, the two-tier client/server architecture proved to be a good solution when the user population is relatively small (up to about 100 concurrent users), but it rapidly proved to have a number of limitations:

Performance: As the user population grows, performance begins to deteriorate. This is the direct result of each user having their own connection to the server, which means that the server has to keep all these connections live (using "keep-alive" messages) even when no work is being done.

Security: Each user must have their own individual access to the database, and be granted whatever rights may be required in order to run the application. Apart from the security issues that this raises, maintaining users rapidly becomes a major task in its own right. This is especially problematic when new features/functionality have to be added to the application and user rights need to be updated.

Capability: No matter what type of client is used, much of the data processing has to be located in the database, which means that it is totally dependent upon the capabilities, and implementation, provided by the database manufacturer. This can seriously limit application functionality because different databases support different functionality, use different programming languages and even implement such basic tools as triggers differently.

Portability: Since the two-tier architecture is so dependent upon the specific database implementation, porting an existing application to a different DBMS becomes a major issue. This is especially apparent in the case of vertical market applications where the choice of DBMS is not determined by the vendor.

Having said that, this architecture found a new lease of life in the Internet age. It can work well in a disconnected environment where the UI is essentially dumb (i.e. a browser). However, in many ways this implementation harks back to the original Mainframe Architecture and indeed a browser-based, two-tier application can (and usually does) suffer from many of the same issues.
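To make the two-tier split concrete, here is a minimal, hypothetical "fat client" sketch in Python. The standard-library sqlite3 module stands in for a networked DBMS, and the database path, table, and column names are invented; the point is only that the client owns its own connection and embeds the business query itself, which is exactly why distribution, security and portability become per-client problems.

```python
# Hypothetical fat-client query in a two-tier design: the client connects straight to the
# database and ships SQL itself. sqlite3 is only a stand-in for a networked DBMS here,
# and the table/column names are invented for illustration.
import sqlite3

def customers_in_region(db_path: str, region: str) -> list:
    conn = sqlite3.connect(db_path)      # every client holds (and must be granted) its own connection
    try:
        cur = conn.execute(
            "SELECT name, credit_rating FROM customers WHERE region = ?",
            (region,),
        )
        return cur.fetchall()            # the query logic lives in the client, not the server
    finally:
        conn.close()
```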

The Three Tier Architecture


In an effort to overcome the limitations of the two-tier architecture outlined above, an additional tier was introduced creating what is now the standard Three-Tier Client/Server model. The purpose of the additional tier (usually referred to as the "middle" or "rules" tier) is to handle application execution and database management. As with the two-tier model, the tiers can either be implemented on different physical machines (Figure 2), or multiple tiers may be co-hosted on a single machine.

Figure 2: Basic Three Tier Architecture

By introducing the middle tier, the limitations of the two-tier architecture are largely removed and the result is a much more flexible, and scalable, system. Since clients now connect only to the application server, not directly to the data server, the load of maintaining connections is removed, as is the requirement to implement application logic within the database. The database can now be relegated to its proper role of managing the storage and retrieval of data, while application logic and processing can be handled in whatever application is most appropriate for the task. The development of operating systems to include such features as connection pooling, queuing and distributed transaction processing has enhanced (and simplified) the development of the middle tier.

Notice that, in this model, the application server does not drive the user interface, nor does it actually handle data requests directly. Instead it allows multiple clients to share business logic, computations, and access to the data retrieval engine that it exposes. This has the major advantage that the client needs less software and no longer needs a direct connection to the database, so there is less security to worry about. Consequently applications are more scalable, and support and installation costs are significantly less for a single server than for maintaining applications directly on a desktop client or even a two-tier design.

There are many variants of the basic three-tier model designed to handle different application requirements. These include distributed transaction processing (where multiple DBMSs are updated in a single transaction), message-based applications (where applications do not communicate in real time) and cross-platform interoperability (Object Request Broker or "ORB" applications).
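As an illustration of that division of labour, the sketch below shows a toy middle tier in Python. The endpoint path, port, table and database file are all invented for the example, and http.server plus sqlite3 stand in for a real application server and DBMS: clients call the HTTP endpoint, and only the rules tier ever sees SQL or database credentials.

```python
# A minimal, hypothetical "rules" tier: clients send HTTP requests, the middle tier
# validates them and talks to the data tier on their behalf. The path, port, database
# file and table are invented; http.server and sqlite3 stand in for real middleware.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "app.db"   # assumed data tier, reachable only from this server

class RulesTier(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /customer?id=42 -- the middle tier enforces the rule and builds the query
        _, _, query = self.path.partition("?")
        params = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
        cust_id = params.get("id", "")
        if not cust_id.isdigit():
            self.send_error(400, "id must be numeric")   # business rule lives here, not in the client
            return
        with sqlite3.connect(DB_PATH) as conn:
            row = conn.execute(
                "SELECT name, balance FROM customers WHERE id = ?", (cust_id,)
            ).fetchone()
        body = json.dumps({"name": row[0], "balance": row[1]} if row else None).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RulesTier).serve_forever()
```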

The Multi or n-Tier Architecture


With the growth of Internet-based applications, a common enhancement of the basic three-tier client/server model has been the addition of extra tiers; such an architecture is referred to as 'n-tier' and typically comprises four tiers (Figure 3), where the Web Server is responsible for handling the connection between client browsers and the application server. The benefit is simply that multiple web servers can connect to a single application server, thereby handling more concurrent users.

Figure 3: n-Tier Architecture

Tiers vs Layers
These terms are often (regrettably) used interchangeably. However, they really are distinct and have definite meanings. The basic difference is that Tiers are physical, while Layers are logical. In other words, a tier can theoretically be deployed independently on a dedicated computer, while a layer is a logical separation within a tier (Figure 4). The typical three-tier model described above normally contains at least seven layers, split across the three tiers.

The key thing to remember about a layered architecture is that requests and responses each flow in one direction only and that layers may never be "skipped". Thus, in the model shown in Figure 4, the only layer that can address Layer "E" (the Data Access Layer) is Layer "D" (the Rules Layer). Similarly, Layer "C" (the Application Validation Layer) can only respond to requests from Layer "B" (the Error Handling Layer).

Figure 4: Tiers are divided into logical Layers
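A small sketch may help make the "no skipping" rule concrete. The layer names below are modelled loosely on Figure 4 but are otherwise invented; the point is that each layer holds a reference only to the layer directly beneath it, so a request can only descend one layer at a time and the response retraces the same path.

```python
# Hypothetical layered stack illustrating the one-way, no-skipping rule of Figure 4.
class DataAccessLayer:                        # layer "E"
    def handle(self, request: str) -> str:
        return f"rows for: {request}"

class RulesLayer:                             # layer "D" -- the only layer allowed to call "E"
    def __init__(self, below: DataAccessLayer):
        self._below = below
    def handle(self, request: str) -> str:
        return self._below.handle(request)    # apply business rules, then delegate downward

class ValidationLayer:                        # layer "C" -- can only talk to "D"
    def __init__(self, below: RulesLayer):
        self._below = below
    def handle(self, request: str) -> str:
        if not request:
            raise ValueError("empty request") # reject bad input before it travels further down
        return self._below.handle(request)

# Requests descend one layer at a time; responses climb back up the same way.
stack = ValidationLayer(RulesLayer(DataAccessLayer()))
print(stack.handle("customer 42"))
```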

Published Monday, September 29, 2008 3:08 PM by andykr. Filed Under: Data Management

Client/Server Fundamentals
February 8, 1999

Introduction to Client/Server Fundamentals

2.1 Introduction


Simply stated, an object-oriented client/server Internet (OCSI) environment provides the IT infrastructure (i.e., middleware, networks, operating systems, hardware) that supports the OCSI applications, the breed of distributed applications of particular interest to us. The purpose of this chapter is to explore this enabling infrastructure before digging deeply into the details of application engineering and reengineering in Parts II and III of this book, respectively. Specifically, we review the following three core technologies of the modern IT infrastructure:
- Client/server, which allows application components to behave as service consumers (clients) and service providers (servers). See Section 2.2.
- Internet, for access to application components (e.g., databases, business logic) located around the world from Web browsers. See Section 2.3.
- Object-orientation, to let applications behave as objects that can be easily created, viewed, used, modified, reused, and deleted over time. See Section 2.4.

In addition, we will attempt to answer the following questions:


- How are the key technologies combined to support the modern applications (Section 2.5)?
- What type of general observations can be made about the state of the art, state of the market, and state of the practice in OCSI environments (Sections 2.6, 2.7, 2.8)?
- What are the sources of additional information on this topic (Section 2.13)?

The information in this chapter is intentionally high level and is an abbreviation of the companion book [Umar 1997] that discusses the infrastructure issues, in particular the middleware, in great detail. Figure 2.1 will serve as a general framework for discussion. This framework, introduced in Chapter 1, illustrates the role of the following main building blocks of OCSI environments:
- Client and server processes (applications) that represent the business logic as objects that may reside on different machines and can be invoked through Web services
- Middleware that supports and enables the OCSI applications (see the sidebar "What Is Middleware?")
- Network services that transport the information between remote computers
- Local services (e.g., database managers and transaction managers)
- Operating systems and computing hardware to provide the basic scheduling and hardware services

We will quickly scan these building blocks and illustrate their interrelationships in multivendor environments that are becoming common to support enterprisewide distributed applications.

Figure 2.1 Object-Oriented Client/Server Internet Environments

2.2 Client/Server Fundamentals


2.2.1 Definitions

The client/server model is a concept for describing communications between computing processes that are classified as service consumers (clients) and service providers (servers). Figure 2.2 presents a simple C/S model. The basic features of a C/S model are:

1. Clients and servers are functional modules with well defined interfaces (i.e., they hide internal information). The functions performed by a client and a server can be implemented by a set of software modules, hardware components, or a combination thereof. Clients and/or servers may run on dedicated machines, if needed. It is unfortunate that some machines are called "servers." This causes confusion (try explaining to an already bewildered user that a client's software is running on a machine called "the server"). We will avoid this usage as much as possible.

2. Each client/server relationship is established between two functional modules when one module (client) initiates a service request and the other (server) chooses to respond to the service request. Examples of service requests (SRs) are "retrieve customer name," "produce net income in last year," etc. For a given service request, clients and servers do not reverse roles (i.e., a client stays a client and a server stays a server). However, a server for SR R1 may become a client for SR R2 when it issues requests to another server (see Figure 2.2). For example, a client may issue an SR that may generate other SRs.

3. Information exchange between clients and servers is strictly through messages (i.e., no information is exchanged through global variables). The service request and additional information are placed into a message that is sent to the server. The server's response is similarly another message that is sent back to the client. This is an extremely crucial feature of the C/S model.

The following additional features, although not required, are typical of a client/server model:

4. Messages exchanged are typically interactive. In other words, the C/S model does not support an off-line process. There are a few exceptions. For example, message queuing systems allow clients to store messages on a queue to be picked up asynchronously by the servers at a later stage.

5. Clients and servers typically reside on separate machines connected through a network. Conceptually, clients and servers may run on the same machine or on separate machines. In this book, however, our primary interest is in distributed client/server systems where clients and servers reside on separate machines.

The implication of the last two features is that C/S service requests are real-time messages that are exchanged through network services. This feature increases the appeal of the C/S model (i.e., flexibility, scalability) but introduces several technical issues such as portability, interoperability, security, and performance.

Figure 2.2 Conceptual Client/Server Model
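To ground feature 3 above (information exchanged strictly through messages), here is a minimal Python sketch using the standard socket module. The port number and request text are arbitrary choices; client and server share no variables, and the service request and its response each travel as a message.

```python
# Minimal request/response exchange: no shared variables, only messages over a connection.
import socket
import threading

HOST, PORT = "localhost", 9090        # arbitrary choices for the sketch

srv = socket.create_server((HOST, PORT))

def serve_one() -> None:
    conn, _ = srv.accept()            # the server waits; it never initiates the exchange
    with conn:
        request = conn.recv(1024).decode()                  # e.g. "retrieve customer name"
        conn.sendall(f"response to: {request}".encode())    # the reply is itself a message

threading.Thread(target=serve_one, daemon=True).start()

with socket.create_connection((HOST, PORT)) as sock:        # the client initiates the service request
    sock.sendall(b"retrieve customer name")
    print(sock.recv(1024).decode())
srv.close()
```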

What Is Middleware?
Middleware is a crucial component of modern IT infrastructure. We will use the following definition of middleware in this book: Definition: Middleware is a set of common business-unaware services that enable applications and end users to interact with each other across a network. In essence, middleware is the software that resides above the network and below the business-aware application software. The services provided by these routines are available to the applications through application programming interfaces (APIs) and to the human users through commands and/or graphical user interfaces (GUIs). A common example of middleware is e-mail because it provides business-unaware services that reside above networks and interconnect users (and in several cases applications also). Other examples are groupware products (e.g., Lotus Notes), Web browsers, Web gateways, SQL gateways, Electronic Data Interchange (EDI) packages, remote procedure call (RPC) packages, and "distributed object servers" such as CORBA. We will briefly discuss these middleware components in this chapter.

Client/server applications, an area of vital importance to us, employ the C/S model to deliver business-aware functionality. C/S applications provide a powerful and flexible mechanism for organizations to design applications that fit their business needs. For example, an order processing application can be implemented using the C/S model by keeping the order processing databases (customers, products) at the corporate office and developing/customizing the order processing logic and user interfaces for the different stores that initiate orders. In this case, order processing clients may reside on store computers to perform initial checking and preprocessing, and the order processing servers may exist at the corporate mainframe to perform final approval and shipping. Due to the critical importance of C/S applications to business enterprises of the 1990s and beyond, we will focus on C/S applications in this book.

2.2.2 Client/Server: A Special Case of Distributed Computing

Figure 2.3 shows the interrelationships between distributed computing and client/server models. Conceptually, the client/server model is a special case of the distributed-computing model.

Figure 2.3 Interrelationships between Computing Models

A Distributed Computing System (DCS) is a collection of autonomous computers interconnected through a communication network to achieve business functions. Technically, the computers do not share main memory so that the information cannot be transferred through global variables. The information (knowledge) between the computers is exchanged only through messages over a network. The restriction of no shared memory and information exchange through messages is of key importance because it distinguishes between DCS and shared memory multiprocessor computing systems. This definition requires that the DCS computers are connected through a network that is responsible for the information exchange between computers. The definition also requires that the computers have to work together and cooperate with each other to satisfy enterprise needs (see Umar [1993, Chapter 1] for more discussion of DCS). Distributed computing can be achieved through one or more of the following:
- File transfer model
- Client/server model
- Peer-to-peer model

The file transfer model is one of the oldest models used to achieve distributed computing at a very minimal level. Basically, programs at different computers communicate with each other by using file transfer. In fact, e-mail is a special case of file transfer. Although this is a very old and extremely limited model of distributed computing, it is still used to support loosely coupled distributed computers. For example, media clips, news items, and portions of corporate databases are typically exchanged between remote computers through file transfers, and e-mail is used frequently to exchange files through embeddings and attachments.

The C/S model is state of the market and state of the practice for distributed computing at the time of this writing. The C/S model, as stated previously, allows application processes at different sites to interactively exchange messages and is thus a significant improvement over the file transfer model. Initial versions of the C/S model utilized the remote procedure call paradigm, which extends the scope of a local procedure call. At present, the C/S model is increasingly utilizing the distributed objects paradigm, which extends the scope of the local object paradigm (i.e., the application processes at different sites are viewed as distributed objects).

The peer-to-peer model allows the processes at different sites to invoke each other. The basic difference between C/S and peer-to-peer is that in a peer-to-peer model the interacting processes can be a client, a server, or both, while in a C/S model one process assumes the role of a service provider and the other assumes the role of a service consumer. Peer-to-peer middleware is used to build peer-to-peer distributed applications. In this book, we will primarily concentrate on the C/S model. The file transfer model is older and does not need additional discussion. We will also not dwell on the peer-to-peer model because peer-to-peer applications are not state of the market and state of the practice at the time of this writing.

2.2.3 Client/Server Architectures

The client/server architecture provides the fundamental framework that allows many technologies to plug in for the applications of the 1990s and beyond. Clients and servers typically communicate with each other by using one of the following paradigms (see [Umar 1997, Chapter 3] for detailed discussion and analysis of these and other paradigms):

Remote Procedure Call (RPC). In this paradigm, the client process invokes a remotely located procedure (a server process); the remote procedure executes and sends the response back to the client. The remote procedure can be simple (e.g., retrieve time of day) or complex (e.g., retrieve all customers from Chicago who have a good credit rating). Each request/response of an RPC is treated as a separate unit of work, thus each request must carry enough information needed by the server process. RPCs are supported widely at present.

Remote Data Access (RDA). This paradigm allows client programs and/or end-user tools to issue ad hoc queries, usually SQL, against remotely located databases. The key technical difference between RDA and RPC is that in an RDA the size of the result is not known, because the result of an SQL query could be one row or thousands of rows. RDA is heavily supported by database vendors.

Queued Message Processing (QMP). In this paradigm, the client message is stored in a queue and the server works on it when free. The server stores ("puts") the response in another queue and the client actively retrieves ("gets") the responses from this queue. This model, used in many transaction processing systems, allows the clients to asynchronously send requests to the server. Once a request is queued, the request is processed even if the sender is disconnected (intentionally or due to a failure). QMP support is becoming commonly available.

Initial implementations of the client/server architecture were based on the "two-tiered" architectures shown in Figure 2.4 (a) through Figure 2.4 (e) (these architectural configurations are known as the "Gartner Group" configurations). The first two architectures (Figure 2.4 (a) and Figure 2.4 (b)) are used in many presentation-intensive applications (e.g., X Window, multimedia presentations) and to provide a "face lift" to legacy applications by building a GUI interface that invokes the older text-based user interfaces of legacy applications. Figure 2.4 (c) represents the distributed application program architecture in which the application programs are split between the client and server machines, and they communicate with each other through remote procedure call (RPC) or queued messaging middleware. Figure 2.4 (d) represents the remote data architecture in which the remote data is typically stored in a "SQL server" and is accessed through ad hoc SQL statements sent over the network. Figure 2.4 (e) represents the case where the data exist at the client as well as the server machines (distributed data architecture).

Figure 2.4 Traditional Client/Server Architectures
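As a concrete, hedged illustration of the RPC paradigm described above, the sketch below uses Python's built-in XML-RPC modules as a stand-in for RPC middleware; the procedure name, port, and returned data are invented, and this is not the specific RPC middleware the text discusses.

```python
# RPC paradigm sketch: the client invokes a remotely located procedure as if it were local.
# xmlrpc is only a convenient stand-in for RPC middleware; names and port are invented.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def good_credit_customers(city: str) -> list:
    # Server-side procedure: each request/response is a self-contained unit of work.
    return [f"{city} customer #{i}" for i in range(3)]

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False, allow_none=True)
server.register_function(good_credit_customers)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call looks like a local function call, but it crosses the network.
proxy = ServerProxy("http://localhost:8000", allow_none=True)
print(proxy.good_credit_customers("Chicago"))
server.shutdown()
```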

Although a given C/S application can be architected in any of these configurations, the remote data and distributed program configurations are used heavily at present. The remote data configuration is currently very popular for departmental applications and is heavily supported by numerous database vendors (as a matter of fact, this configuration is used to represent typical two-tiered architectures that rely on remote SQL). Most data warehouses also use a remote data configuration because the data warehouse tools can reside on user workstations and issue remote SQL calls to the data warehouse (we will discuss data warehouses in Chapter 10). However, the distributed programs configuration is very useful for enterprisewide applications, because the application programs on both sides can exchange information through messages.

2.2.4 OSF DCE: A Client/Server Environment

The Open Software Foundation (OSF) Distributed Computing Environment (DCE) packages and implements "open" and de facto standards into an environment for distributed client/server computing. OSF DCE, also commonly known as DCE, is currently available on a wide range of computing platforms such as UNIX, OS/2, and IBM MVS. Figure 2.5 shows a conceptual view of OSF DCE. The applications are at the highest level and the OSI transport services are at the lowest level in DCE (at present, DCE uses the TCP/IP transport services). The security and management functions are built at various levels and are applicable to all components. Distributed file access to get at remotely located data, naming services for accessing objects across the network, remote procedure calls (RPCs), and presentation services are at the core of DCE. As can be seen, RPCs are at the core of DCE. Additional information about DCE can be found in Rosenberry [1993].

Figure 2.5 OSF DCE

http://en.wikipedia.org/wiki/Distributed_computing

Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.[1] Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers.

The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area.[3] The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.[4] While there is no single definition of a distributed system,[5] the following defining properties are commonly used:

- There are several autonomous computational entities, each of which has its own local memory.[6]
- The entities communicate with each other by message passing.[7]

In this article, the computational entities are called computers or nodes. A distributed system may have a common goal, such as solving a large computational problem.[8] Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.[9] Other typical properties of distributed systems include the following:

- The system has to tolerate failures in individual computers.[10]
- The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.[11]
- Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.[12]

Figure: (a) and (b) a distributed system; (c) a parallel system.

Parallel and distributed computing


Distributed systems are groups of networked computers, which have the same goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them.[13] The same system may be characterised both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel.[14] Parallel computing may be seen as a particular tightly-coupled form of distributed computing,[15] and distributed computing may be seen as a loosely-coupled form of parallel computing.[5] Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:

- In parallel computing, all processors have access to a shared memory. Shared memory can be used to exchange information between processors.[16]
- In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.[17]

The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; as usual, the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has direct access to a shared memory. The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems; see the section Theoretical foundations below for more detailed discussion. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
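A toy contrast of the two models, using only Python's standard multiprocessing module (the numbers are arbitrary): the "parallel" worker updates a shared-memory cell directly, while the "distributed" worker has only private memory and must exchange values as messages over a pipe.

```python
# Shared memory (parallel model) versus message passing (distributed model), as a sketch.
from multiprocessing import Process, Value, Pipe

def parallel_worker(counter):        # parallel model: direct access to shared memory
    with counter.get_lock():
        counter.value += 1

def distributed_worker(conn):        # distributed model: private memory, messages only
    total = conn.recv()              # receive the current value as a message
    conn.send(total + 1)             # send the updated value back as a message

if __name__ == "__main__":
    shared = Value("i", 0)
    p = Process(target=parallel_worker, args=(shared,))
    p.start(); p.join()
    print("shared-memory result:", shared.value)          # -> 1

    parent_end, child_end = Pipe()
    q = Process(target=distributed_worker, args=(child_end,))
    q.start()
    parent_end.send(0)
    print("message-passing result:", parent_end.recv())   # -> 1
    q.join()
```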

History
The use of concurrent processes that communicate by message-passing has its roots in operating system architectures studied in the 1960s.[18] The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.[19] ARPANET, the predecessor of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET,[20] and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET and its successor, the Internet, other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems. The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, the Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its European counterpart, the International Symposium on Distributed Computing (DISC), was first held in 1985.

Applications
There are two main reasons for using distributed systems and distributed computing. First, the very nature of the application may require the use of a communication network that connects several computers. For example, data is produced in one physical location and it is needed in another location. Second, there are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example, it may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. A distributed system can be more reliable than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.[21] Examples of distributed systems and applications of distributed computing include the following:[22]

- Telecommunication networks:
  - Telephone networks and cellular networks.
  - Computer networks such as the Internet.
  - Wireless sensor networks.
  - Routing algorithms.
- Network applications:
  - World Wide Web and peer-to-peer networks.
  - Massively multiplayer online games and virtual reality communities.
  - Distributed databases and distributed database management systems.
  - Network file systems.
  - Distributed information processing systems such as banking systems and airline reservation systems.
- Real-time process control:
  - Aircraft control systems.
  - Industrial control systems.
- Parallel computation:
  - Scientific computing, including cluster computing and grid computing and various volunteer computing projects; see the list of distributed computing projects.
  - Distributed rendering in computer graphics.

The Evolution of Client/Server Computing


Several years ago, many computing environments consisted of mainframes hooked to dumb terminals, with all processing done on the mainframe. Over the years, personal computers started to replace these dumb terminals, but the processing continued to be done on the mainframe. The improved capacity of personal computers was largely ignored or used only at an individual level. With so much computing power sitting idle, many organizations started thinking about sharing, or splitting, some of the processing demands between the mainframe and the PC. Client/server technology evolved out of this movement for greater computing control and more computing value.

Client/server refers to the way in which software components interact to form a system that can be designed for multiple users. This technology is a computing architecture that forms a composite system allowing distributed computation, analysis, and presentation between PCs and one or more larger computers on a network. Each function of an application resides on the computer most capable of managing that particular function. There is no requirement that the client and server must reside on the same machine. In practice, it is quite common to place a server at one site in a local area network (LAN) and the clients at the other sites. The client, a PC or workstation, is the requesting machine and the server, a LAN file server, mini or mainframe, is the supplying machine. Clients may be running on heterogeneous operating systems and networks when making queries to the server(s).

Networks provide connectivity between client and server, along with the protocols that they use to communicate. The Internet provides connectivity between systems that function as clients, servers, or both. Many services used on the Internet are based on the client/server computing model. File Transfer Protocol (FTP), for example, uses client/server interactions to exchange files between systems. An FTP client requests a file that resides on another system. An FTP server on the system where the file resides handles the client's request. The server gets access to the file and sends the file back to the client's system.

Market researchers have projected enormous growth in the client/server area. This growth seems to have come at the expense of the mainframe market, which has stagnated. While the movement towards migrating from the mainframe to client/server architecture is gaining momentum, there are several distinct drawbacks, since most of the client/server tools and methodologies are not in place and the associated administration support is still undefined. First-generation systems are 2-tiered architectures where a client presents a graphical user interface (GUI) to the user and acts according to the user's actions to perform requests of a database server running on a different machine.

2-Tier Architectures

Client/server applications started with a simple, 2-tiered model consisting of a client and an application server. The most common implementation is a 'fat' client - 'thin' server architecture, placing application logic in the client. (Figure 1) The database simply reports the results of queries implemented via dynamic SQL using a call level interface (CLI) such as Microsoft's Open Database Connectivity (ODBC).

Figure 1. Traditional Fat Client/Server Deployment

An alternate approach is to use a thin client - fat server architecture that invokes procedures stored at the database server. (Figure 2) The term thin client generally refers to user devices whose functionality is minimized, either to reduce the cost of ownership per desktop or to provide more user flexibility and mobility. In either case, presentation is handled exclusively by the client, processing is split between client and server, and data is stored on and accessed through the server. Remote database transport protocols such as SQL-Net are used to carry the transaction. The network 'footprint' is very large per query, so the effective bandwidth of the network, and thus the corresponding number of users who can effectively use the network, is reduced. Furthermore, network transaction size and query transaction speed are slowed by this heavy interaction. These architectures are not intended for mission-critical applications.

Figure 2. Thin Client/Server Deployment

Development tools that generate 2-tiered fat client implementations include PowerBuilder, Delphi, Visual Basic, and Uniface. The fat server approach, using stored procedures, is more effective in gaining performance, because the network footprint, although still heavy, is lighter than that of a fat client (a minimal sketch of a stored-procedure call follows the list below).

Advantages of a 2-Tier System:
- Good application development speed
- Most tools for 2-tier are very robust
- Two-tier architectures work well in relatively homogeneous environments with fairly static business rules
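As promised above, here is a hedged sketch of the fat-server style: the client issues a single stored-procedure call through a call level interface instead of shipping query logic over the wire. It assumes the third-party pyodbc package is installed and that an ODBC data source named "OrdersDSN" and a server-side procedure named "approve_order" exist; both names are invented for illustration.

```python
# Thin-client call into a fat server: one stored-procedure invocation carries the work.
# Assumes pyodbc plus a hypothetical DSN ("OrdersDSN") and procedure ("approve_order").
import pyodbc

def approve_order(order_id: int) -> None:
    conn = pyodbc.connect("DSN=OrdersDSN")   # hypothetical ODBC data source
    try:
        cursor = conn.cursor()
        # ODBC call escape syntax; the business logic runs inside the server-side procedure.
        cursor.execute("{CALL approve_order (?)}", order_id)
        conn.commit()
    finally:
        conn.close()
```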

A new generation of client/server implementations takes this a step further and adds a middle tier to achieve a '3-tier' architecture. Generally, client/server can be implemented in an 'N-tier' architecture in which application logic is partitioned. This leads to faster network communications, greater reliability, and greater overall performance.

3-Tier Architectures

Enhancement of network performance is possible in the alternative 'N-tier' client/server architecture. Inserting a middle tier in between a client and server achieves a 3-tier configuration. The components of a three-tiered architecture are divided into three layers: a presentation layer, a functionality layer, and a data layer, which must be logically separate. (Figure 3) The 3-tier architecture attempts to overcome some of the limitations of 2-tier schemes by separating presentation, processing, and data into separate, distinct entities. The middle-tier servers are typically coded in a highly portable, non-proprietary language such as C. Middle-tier functionality servers may be multithreaded and can be accessed by multiple clients, even those from separate applications.

Figure 3. 3-Tiered Application Architecture

The client interacts with the middle tier via a standard protocol such as a DLL, API, or RPC. The middle tier interacts with the server via standard database protocols. The middle tier contains most of the application logic, translating client calls into database queries and other actions, and translating data from the database into client data in return. If the middle tier is located on the same host as the database, it can be tightly bound to the database via an embedded 3GL interface. This yields a very highly controlled and high-performance interaction, thus avoiding the costly processing and network overhead of SQL-Net, ODBC, or other CLIs. Furthermore, the middle tier can be distributed to a third host to gain processing power capability.

Advantages of 3-Tier Architecture:
- RPC calls provide greater overall system flexibility than the SQL calls of 2-tier architectures
- The 3-tier presentation client is not required to understand SQL. This allows firms to access legacy data and simplifies the introduction of new database technologies
- Provides for more flexible resource allocation
- Modularly designed middle-tier code modules can be reused by several applications
- 3-tier systems such as the Open Software Foundation's Distributed Computing Environment (OSF/DCE) offer additional features to support distributed applications development

As more users access applications remotely for business-critical functions, the ability of servers to scale becomes the key determinant of end-to-end performance. There are several ways to address this ever-increasing load on servers. Three techniques are widely used:
- Upsizing the servers
- Deploying clustered servers
- Partitioning server functions into a "tiered" arrangement

N-Tier Architectures

The 3-tier architecture can be extended to N tiers when the middle tier provides connections to various types of services, integrating and coupling them to the client, and to each other. Partitioning the application logic among various hosts can also create an N-tiered system. Encapsulation of distributed functionality in such a manner provides significant advantages such as reusability, and thus reliability. As applications become Web-oriented, Web server front ends can be used to offload the networking required to service user requests, providing more scalability and introducing points of functional optimization. In this architecture (Figure 4), the client sends HTTP requests for content and presents the responses provided by the application system. On receiving requests, the Web server either returns the content directly or passes it on to a specific application server. The application server might then run CGI scripts for dynamic content, parse database requests, or assemble formatted responses to client queries, accessing data or files as needed from a back-end database server or a file server.
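The sketch below illustrates that dispatch decision in the Web tier, under stated assumptions: the application-server URL, paths, and static content are invented, and urllib stands in for whatever connector the Web server would really use. Static content is answered directly; anything dynamic is passed through to the application tier, so the Web tier itself never touches the database.

```python
# Hypothetical Web-tier dispatch: serve static content directly, delegate the rest.
from urllib.request import urlopen

APP_SERVER = "http://app-server.internal:9000"          # assumed application tier
STATIC_PAGES = {"/": b"<html><body>Welcome</body></html>"}

def handle_request(path: str) -> bytes:
    if path in STATIC_PAGES:
        return STATIC_PAGES[path]                        # the Web server answers directly
    # Dynamic requests are forwarded; the application server builds the response
    # (running scripts, querying the back-end database, and so on).
    with urlopen(APP_SERVER + path, timeout=5) as response:
        return response.read()
```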

Figure 4. Web-Oriented N-Tiered Architecture

By segregating each function, system bottlenecks can be more easily identified and cleared by scaling the particular layer that is causing the bottleneck. For example, if the Web server layer is the bottleneck, multiple Web servers can be deployed, with an appropriate server load-balancing solution to ensure effective load balancing across the servers (Figure 5).
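As a toy illustration of spreading load across a scaled-out Web tier, the snippet below rotates new requests across a pool of invented server addresses round-robin; a production deployment would of course use a dedicated load-balancing solution rather than application code.

```python
# Round-robin selection across a pool of Web servers (addresses are placeholders).
from itertools import cycle

WEB_SERVERS = cycle(["web1.internal", "web2.internal", "web3.internal"])

def pick_server() -> str:
    return next(WEB_SERVERS)      # each new connection goes to the next server in turn

for _ in range(4):
    print(pick_server())          # web1, web2, web3, web1, ...
```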

Figure 5. Four-Tiered Architecture with Server Load Balancing

The N-tiered approach has several benefits:
- Different aspects of the application can be developed and rolled out independently
- Servers can be optimized separately for database and application server functions
- Servers can be sized appropriately for the requirements of each tier of the architecture
- More overall server horsepower can be deployed

Deployment Considerations

The choice of the optimal architecture should be based on the scope and complexity of a project, the time available for completion, and the expected enhancement or obsolescence of the system. With n-tiered architectures, the network manager must do three key things:
- Co-ordinate closely with application developers
- Design the infrastructure supporting the server farm for maximum performance
- Understand application users and their access and performance requirements

Researchers believe that by the year 2040, client/server technology will have evolved from the current rigid definition applied to using specified computers to an intelligent network or to what some call a single system image. This intelligent network will have "smart hubs" or general call points to access multiple requests and assign the processing in the most efficient manner. Whether this becomes a reality or a commercial dream is beyond our vision, but in time, client/server technology will surely continue to unfold and prove its true worth. The major characteristics of client/server architecture include the logical separation of client and server processes, the ability to change the server without affecting the clients, and the capacity to change a client without affecting the server or other clients. Other characteristics of client/server are:

- User-friendly applications
- Gives the user a great deal of control
- Department-level managers are given the ability to be responsive to their local needs
- Network security
- New technical approach to distributed computing

A server is passive. It does not initiate conversations with clients, although it can act as a client of other servers. Characteristically, a server:
- Waits for and accepts client requests
- Presents a defined abstract interface to the client
- Maintains the location independence and transparency of the client interface

The client is the networked information requestor. Typical client functions are to:
- Display the user interface
- Perform basic input editing
- Format queries to be forwarded to the server processor
- Communicate with the server
- Format server responses for presentation

In general, client/server technology centralizes applications and information, making them available to the data owners via network terminals. Consequently, client/server technology offers many benefits over mainframe computing:
- Reduced cost of operation
- Reduced lead time for system enhancements
- Ad-hoc reporting tools that do not require programmer assistance
- Increased end-user control of the system
- Graphical user interfaces (GUIs)

Business Impact

Client/server technology has the following features and benefits:

Features:
- Desktop processing
- Multiple, shared processing
- Functionality where it best fits
- Higher speed
- Software integration

Benefits:
- Ad hoc query capabilities
- Fits with down-sizing/decentralization
- Greater flexibility
- Distributed information
- Custom-tailored user interface
- Enhanced IT functionality

Key Applications

The key applications for client/server technology are:
- Price/performance ratios
- Shared processing
- Application control
- Speed
- Data integrity via centralized data
- Functionality
- GUI, highly interactive end-user interface

Measuring the Technology's Performance

The performance of client/server technology is measured in two ways:
- Cost savings - this is compared to mainframe costs in hardware and development
- Flexibility - a robust development environment with sufficient analysis, design and development tools that can integrate the necessary management tools

Co-ordinate closely with the application developers

Some applications lend themselves to WAN links between the application servers and the data servers, but the majority do not. Understanding the application's network behavior is critical to determining where to deploy the various layers of the architecture as well as the network characteristics (performance, security, and redundancy) required between each layer. Application data flow should be characterized prior to deployment. However, once in service, both the design and the operation of the application may change. Therefore, application behavior must be monitored regularly to identify bottlenecks.

Design the infrastructure supporting the server farm for maximum performance

Technologies such as server load balancing, high-speed LAN switching, Gigabit Ethernet, multipoint link aggregation (MPLA), high-performance server network interface cards (NICs) which offload low-level networking functions from the CPU, and Fibre Channel storage access can be leveraged to ensure a high-performance server farm.

Understand application users and their access and performance requirements

The increased scale of the application architecture means that more users from more locations can use the application. To provide these users the best possible service, the network manager must understand who the users are, where they are, and what information they need to access. Optimizing the network pathways between users and the application tier with which they interact usually implies more attention to network latency, bandwidth, and traffic management, especially in the WAN. If the level of traffic associated with business-critical applications is projected to approach the capacity of the WAN links, more WAN bandwidth is needed. More likely, the total traffic, including both business-critical traffic and non-time-critical applications, approaches the WAN capacity limit at peak periods. In this case, WAN traffic management can ensure that the critical application traffic is given priority during periods of congestion. For example, one could guarantee that access to business-critical enterprise resource planning (ERP) tools is given priority over e-mail and stock quote updates when monthly reports are being generated and analyzed. The network manager's role should be to interpret business objectives and goals as well as to establish traffic management policies that support those goals.
