
PART I: Summary
1. Introduction
2. Objectives and Scope
3. Limitations
4. Theoretical Perspective
5. Methodology and Procedure
6. Analysis of Data
7. Findings, Inferences and Recommendations
8. Conclusions

PART II: Overview of the Organization
1. An Overview of the Organization
2. Waters India Pvt Limited
3. Operational Statistics

PART III: Project Overview
1. Waters India Data Center Design
2. Waters India Data Center Infrastructure Layout
3. Bibliography

Name: Salimmalik Contact Number: 09342246210

1. Project Management Introduction:


In the standardized model of the project process, project management is a broad category of oversight activity that occurs throughout the course of the project to provide communication, planning, coordination, and problem resolution (Figure 1).

Figure 1: Project management in the project process map (phases: Prepare, Design, Acquire, Implement)

As with any business project, data center project management provides dedicated oversight to address project-critical activities such as:
- Scheduling
- Resources
- Scope of responsibilities
- Continuity (handoffs)
- Budget
- System changes
- Process defects
- Status reporting

General techniques, training, and tools for project management are well documented in business and industry literature, and are beyond the scope of this paper. This paper focuses on the particular project management roles needed for data center projects,

and how those management responsibilities can be divided up and accounted for, in order to meet the needs of a specific project. Determining which management roles are needed for the project, and who will perform them, is part of configuring the process for the project at hand. The proper configuration of the process is as important to the success of the project as the configuration of the physical equipment of the system.

1.1 When does project management start?

The configuration and delegation of project management activity is a critical element of process design that must be considered and determined up front, well before the time comes to execute it. Depending upon the size, scope, and clarity of the project initiative at the outset, assigned and dedicated management may not begin until after the initial fact-finding activities of the Prepare phase, which identify and clarify the endeavor as a project (Figure 2). Note that the milestone defining the end of this first phase is Commit to Project, which typically marks the beginning of whatever tracking and database activities will be used to support the project, and in some cases may be the point at which formal project management starts.

Projects of greater scope or with more customized engineering may require that project management activity begin earlier, during the Prepare phase, whereas for smaller data center expansion projects, project management may not need to start until later, after the purchase order is executed at the end of the Design phase. The size, complexity, and criticality of the project will determine when project management needs to become a structured, dedicated role.

Regardless of how and when project management is configured, there will always be some project management activity in the customer organization from the very beginning, if only to make the configuration decisions and possibly negotiate contracts for outsourced management.

1.2 Subordinate management roles

The general process map of Figure 1 shows project management as a single bar across the top, implying that it is one job. It can be one job, and in smaller projects it might be configured that way. More often it is configured as more than one job, or an oversight job with subordinate jobs under it. For example, installation management can be defined as a separate role spanning the Acquire and Implement phases, overseeing on-site activity related to delivery and setup of the physical system (Figure 3). Management roles such as this should be considered modular elements of overall management, remaining subordinate to the overall end-to-end project management role. Theoretically, management responsibility could be subdivided further by assigning separate management to each of the four phases, or even to combinations of steps within a phase (not generally recommended, but it could be appropriate in special circumstances). More typically, management responsibility is subdivided by the organization(s) providing hardware and services.

1.3 Dedicated point-of-contact

Regardless of how project management responsibilities are configured, the objective of each management role is the same: seamless coverage within its scope of responsibility, integration with other management roles, and a dedicated point-of-contact at all times. A dedicated point of contact is especially critical when the ultimate responsibility lies with delegated sub-roles or third-party providers. Such a dedicated point-of-contact, whose job it is to field, direct, and coordinate communication, should be considered an essential role in every project. This management role monitors and facilitates fulfillment of all commitments made to the customer (delivery dates, appointments, and other promises) during the course of the project, with authority to do whatever it takes to clear roadblocks and solve coordination problems.

1.4 Documentation and tracking

Regardless of how project management roles are configured for the project, an essential project management responsibility is documentation and tracking of project activity. Current project information must be easily accessible at all times to authorized project team members and service partners. A common and effective method is an online Web site. This interactive project record should not only provide up-to-date information, but it should also accept feedback, comments, requests, and problem statements, and route the information appropriately. The project database should be able to provide updates and reports, and log ad hoc information such as contractors' vacation schedules, alternate phone numbers, and miscellaneous remarks.

1.5 Coordination of Multiple Suppliers

Most data center projects will have more than one supplier of hardware or services contributing to the work of the project. The customer may engage separate equipment vendors or service providers for power, cooling, racks, security, fire suppression, electrical work, mechanical work, and perhaps a general contractor if building construction is required. Each supplier of hardware or services will have potential interaction or dependencies with the other suppliers to the project. For example, fire suppression installation depends upon piping and wiring that must be installed first, both of which may be handled by a different supplier. While each of these suppliers will have its own project manager to conduct the work it contributes to the project, there is an additional project role that spans all suppliers: coordination. Coordination provides an interface among suppliers with whom there are equipment or time dependencies. It is a role that can be difficult to assign when there are many suppliers to a project. If dependencies among suppliers are not coordinated, delays and expense can result from supplier site visits that are scheduled too soon for the handoff, or from one supplier unnecessarily waiting for something from another. Coordinating the work of all suppliers is a critical part of project management that can be overlooked in planning, but is essential to the efficient and reliable progress of the project. Minimizing the number of suppliers (for example, by bundling some services and equipment under a single vendor) shifts some of the coordination burden to the intermediate vendor and reduces the risk of faulty communication between suppliers (Figure 4). While it may not be possible to have everything handled by a single vendor, reducing the number of vendors can significantly decrease the coordination burden, especially when all possible interdependencies are considered (Figure 5).
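The coordination role described above is essentially a matter of sequencing dependent tasks across suppliers. A minimal sketch of that idea, loosely following the fire-suppression example (the supplier names and dependencies are illustrative assumptions):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Illustrative dependency map: each task lists the tasks that must finish first.
dependencies = {
    "electrical wiring (electrical contractor)": [],
    "piping (mechanical contractor)": [],
    "fire suppression install (fire vendor)": [
        "piping (mechanical contractor)",
        "electrical wiring (electrical contractor)",
    ],
    "rack delivery and setup (rack vendor)": ["electrical wiring (electrical contractor)"],
}

# A valid ordering tells the coordinator which supplier site visits can be scheduled when.
order = TopologicalSorter(dependencies).static_order()
for step, task in enumerate(order, start=1):
    print(step, task)
```

A real project would track far more than this, but even a simple dependency model makes premature or conflicting site visits visible before they cause delays.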

2. Data Center Introduction


The data center is home to the computational power, storage, and applications necessary to support an enterprise business. The data center infrastructure is central to the IT architecture, from which all content is sourced or through which it passes. Proper planning of the data center infrastructure design is critical, and performance, resiliency, and scalability need to be carefully considered.

Another important aspect of the data center design is flexibility in quickly deploying and supporting new services. Designing a flexible architecture that has the ability to support new applications in a short time frame can result in a significant competitive advantage. Such a design requires solid initial planning and thoughtful consideration in the areas of port density, access layer uplink bandwidth, true server capacity, and oversubscription, to name just a few. The data center network design is based on a proven layered approach, which has been tested and improved over the past several years in some of the largest data center implementations in the world. The layered approach is the basic foundation of the data center design and seeks to improve scalability, performance, flexibility, resiliency, and maintenance. A data center architecture includes the layout of the boundaries of the room (or rooms) and the layout of IT equipment within the room. Most users do not understand how critical the floor layout is to the performance of a data center, or they only understand its importance after a poor layout has compromised the deployment. The floor plan either determines or strongly affects the following characteristics of a data center:
- The number of rack locations that are possible in the room
- The achievable power density
- The complexity of the power and cooling distribution systems
- The predictability of temperature distribution in the room
- The electrical power consumption of the data center

There are five core values that are the foundation of a data center design philosophy: simplicity, flexibility, scalability, modularity, and sanity. The last one might give you pause, but if you've had previous experience in designing data centers, it makes perfect sense. Design decisions should always be made with consideration to these values.

2.1 Keep the Design as Simple as Possible


A simple data center design is easier to understand and manage. A basic design makes it simple to do the best work and more difficult to do sloppy work. For example, if you label everything (network ports, power outlets, cables, circuit breakers, and their locations on the floor), there is no guesswork involved. When people set up a machine, they gain the advantage of knowing ahead of time where the machine goes and where everything on that machine should be plugged in. It is also simpler to verify that the work was done correctly. Since the locations of all of the connections to the machine are pre-labeled and documented, it is simple to record the information for later use, should the machine develop a problem.
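As an illustration of the "label everything" practice described above, here is a small sketch of a location-based labeling convention (the convention itself is hypothetical, not prescribed by the text):

```python
# Generate location-based labels for racks and the connections on them.
# Hypothetical convention: <grid column><grid row>-<kind><index>,
# e.g. "AB04-PWR01" for power outlet 1 on the rack at floor-grid location AB04.

def rack_label(grid_col: str, grid_row: int) -> str:
    return f"{grid_col}{grid_row:02d}"

def connection_label(grid_col: str, grid_row: int, kind: str, index: int) -> str:
    # kind might be "NET" (network port), "PWR" (power outlet), "BRK" (breaker), etc.
    return f"{rack_label(grid_col, grid_row)}-{kind}{index:02d}"

print(rack_label("AB", 4))                 # AB04
print(connection_label("AB", 4, "PWR", 1)) # AB04-PWR01
print(connection_label("AB", 4, "NET", 12))# AB04-NET12
```

Because every label encodes the floor location, a technician can verify cabling against the plan without guesswork, which is exactly the benefit the paragraph above argues for.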

2.2 Design for Flexibility

Nobody knows where technology will be in five years, but it is a good guess that there will be some major changes. Making sure that the design is flexible and easily upgradable is critical to a successful long-term design. Part of flexibility is making the design cost-effective. Every design decision has an impact on the budget. Designing a cost-effective data center is greatly dependent on the mission of the center. One company might be planning a data center for mission-critical applications, another for testing large-scale configurations that will go into a mission-critical data center. For the first company, full backup generators to drive the entire electrical load of the data center might be a cost-effective solution. For the second company, a UPS with a 20-minute battery life might be sufficient. Why the difference? If the data center in the first case goes down, it could cost the company two million dollars a minute. Spending five million on full backup generators would be worth the expense to offset the cost of downtime. In the second case, the cost of downtime might be $10,000 an hour. It would take 500 hours of unplanned downtime to recoup the initial cost of five million dollars of backup generators (see the break-even sketch after section 2.5).

2.3 Design for Scalability

The design should work equally well for a 2,000, 20,000, or 2,000,000 square foot data center. Where a variety of equipment is concerned, the use of watts per square foot to design a data center does not scale, because the needs of individual machines are not taken into consideration. This methodology instead uses rack location units (RLUs) to design for equipment needs. This system is scalable and can be reverse engineered.

2.4 Use a Modular Design

Data centers are highly complex things, and complex things can quickly become unmanageable. Modular design allows you to create highly complex systems from smaller, more manageable building blocks. These smaller units are more easily defined and can be more easily replicated. They can also be defined by even smaller units, and you can take this to whatever level of granularity is necessary to manage the design process. The use of this type of hierarchy has been present in design since antiquity.

2.5 Keep Your Sanity

Designing and building a data center can be very stressful. There are many things that can, and will, go wrong. Keep your sense of humor. Find ways to enjoy what you're doing. Using the other four values to evaluate design decisions should make the process easier, as they give form, order, and ways to measure the value and sense of the design decisions you're making. Primarily, they help to eliminate as many unknowns as possible, and eliminating the unknowns will make the process much less stressful.
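The cost-effectiveness argument in section 2.2 reduces to simple break-even arithmetic; a minimal sketch using the figures quoted there:

```python
# Break-even check for backup-generator spend (figures from the example in section 2.2).
generator_cost = 5_000_000                  # $ capital cost of full backup generators

critical_downtime_cost = 2_000_000 * 60     # $ per hour: mission-critical site, $2M/minute
lab_downtime_cost = 10_000                  # $ per hour: test/lab data center

def breakeven_hours(capital_cost: float, downtime_cost_per_hour: float) -> float:
    """Hours of avoided downtime needed to pay back the capital cost."""
    return capital_cost / downtime_cost_per_hour

print(breakeven_hours(generator_cost, critical_downtime_cost))  # ~0.04 h, i.e. minutes of outage
print(breakeven_hours(generator_cost, lab_downtime_cost))       # 500 h, as stated in the text
```

For the mission-critical site the generators pay for themselves within minutes of a single outage, while the lab would need 500 hours of unplanned downtime to justify the same spend, which is why the mission of the center drives the design.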

3. Objectives & Scope


3.1 Objectives

- To ensure that the data center is built to meet the business requirements
- To meet known data center standards and best practices
- To meet the principal goals in data center design, which are flexibility and scalability; these involve site location, building selection, floor layout, electrical system design, mechanical design, and modularity
- To provide all designs as mentioned in the scope of services

3.2

Many factors go into the design of a successful data center. Careful and proper planning during the design phase will ensure a successful implementation, resulting in a reliable and scalable data center which will serve the needs of the business for many years. Remember the times when you had to keep your computer in a cool, dust-free room? Your personal computer may not need such a cozy environment anymore, but your precious servers do. The datacenter is what houses your organization's servers. The benefits of a datacenter are many, but a datacenter may not be necessary for every organization. Two things dictate the requirement for a datacenter: the number of servers and network devices, and your ability to manage the datacenter. If you find monitoring and managing a datacenter overwhelming, you need not forsake the datacenter. Instead, opt for a hosted one, where your datacenter is kept within the premises of a vendor who also takes care of its management and security. Ideally, large businesses and the mid-size ones that are growing rapidly should go for datacenters. For small businesses, single servers or cluster servers can provide all that they need. Now, how does a datacenter help your business?

- It offers high availability. A well-managed datacenter ensures that business never suffers because of one failure somewhere.
- It is highly scalable. The datacenter offers support as the business needs change.
- It offers business continuity. Unexpected problems and server failures don't deter the functioning of your business in any way.

3.3

One of the most important requirements for owning a datacenter is your ability to manage it.

Datacenter management, however, does not necessarily depend on you. You can hire professionals to manage it for you. In fact, you don't even need to keep the datacenter on your premises; it can be kept within the premises of the vendor.

3.4

When designing a large enterprise cluster network, it is critical to consider specific objectives. No two clusters are exactly alike; each has its own specific requirements and must be examined from an application perspective to determine the particular design requirements. Take into account the following technical considerations:

- Latency: In the network transport, latency can adversely affect the overall cluster performance. Using switching platforms that employ a low-latency switching architecture helps to ensure optimal performance. The main source of latency is the protocol stack and NIC hardware implementation used on the server. Driver optimization and CPU offload techniques, such as TCP Offload Engine (TOE) and Remote Direct Memory Access (RDMA), can help decrease latency and reduce processing overhead on the server. Latency might not always be a critical factor in the cluster design. For example, some clusters might require high bandwidth between servers because of a large amount of bulk file transfer, but might not rely heavily on server-to-server Inter-Process Communication (IPC) messaging, which can be impacted by high latency.
- Mesh/partial mesh connectivity: Server cluster designs usually require a mesh or partial mesh fabric to permit communication between all nodes in the cluster. This mesh fabric is used to share state, data, and other information between master-to-compute and compute-to-compute servers in the cluster. Mesh or partial mesh connectivity is also application-dependent.
- High throughput: The ability to send a large file in a specific amount of time can be critical to cluster operation and performance. Server clusters typically require a minimum amount of available non-blocking bandwidth, which translates into a low oversubscription model between the access and core layers.

- Oversubscription ratio: The oversubscription ratio must be examined at multiple aggregation points in the design, including the line card to switch fabric bandwidth and the switch fabric input to uplink bandwidth (a worked example follows this subsection).
- Jumbo frame support: Although jumbo frames might not be used in the initial implementation of a server cluster, this is a very important feature that provides additional flexibility and accommodates possible future requirements. TCP/IP packet construction places additional overhead on the server CPU. The use of jumbo frames can reduce the number of packets, thereby reducing this overhead.
- Port density: Server clusters might need to scale to tens of thousands of ports. As such, they require platforms with a high level of packet switching performance, a large amount of switch fabric bandwidth, and a high level of port density.

High availability: All data center designs are judged by their ability to provide continuous operations for the network services they support. Data center availability is affected by both planned (scheduled maintenance) and unplanned (failure) events. To maximize availability, the impact from each of these must be minimized and/or eliminated. All data centers must be maintained on a regular basis. In most data center designs, scheduled maintenance is a planned event requiring network downtime. For this reason, general maintenance is often forgone, leaving long-term availability to chance. In robust data center designs, concurrently maintainable systems are implemented to avoid interruption to normal data center operations. To mitigate unplanned outages, both redundancy and fault-tolerance must be incorporated into the data center design. High availability is accomplished by providing redundancy for all systems, major and minor, thereby eliminating single points of failure. Additionally, the data center design must offer predictable uptime by incorporating fault-tolerance against hard failures. (A hard failure is a failure in which the component must be replaced to return to an operational steady state.) A data center achieves high availability by implementing a fully redundant, fault-tolerant, and concurrently maintainable IT and support infrastructure architecture in which all possible hard failures are predictable and deterministic.
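The oversubscription ratio mentioned above is simply the ratio of server-facing bandwidth to uplink bandwidth at an aggregation point; a minimal sketch (the port counts and speeds are illustrative assumptions, not values from the text):

```python
# Access-to-core oversubscription: server-facing bandwidth vs. uplink bandwidth.
server_ports = 48          # server-facing ports on an access switch (assumed)
server_port_gbps = 10      # Gbps per server port (assumed)
uplinks = 4                # uplinks toward the core/aggregation layer (assumed)
uplink_gbps = 40           # Gbps per uplink (assumed)

downstream = server_ports * server_port_gbps   # 480 Gbps toward servers
upstream = uplinks * uplink_gbps               # 160 Gbps toward the core

oversubscription = downstream / upstream
print(f"Oversubscription ratio: {oversubscription:.1f}:1")   # 3.0:1
```

A cluster that needs non-blocking behavior would aim for a ratio close to 1:1; the same calculation should be repeated at each aggregation point named in the bullet above.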

3.5 Scope
An important distinction to make at this point is what really constitutes the elements of a data center. When we talk about the data center, we are talking about the site, the Command Center (if one is to be added), the raised floor (if one is to be added), the network infrastructure (switches, routers, terminal servers, and support equipment providing the core logical infrastructure), the environmental controls, and power. Though a data center contains servers and storage system components (usually contained in racks), these devices are contents of the data center, not part of the data center. They are transient contents, just as DVDs might be considered the transient contents of a DVD player. The data center is more of a permanent fixture, while the servers and storage systems are movable, adaptable, interchangeable elements. However, just as the DVD is of no value without the player and the player is of no value without the DVD, a data center without equipment is an expensive empty room, and servers with no connection are just expensive paperweights. The design of the data center must include all of the elements. The essential elements are called the criteria. Most often, it is the project scope that determines the data center design. The scope must be determined based on the company's data center needs (the desired or required capacities of the system and network infrastructure), as well as the amount of money available. The scope of the project could be anything from constructing a separate building in another state with offices and all the necessary utilities, to simply adding a few server and storage devices to an existing data center. In either case, those creating the project specifications should be working closely with those responsible for the budget.

3.6 Limitations
The primary components necessary to build a data center include rack space (real estate), electrical power, and cooling capacity. At any given time, one of these components is likely to be a primary capacity limitation, and over the past few years the most likely suspect is power. The obvious requirement is power for the servers and network equipment, but sometimes less obvious is the power required to run the air handling and cooling systems. Unless you have built your data center right next to a power station, and have a very long contract in place for guaranteed supply of power at a nice low rate per kilowatt-hour, you are likely seeing your data center costs rise dramatically as the cost of electricity increases. Power costs have become the largest item in the OpEx budgets of data center owners and, for the first time, now exceed the IT hardware cost over its average service life of 3-4 years. This has resulted in the pursuit (both real and spun) of higher-efficiency, sustainable, green designs. The majority of the losses in a data center occur in roughly the following proportions:

Component                       Losses %
Chip Set                        34%
SMPS losses                     9%
Server Fans                     6%
Room Fans                       4%
Pumps                           2%
Compressors                     18%
Condenser Fans                  4%
Humidification                  1%
Plantroom Cooling               1%
Ancillary Power                 7%
Security, Controls & Comms      1%
UPS & Distribution Losses       6%
Transmission Losses             7%

Many IT professionals are finding themselves in the awkward position of begging their data center hosting providers for increased electrical capacity, or they are facing the unpalatable option of relocating their data center facilities.
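Assuming the table above expresses each component's share of total facility power, and treating the chip set, SMPS losses, and server fans as the IT load itself, a rough PUE-style efficiency estimate falls out of simple arithmetic. The grouping of rows into "IT" versus "facility" overhead is an assumption for illustration, not stated in the source:

```python
# Shares of total facility power, from the loss table above (they sum to 100%).
shares = {
    "Chip Set": 34, "SMPS losses": 9, "Server Fans": 6, "Room Fans": 4,
    "Pumps": 2, "Compressors": 18, "Condenser Fans": 4, "Humidification": 1,
    "Plantroom Cooling": 1, "Ancillary Power": 7, "Security, Controls & Comms": 1,
    "UPS & Distribution Losses": 6, "Transmission Losses": 7,
}
assert sum(shares.values()) == 100

# Assumption: these rows represent power consumed by the IT equipment itself.
it_rows = ["Chip Set", "SMPS losses", "Server Fans"]
it_share = sum(shares[r] for r in it_rows)        # 49% of total facility power

pue_estimate = 100 / it_share                     # total facility power / IT power
print(f"IT load share: {it_share}%  ->  rough PUE ~ {pue_estimate:.2f}")   # ~2.04
```

Under these assumptions, roughly half of every kilowatt delivered to the facility never reaches the computing load, which is what drives the efficiency pressure described in this section.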

While the industry is very aware of the issue of equipment power consumption, and the major manufacturers are already designing power reduction and conservation features into their latest products, we are far from turning the corner on the demand for more data center electrical power. The demand for computing capacity continues to rise, which implies a demand for more, better, and faster systems. This results directly in higher demands for rack space, power, and cooling in your data center, and related increased costs. The increasing costs are receiving a lot of executive management attention, especially given current economic conditions. Server rationalization is the buzzword of the day. Do we need to buy a new server, or can we re-use one we already have? Are we using the servers we have to their full capability? A typical data center has very high storage utilization (as no one voluntarily throws old data away), while server utilization is low (as different functions within a company don't want to share). Server virtualization has become one of the hottest topics in IT, as it has become a means of ensuring higher utilization of servers, lowering data center costs, and providing flexibility for shuffling system workloads. But there are limits to what can be virtualized, and of course the overall utilization of a server pool has an upper bound as well. But there are other components needed to build a data center, including: network equipment and communication circuits, power distribution equipment (e.g., distribution panels, cable, and outlets), power backup equipment (including generators and uninterruptible power supplies (UPS)), cable trays (for both network and power cables), and fire suppression systems. And, yes, believe it or not, sometimes these other components can become the constraining factor in data center capacity. "Cable trays can be a limiting factor?" you ask. Yes, just two years ago we ran into a situation where an older data center couldn't add capacity to a specific cage because the weight of the cable already in the tray was at design load limits. We couldn't risk the tray ripping out of the ceiling by laying in more cable, and we couldn't shut the network and power down long enough to pull out all of the old cables and put new ones back in without excessive downtime to the business. A very expensive migration to a new cage in the data center became the only feasible option. Though it may seem hard to believe, there are many IT professionals who have never seen the inside of a data center.

Over the years, having a glass-walled computer room as a showcase in your corporate headquarters became problematic for a number of reasons, including security, high-priced real estate for servers that could be better accommodated elsewhere, and loss of the glass wall space for other purposes (e.g., communications punch-down blocks, electrical distribution panels). Besides, the number of blinking lights in the data center has steadily decreased over the years, and you don't see white lab-coated workers pushing buttons and handling tapes. So, what's there to see? (Not much, especially in a lights-out facility.) But the interesting part is this: when corporate executives and managers do occasionally visit a data center facility, they still expect to see nice clean rows of equipment, full racks of blinking lights, and servers happily computing away. Instead, we now see large amounts of unused floor space and partially filled racks. As servers have become more powerful per cubic inch of space occupied, power and cooling capacity have become increasingly scarce, not the rack space for the equipment. The decreasing server footprint relative to the higher energy requirement per cubic inch is often referred to as a power-density problem. You should ask your data center manager for a tour. Standing behind a full rack of 40+ servers consuming 200 watts (or more) of power each is an amazing experience, likened to having someone turn six 1200-watt hair dryers directly toward you, running full blast. While the heat load behind a server rack has its shock effect, the more interesting cognitive dissonance for many executives is seeing the empty racks and floor space. When called upon to explain, my simple example has been this: if the power capacity (power cap) in your cage (or computer room) is limited to 100 kW, it doesn't matter whether you have 10 full racks that consume 10 kW each, or 20 partially filled racks that only consume 5 kW each. If you have only one supercomputer that consumes all 100 kW sitting in the middle of the room, taking up only 20 square feet with 500 square feet of unused space all around it, it may look very odd, but you're still out of power.
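The power-cap example above is easy to reproduce with the numbers quoted in the text; a minimal sketch:

```python
# Cage power-cap example: the cap, not floor space, is the binding constraint.
power_cap_kw = 100

layouts = {
    "10 full racks @ 10 kW each": 10 * 10,
    "20 partially filled racks @ 5 kW each": 20 * 5,
    "1 supercomputer @ 100 kW": 100,
}

for name, load_kw in layouts.items():
    status = "at the cap" if load_kw >= power_cap_kw else "headroom left"
    print(f"{name}: {load_kw} kW -> {status}")

# The rack-heat comparison from the text: a full rack vs. six hair dryers.
rack_heat_w = 40 * 200            # 8,000 W from 40 servers at 200 W each
hair_dryers_w = 6 * 1200          # 7,200 W
print(rack_heat_w, hair_dryers_w) # roughly comparable heat loads
```

All three layouts exhaust the same 100 kW cap, regardless of how much floor space or rack space remains empty, which is the point of the example.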

3.7 Perspectives
IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.

Information stewardship is one key perspective. Information stewardship calls for holistic data management in the enterprise: defining and enforcing policy to guide the acquisition, management, and storage lifecycle of data, and the protection of data from theft, leak, or disaster. Our research shows that enterprises that manage these intertwined issues as a set are more successful in dealing with them than those that treat them as disjoint.

The IT executives we are speaking with in our current research on security and information protection frequently cite the rising importance of a second key perspective: risk management. In the past, we mainly heard about risk management in two specific contexts: disaster planning and security. In disaster planning, risk assessment (where risk equals the cost to the business of major IT outages times the likelihood of the natural or man-made disasters that would lead to those outages) dictates how much the enterprise should spend on back-up IT infrastructure and services. In security, IT would often focus on specific threats and specific defensive technologies, and use risk assessment mainly to help decide where to spend money on security tools, or how to dedicate IT security staff time. Now many of the people we speak with tell us they use risk as a lens through which they view all their systems, processes, and staffing. Risk is not subordinate, as a component in the calculations of security and business continuity planners; instead, security and business continuance have become facets of risk management.

4. Methodology and Procedure of Work


4.1 Sizing the Data Center
Nothing has a greater influence on a Data Center's cost, lifespan, and flexibility than its size; size even determines the Data Center's capability to impress clients. Determining the size of your particular Data Center is a challenging and essential task that must be done correctly if the room is to be productive and cost-effective for your business. Determining size is challenging because several variables contribute to how large or small your server environment must be, including:

- How many people the Data Center supports
- The number and types of servers and other equipment the Data Center hosts
- The size that non-server areas should be, depending upon how the room's infrastructure is deployed

Determining Data Center size is essential because a Data Center that is too small won't adequately meet your company's server needs, consequently inhibiting productivity and requiring more to be spent on upgrading or expansion, thereby putting the space and services within it at risk. A room that is too big wastes money, both on initial construction and ongoing operational expenses.

Many users do not appreciate these effects during data center planning, and do not establish the floor layout early enough. As a result, many data centers unnecessarily provide suboptimal performance.

The sections below explain how floor plans affect these characteristics and prescribe an effective method for developing a floor layout specification.

4.2 Role of the Floor Plan in the System Planning Sequence


Floor plans must be considered and developed at the appropriate point in the data center design process. Considering floor plans during the detailed design phase is typical, but simply too late in the process. Floor plans should instead be considered part of the preliminary specification and determined BEFORE detailed design begins. It is not necessary for a floor layout to comprehend the exact location of specific IT devices. Effective floor plans only need to consider the location of equipment racks or other cabinets, and the target power densities. These preliminary floor layouts do not require knowledge of specific IT equipment. For most users it is futile to attempt to specify particular IT equipment locations in advance; in fact, racks may ultimately house equipment that is not even available on the market at the time the data center is designed.

Figure: The floor plan is a key input in the system planning sequence

The reasons that floor plans must be considered early, as part of the preliminary specification, and not left until the later detailed design, include:

- Density is best specified at the row level, so rows must be identified before a density specification can be created.
- Phasing plans are best specified using rows or groups of rows, so rows must be identified before an effective phasing plan can be created.
- The floor grid for a raised floor and the ceiling grid for a suspended ceiling should be aligned to the rack enclosures, so rows must be identified before those grids can be located.
- Criticality or availability can (optionally) be specified differently for different zones of the data center, so rows must be identified before a multi-tier criticality plan can be created.

Density and phasing plans are a key part of any data center project specification, and both require a row layout. Detailed design can only commence after density, phasing, and criticality have been specified. Therefore, a floor plan must be established early in the specification phase of a project, after SYSTEM CONCEPT but well before DETAILED DESIGN (see Figure).
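Because density, phasing, and criticality are all specified at the row level, a preliminary specification can be captured as a simple row-by-row record; a minimal sketch (the row names, densities, phases, and criticality values are hypothetical):

```python
# Hypothetical preliminary specification: density, phase, and criticality per row.
# No specific IT devices are identified, consistent with the guidance above.
row_spec = [
    {"row": "A", "racks": 10, "target_density_kw_per_rack": 6,  "phase": 1, "criticality": "high"},
    {"row": "B", "racks": 10, "target_density_kw_per_rack": 6,  "phase": 1, "criticality": "high"},
    {"row": "C", "racks": 10, "target_density_kw_per_rack": 12, "phase": 2, "criticality": "medium"},
]

# Aggregate the phase 1 design load directly from the row-level specification.
total_phase1_kw = sum(r["racks"] * r["target_density_kw_per_rack"]
                      for r in row_spec if r["phase"] == 1)
print(f"Phase 1 design load: {total_phase1_kw} kW")   # 120 kW
```

A record of this kind is enough for detailed design to begin, which is the reason the row layout must exist before the detailed design phase.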

4.3 Floor Planning Concepts


A data center floor plan has two components: the structural layout of the empty room and the equipment layout of what will go in the room. Note that for many projects the room is pre-existing and the only option is to lay out the equipment within the room. A key rule of data center design is that there is a potentially huge advantage in efficiency and density capacity if planners can lay out the room boundaries at the outset. Wherever possible, an attempt should be made to influence the structural room layout using the principles established here.

4.3.1 Structural layout

The room layout includes the location of walls, doors, support columns, windows, viewing windows, and key utility connections. If the room has a raised floor, the height of the raised floor and the location of access ramps or lifts are also part of the structural layout. If the room has a raised floor or a suspended ceiling, the index points for the floor or ceiling grid are critical design variables and must also be included in the structural layout. Room measurements will be described in units of tiles, where a tile width is equal to 2 feet (600 mm), or one standard rack enclosure width.

4.3.2 Equipment layout

The equipment layout shows the footprint of IT equipment and the footprint of power and cooling equipment. IT equipment can usually be defined as rack locations without regard for the specific devices in the cabinets, but other equipment such as tape libraries or large enterprise servers may have form factors that are different from typical racks and must be called out explicitly. In addition, IT equipment in a layout must be characterized by its airflow path. In the case of typical IT racks, the airflow is front-to-back, but some devices have other airflow patterns such as front-to-top. Power and cooling equipment must also be accounted for in equipment layouts, but many new power and cooling devices are either rack mountable or designed to integrate into rows of racks, which simplifies the layout.

4.4 The Effects of Floor Plans on Data Center Performance


Several important data center characteristics are affected by floor plans. To understand effective floor layout methods, it is important to understand the consequences.

4.4.1 Number of rack locations

The floor layout can have a dramatic effect on the number of rack locations that are possible in the room. Although, on average, the number of IT rack locations possible can be estimated by dividing the room area by 28 sq ft per rack (2.6 sq m per rack), the actual number of racks for a particular data center can vary greatly from this typical value.
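As a quick illustration of the rule of thumb above (the example room area is an assumed value):

```python
# Rule of thumb from the text: roughly 28 sq ft (2.6 sq m) of room area per rack location.
SQFT_PER_RACK = 28

def estimated_racks(room_area_sqft: float) -> int:
    """First-cut estimate of rack locations, before layout-specific gains or losses."""
    return int(room_area_sqft // SQFT_PER_RACK)

# Hypothetical 2,000 sq ft room:
print(estimated_racks(2000))   # ~71 rack locations
```

The actual count for a given room can fall well short of, or exceed, this estimate depending on the floor plan, which is why the layout work described next matters.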

The basic principle of floor planning is to maximize the number of rack locations possible. Small variations in the location of walls, existing IT devices, air conditioners, and power distribution units can have a surprisingly large impact on the number of possible rack locations. This effect is magnified when high power densities are required. For this reason, a careful and systematic approach to floor planning is essential.

4.4.2 Achievable power density

The floor plan can have a major impact on the achievable power density. With certain cooling architectures, a poor layout can decrease the permissible power for a given rack by over 50%. This is a huge performance compromise in a modern data center, where new technologies have power densities that are already stressing the capabilities of data center design. In many data centers, users may want to establish zones of different power density. These density zones will be defined by the equipment layout. The floor plan is therefore a critical tool to describe and specify density for data centers.

4.4.3 Complexity of distribution systems

The floor plan can have a dramatic effect on the complexity of the power and cooling distribution systems. In general, longer rows, and rows arranged in regular patterns, simplify power and cooling distribution problems, reduce their costs, and increase their reliability.

4.4.4 Cooling performance

In addition to impacting the density capability of a data center, the floor plan can also significantly affect the ability to predict density capability. It is a best practice to know in advance what density capability is available at a given rack location, and not to simply deploy equipment and hope for the best, as is a common current practice. An effective floor plan in combination with row-oriented cooling technologies allows simple and reliable prediction of cooling capacity. Design tools such as APC InfraStruXure Designer can automate the process during the design cycle, and when layouts follow standard methods, off-the-shelf operating software such as APC InfraStruXure Manager can allow users to monitor power and cooling capacities in real time.

4.4.5 Electrical efficiency

Most users are surprised to learn that the electrical power consumption of a data center is greatly affected by the equipment layout. This is because the layout has a large impact on the effectiveness of the cooling distribution system. This is especially true for traditional perimeter cooling techniques. For a given IT load, the equipment layout can significantly reduce the electrical power consumption of the data center by affecting the efficiency of the air conditioning system:

- The layout affects the return temperature to the CRAC units, with a poor layout yielding a lower return air temperature. A lower return temperature reduces the efficiency of the CRAC units.
- The layout affects the required air delivery temperature of the CRAC units, with a poor layout requiring a colder supply for the same IT load. A lower CRAC supply temperature reduces the efficiency of the CRAC units and causes them to dehumidify the air, which in turn increases the need for energy-consuming humidification.
- The layout affects the amount of CRAC airflow that must be used in mixing the data center air to equalize the temperature throughout the room. A poor layout requires additional mixing fan power, which decreases efficiency and may require additional CRAC units, which draw even more electrical power.

A conservative estimate is that billions of kilowatt-hours of electricity have been wasted due to poor floor plans in data centers. This loss is almost completely avoidable.

4.5 Basic Principles of Equipment Layout


The existence of the rack as the primary building block for equipment layouts permits a standardized floor planning approach. The basic principles are summarized as follows:

- Control the airflow using a hot-aisle/cold-aisle rack layout.
- Provide access ways that are safe and convenient.
- Align the floor or ceiling tile systems with the equipment.
- Minimize isolated IT devices and maximize row lengths.
- Plan the complete equipment layout in advance, even if future plans are not defined.

Once these principles are understood, an effective floor planning method becomes clear.

4.5.1 Control of airflow using hot-aisle/cold-aisle rack layout

The use of the hot-aisle/cold-aisle rack layout method is well known, and the principles are described in other documents, such as ASHRAE TC9.9 Mission Critical Facilities, Thermal Guidelines for Data Processing Environments (2004), and a white paper from the Uptime Institute titled "Alternating Cold and Hot Aisles Provides More Reliable Cooling for Server Farms". The basic principle is to maximize the separation between IT equipment exhaust air and intake air by establishing cold aisles, where only equipment intakes are present, and hot aisles, where only equipment hot exhaust air is present. The goal is to reduce the amount of hot exhaust air that is drawn into the equipment air intakes. The basic hot-aisle/cold-aisle concept is shown in the figure below.

Figure: Basic hot-aisle/cold-aisle data center equipment layout plan

In the above figure, the rows represent the IT equipment enclosures (racks). The racks are arranged such that the adjacent rows face back to back, forming the hot aisles.

The benefits of the hot-aisle/cold-aisle arrangement become dramatic as the power density increases. When compared to random arrangements, or arrangements where racks are all lined up in the same direction, the hot-aisle/cold-aisle approach allows for a power density increase of up to 100% or more, without hot spots, if the appropriate arrangement of CRAC units is used. Because all cooling architectures (except for fully enclosed rack-based cooling) benefit dramatically from a hot-aisle/cold-aisle layout, this method is a principal design strategy for any floor layout.

4.5.2 Align the floor and/or ceiling tiles with the equipment

In many data centers the floor and ceiling tile systems are used as part of the air distribution system. In a raised floor data center, it is essential that the floor grid align with the racks. If the racks and the floor grid do not align, airflow can be significantly compromised. It is also beneficial to align any ceiling tile grid with the floor grid. This means the floor grid should not be designed or installed until after the equipment layout is established, and the grid should be aligned or indexed to the equipment layout according to the row layout options. Unfortunately, specifiers and designers often miss this simple and no-cost optimization opportunity. The result is that either (1) the grid is misaligned with the racks, with a corresponding reduction in efficiency and density capability, or (2) the racks are aligned to the grid but a suboptimal layout results, limiting the number of racks that can be accommodated.

4.5.3 Pitch: the measurement of row spacing

The row length in a hot-aisle/cold-aisle layout is adjustable in increments of rack width, which provides significant flexibility. However, the spacing between aisles has much less flexibility and is a controlling constraint in the equipment layout. The measurement of row-to-row spacing is called pitch, the same term that is used to describe the repeating center-to-center spacing of such things as screw threads, sound waves, or studs in a wall. The pitch of a data center row layout is the distance from the middle of one cold aisle to the middle of the next cold aisle (see figure).

Figure: Pitch of a row layout
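Following the definition above, the pitch can be derived from the aisle widths and the rack depth; a minimal sketch in tile units (the particular aisle widths and rack depth below are assumptions, not values from the text):

```python
# Pitch = distance from the middle of one cold aisle to the middle of the next.
# Working in tile units (1 tile = 2 ft / 600 mm, per the structural layout section).

cold_aisle_tiles = 2.0     # assumed cold-aisle width
hot_aisle_tiles = 1.5      # assumed hot-aisle width
rack_depth_tiles = 1.75    # assumed rack depth

# mid cold aisle -> rack row -> hot aisle -> rack row -> mid of next cold aisle
pitch_tiles = cold_aisle_tiles + 2 * rack_depth_tiles + hot_aisle_tiles
print(f"Row pitch: {pitch_tiles} tiles ({pitch_tiles * 2} ft)")   # 7.0 tiles, 14.0 ft
```

Because the pitch repeats across the room, it becomes the spacing template used in the room-dimension and partitioning discussions that follow.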

4.5.4 Minimize isolated IT devices and maximize row lengths

The control of airflow by separating hot and cold air, as described above, is compromised at the end of a row, where hot air can go around the side of the end rack and return to IT equipment air intakes at the back. Therefore, the theoretical ideal design of a data center is to have no row ends, i.e., rows of infinite length. Conversely, the worst-case situation would be rows of one-rack length, i.e., isolated single racks. In addition, the ability to effectively implement redundancy is improved with longer rows. The goal of row layout is to maximize row length consistent with the goals of providing safe and convenient access ways. In general, a layout that provides longer row lengths is preferred, and a row layout that generates short rows of 1-3 racks should be avoided.

4.5.5 Special considerations for wide racks

Standard-width racks (2 ft or 600 mm) conveniently align with the width of raised-floor tiles. When under-floor cables must be distributed to such a rack, a hole is typically created in the tile directly below the rack to run the cables; if that particular rack is then relocated or removed, the tile is simply replaced with a new one. Wide racks that do not align with the standard raised-floor tile width create a new challenge, because a rack may occupy two or even three tiles. If such a rack is removed, the tile can no longer simply be replaced with a new one, since the tile is partially underneath the neighboring rack as well. These issues can be avoided altogether by overhead power and data cable distribution.

4.5.6 Plan the complete floor layout in advance

The first phase of equipment deployment often constrains later deployments. For this reason it is essential to plan the complete floor layout in advance.

4.5.7 Minimize isolated IT devices and maximize row lengths

When row lengths are three racks or less, the effectiveness of the cooling distribution is impacted. Short rows of racks mean more opportunity for mixing of hot and cold air streams. For this reason, when a room has one dimension that is less than 15-20 feet, it will be more effective in terms of cooling to have one long row rather than several very short rows.

4.5.8 Standardized room dimensions

There are preferred room dimensions for data centers, based on the pitch chosen. Given an area or room that is rectangular in shape, free of the constraints imposed by support columns (described earlier), the preferred length and width are established as follows:

- One dimension of the room should be a multiple of the hot-aisle/cold-aisle pitch, plus a peripheral access-way spacing of approximately 2-4 tiles.
- The other dimension of the room is flexible and will impact the length of the rows of racks.

When one of the dimensions of the room is not optimal, the performance of the room can be dramatically reduced, particularly if the room is smaller. The most obvious problem is that the number of equipment racks may be lower than expected because some space cannot be used. The second, and less obvious, problem is that when the ideal layout cannot be achieved, the power density and electrical efficiency of the system are reduced. To understand the effect of room dimension on the number of racks, consider a room with a fixed length of 28 feet and a variable width. In such a room, the length of a row would be 10 racks, allowing for 2 tiles (4 feet) at each row-end for access clearance. The number of racks that could fit in this room will vary as a function of the width of the room, as shown in the figure below.

The figure shows that the number of installable racks jumps at certain dimensions as new rows fit into the room. Furthermore, the chart shows that certain numbers of racks are preferred, because an even row number permits a complete additional hot-aisle/cold-aisle pair to be installed. The preferred width dimensions are indicated by the arrows, for the pitch defined (the most compact pitch, A, in this case) and perimeter clearances of 2 tiles.

Figure: Impact of room dimension on number of rows
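The step behavior shown in the figure can be reproduced with simple arithmetic; a rough sketch using the example's 10-rack rows and 2-tile clearances, with the pitch value assumed, and counting only complete hot-aisle/cold-aisle row pairs:

```python
# How many racks fit as the room width grows, for fixed 10-rack rows?
TILE_FT = 2
racks_per_row = 10       # from the 28 ft room example above
pitch_tiles = 7          # assumed hot-aisle/cold-aisle row-pair pitch
perimeter_tiles = 2      # assumed clearance along each side wall

def racks_for_width(width_ft: float) -> int:
    usable_tiles = width_ft / TILE_FT - 2 * perimeter_tiles
    if usable_tiles <= 0:
        return 0
    # Count only complete row pairs (two rows of racks per pitch).
    row_pairs = int(usable_tiles // pitch_tiles)
    return row_pairs * 2 * racks_per_row

for width in (20, 24, 28, 32, 36, 40):
    print(f"{width} ft wide -> {racks_for_width(width)} racks")
```

The rack count stays flat over several widths and then jumps when another complete row pair fits, which is why certain room dimensions are strongly preferred over others only slightly smaller.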

4.5.9 Location of support columns in room boundary layout

The location of support columns in the room can dramatically affect the equipment layout, as previously illustrated. Therefore, when an option exists to locate room boundaries, the following guidelines apply:

- For smaller rooms, arrange the room boundaries, if possible, so that no support columns are in the equipment area.
- Rooms should be rectangular where possible. Unusual shapes, niches, and angles often cannot be effectively utilized and/or create a reduction in power density or electrical efficiency.
- For situations where columns are unavoidable but boundaries are flexible, the floor plan should be laid out as if no columns existed, based on the standardized dimensions of the room and the pitch(es) required. Columns should then be located directly over one particular rack location, preferably at a row end.
- For very large rooms, the location of the walls in relation to the columns is typically inflexible.

When a column is located directly over a particular rack location, as the third bullet above suggests, it is important to block off any openings between the column(s) and the neighboring racks. If these gaps are not blocked with a filler panel, mixing of hot and cold air streams can occur and cooling performance can be compromised.

4.5.10 Phased deployments

When a phased deployment is planned, there are two strategies that can be beneficial:

- Creating area partitions
- Advance layout of future rows

When a future phase has a very large uncertainty, area partitions or walls that subdivide the data center into two or more rooms can be used. The benefits are:

- The ability to re-purpose areas in the future
- The ability to perform radical infrastructure modifications in one area without interfering with the operation of another area
- The ability to defer the installation of basic infrastructure (such as piping or wiring) to a future date

The advent of modular row-oriented power and cooling architectures has reduced the need to provide radical infrastructure modifications during new deployments, and has greatly reduced the cost and uncertainty associated with installing base wiring and plumbing infrastructure. Therefore, the compelling need to partition data centers has been dramatically reduced. Nevertheless, retaining options such as future re-purposing of areas is valuable for some users. The key to successful partitioning is to understand that partitions should NEVER be placed arbitrarily without first performing an equipment layout scenario analysis. This is because the floor layout can be seriously compromised by a poor choice of partition position. When setting partitions or walls within a data center room, the same principles should be applied as those used when establishing the overall perimeter room boundaries. The standard spacing of rows must be considered. Failure to do this can result in problems (see figure). Note that the location of the wall in the bottom scenario has caused row 5 of the equipment layout to be lost, representing 10 racks out of the 80-rack layout, or 12% of the total, a significant loss of rack footprint space. Although the wall was only offset by a small amount, this loss occurs because the wall-to-wall spacing does not permit appropriate access ways if row 5 is included. Furthermore, the access way between row 6 and the wall has become a hot aisle. This reduces the confining effect of the hot-aisle/cold-aisle design and will result in a reduced power capacity for row 6. Furthermore, because the primary access path between row 6 and the wall is now a hot aisle, this creates an uncomfortable zone for personnel. These factors taken together demonstrate how serious a small change in a wall location can be when partitioning a data center.

Figure: Impact of partition placement on number of rack locations

4.6 Floor Planning Sequence


Using the rack as the basic building block for floor layout, and the row-pair pitch as the spacing template, a standardized floor layout approach is achievable. Starting with a floor plan diagram for the room, the basic principles are summarized as follows:

4.6.1 Identify and locate the room constraints

First, identify and locate all physical room constraints:

- Columns (verify the exact as-built dimensions)
- Doorways
- Existing fixed equipment: breaker panels, pipe connections, fire suppression equipment, cooling equipment

4.6.2 Establish key room-level options

Next, identify what additional equipment will be placed in the room, and the options available for delivering and installing that equipment within the existing room constraints:

- Identify additional equipment (besides the IT equipment or in-row power and cooling equipment) that will be placed in the room, including any additional cooling equipment, fire suppression equipment, power equipment, or user workstations.
- If the room uses a raised floor, determine the length(s) of the access ramp(s) and identify all possible options for locating the ramps.

It is critical at this stage to know whether the facility will have a raised floor. Many new high-density data centers do not use a raised floor, so a raised floor should not be automatically assumed. Sometimes it is even appropriate to remove a raised floor from an existing site for new deployments.

4.6.3 Establish the primary IT equipment layout axis

Every room has two primary layout axes, or directions in which the rows can be oriented. The axis selection is one of the most critical decisions in a data center plan and has a large impact on performance and economy. When using a hot-aisle/cold-aisle row pair arrangement in the pitch determined necessary or preferred, test the two primary axis orientation layouts to establish whether either has an obvious advantage. When performing the test layouts, ensure that:

- Columns are not located in main access ways
- Rows are aligned to the ceiling grid (if there is no raised floor) so that the cold aisles contain complete tiles
- There is sufficient clearance at row-ends and between rows and walls
- There is sufficient clearance/access around any fixed equipment in the room
- Access ramps, if required, are present and have been optimally located
- Any open areas or areas for another purpose face a cold aisle, not a hot aisle
- Locations have been found for any additional equipment identified in the room-level options above
- Rows that are separated by an access way do not reverse the direction they face
- All rows align with the same axis (i.e., all rows are parallel, with no perpendicular rows)
- The entire room is laid out in the floor plan, even if there are no immediate plans to deploy some sections of the room

To determine the preferred layout, consider the following factors:
- Which axis most effectively keeps support columns out of the main access ways?
- Which axis allows for the most racks?
- Which axis works best with the preferred hot-aisle/cold-aisle pitch?
- Which axis yields complete hot-aisle/cold-aisle row pairs rather than an odd number of rows?
- Which axis has the fewest short rows or isolated racks?
- Which layout provides the desired aesthetics for viewing or tours, if that is a consideration?
Different users may weigh the above criteria differently. It is common for users to choose a layout axis that meets aesthetic considerations without regard for data center performance, and to later regret the choice. The preferred method is to test both axes during planning and to decide the axis selection carefully, based on an understanding of the consequences.

4.6.4 Lock the row boundaries
The process of selecting the primary layout axis typically establishes the row locations accurately. With the row locations established, it is critical to establish and validate the row boundaries. This includes setting the row-end boundaries and verifying the boundaries between the fronts and/or backs of rows with respect to other equipment, columns, or walls. Access must be provided between row-ends and other obstructions using the following guidelines:
- For plain walls, a minimum of 2 tiles is an acceptable row-end spacing; larger data centers often prefer 3 tiles to provide better accessibility.
- For some layouts, it may be desirable to end a row at a wall. However, this creates a dead-end alleyway, which may limit the length of the row based on code requirements.
- For long rows of more than 10 racks, local regulations may require that breaks be placed in rows to allow personnel to pass through. This may also be a practical concern for technicians who need access to both sides of a rack without having to walk a long distance.

The spacing between the row front (cold aisle) or the row back (hot aisle) and other equipment must be carefully checked to ensure that access ways are sufficient, and that any access required to those other devices for service or by regulation is sufficient and meets code. It must also be verified that any other equipment located as part of the floor plan is not constrained by piping, conduits, or access restrictions. These restrictions and boundaries must be marked on the room layout before the axis selection and row layout are confirmed. For small data centers (up to about 2 rows of racks), this floor planning process can be done as a paper study. As the room size grows, computer-aided tools that maintain a consistent scale become necessary in order to plan the floor layout accurately. Ideally, the row layout and boundary areas should also be marked out with colored masking tape in the actual facility. This step is quite feasible for many smaller fit-out designs and for retrofits, and it often reveals surprise constraints that were not recognized during conceptual planning.

4.6.5 Specify row/cabinet density
Once the row boundaries and the orientation of the row axis have been established, the enclosure/cabinet layout can be performed. This begins with the partitioning of rows by build-out phase. For each phase, multiple zones or areas may exist, each with a unique density requirement.

4.6.6 Identify index points (for a new room)
If the data center has a pre-existing raised floor, the actual location of the floor grid relative to the wall is pre-established and will have been accounted for in an earlier process step. For new rooms, however, the raised floor grid location is controlled by the floor layout. An index point for the raised floor grid should be established in the plan, and clearly and permanently marked in the room. It is essential that the contractor installing the raised floor align the grid to this index point during installation. If this is not done, it may not be possible to shift the layout later to align with the grid because of the boundary constraints. In a raised floor design, this can result in a massive loss of power density capability and a dramatic reduction in energy efficiency. It is a completely avoidable yet common error in data center installations. Data centers that use a hard floor rather than a raised floor do not have this concern. If the data center uses a suspended ceiling for lighting and/or air return, aligning the index point to the ceiling grid is also highly recommended, although this is less critical than aligning with the floor grid.

4.6.7 Specify the floor layout
The final step in the floor planning process is to specify the floor layout for the subsequent design and installation phases of the data center project. The specification is documented as a detailed floor layout diagram, which includes all necessary room

and obstruction measurements, identifies all rack locations, marks all unusable areas, and notes any non-rack-based IT equipment that requires power and cooling. Ideally, this specification diagram is created in a computer-aided tool such as APC's InfraStruXure Designer, which subsequently allows the complete design of the data center's physical infrastructure, detailed to the rack level.

4.7 Common Errors in Equipment Layout


Many users attempt rudimentary floor layout planning, yet still run into downstream problems. Some of the most common problems observed in the industry are described below:

4.7.1 Failure to plan the entire layout in advance
Most data centers begin deploying equipment without a complete equipment deployment plan. As the deployment expands, severe constraints on the layout may emerge, including:
- Equipment groups grow toward each other and end up facing hot-to-cold instead of hot-to-hot, with resulting hot spots and loss of power density capability.
- Equipment deployments grow toward a wall, and it is subsequently determined that the last row will not fit, although it would have fit if the layout had been planned appropriately.
- The rows have a certain axis orientation, and it is later determined that much more equipment could have been deployed if the rows had been oriented 90 degrees the other way, but it is too late to change.
- Equipment deployments grow toward a support column, and it is subsequently determined that the column lands in an access way, limiting equipment deployment; much more equipment could have been placed if the layout had been planned in advance.
- Equipment deployments drift off the standard floor tile spacing, and later high-density deployments are trapped without full tiles in the cold aisles, with a resulting loss of power density capability.
Most existing data centers have one or more of the problems listed above, with the attendant losses of performance. In typical data centers routinely observed, the loss of available rack locations due to these problems is on the order of 10-20% of total rack locations, and the loss of power density capability is commonly 20% or

more. These unnecessary losses in performance represent substantial financial losses to data center operators, but can be avoided by simple planning.
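The scale of those losses can be estimated with a back-of-the-envelope calculation. The figures below (planned rack count, fraction lost, power per rack, and value per kW) are assumptions chosen only to illustrate the arithmetic, not measurements from any specific facility.

```python
# Illustrative estimate of the cost of rack locations lost to poor layout.
planned_racks = 200
fraction_lost = 0.15          # 10-20% is typical per the discussion above
kw_per_rack = 5.0             # assumed average IT load per rack
value_per_kw_month = 150.0    # assumed value of hosted capacity (USD)

lost_racks = planned_racks * fraction_lost
lost_kw = lost_racks * kw_per_rack
print(f"Lost rack locations: {lost_racks:.0f}")
print(f"Stranded IT capacity: {lost_kw:.0f} kW")
print(f"Illustrative annual value: ${lost_kw * value_per_kw_month * 12:,.0f}")
```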

4.8 Data Center Multi-Tier Design Overview


The multi-tier model is the most common model used in the enterprise today. This design consists primarily of web, application, and database server tiers running on various platforms including blade servers, one rack unit (1RU) servers, and mainframes.

4.8.1 Why Use the Three-Tier Data Center Design?
Why not connect servers directly to a distribution layer and avoid installing an access layer? The three-tier approach, consisting of the access, aggregation, and core layers, permits flexibility in the following areas:
- Layer 2 domain sizing: When there is a requirement to extend a VLAN from one switch to another, the domain size is determined at the distribution layer. If the access layer is absent, the Layer 2 domain must be configured across the core for the extension to occur. Extending Layer 2 through a core causes path blocking by spanning tree and carries the risk of uncontrollable broadcast issues related to extending Layer 2 domains, and should therefore be avoided.
- Service modules: An aggregation-plus-access layer solution enables services to be shared across the entire access layer of switches. This lowers TCO and reduces complexity by reducing the number of components to configure and manage. Consider future service capabilities such as Application-Oriented Networking (AON), ACE, and others.
- Mix of access layer models: The three-tier approach permits a mix of both Layer 2 and Layer 3 access models with 1RU and modular platforms, permitting a

more flexible solution and allowing application environments to be optimally positioned.
- NIC teaming and HA clustering support: Supporting NIC teaming with switch fault tolerance and high-availability clustering requires Layer 2 adjacency between NIC cards, resulting in Layer 2 VLAN extension between switches. This would also require extending the Layer 2 domain through the core, which is not recommended.

4.8.2 Why Deploy a Services Switch?
When should a services switch be deployed instead of simply placing service modules in the aggregation switch? Incorporating a services switch into the data center design is desirable for the following reasons:
- Large aggregation layer: If services are deployed in the aggregation layer, then as this layer scales it may become burdensome to continue to deploy services in every aggregation switch. The services switch allows services to be consolidated and applied to all the aggregation layer switches without the need to physically deploy service cards across the entire aggregation layer. Another benefit is that it allows the aggregation layer to scale to much larger port densities, since slots otherwise used by service modules can instead be populated with LAN interfaces.
- Mix of service modules and appliances: Data center operators may have numerous service modules and appliances to deploy in the data center. By using the service chassis model, all of the services can be deployed in a central fashion, allowing the entire data center to use them instead of deploying multiple appliances and modules across the facility.
- Operational or process simplification: Using the services switch design allows the core, aggregation, and access layers to be more tightly controlled from a process-change perspective. Security, load balancing, and other services

can be configured in a central fashion and then applied across the data center without the need to provide numerous access points for the people operating those services.
- Support for network virtualization: As the network outside the data center becomes more virtualized, it may be advantageous to have the services chassis become the point where features such as VRF-aware services are applied without impacting the overall traffic patterns in the data center.

4.8.3 Determining Maximum Servers

What is the maximum number of servers that should be placed on an access layer switch? What is the maximum number of servers per aggregation module? The answer is usually based on a combination of oversubscription, failure domain sizing, and port density, and no two data centers are alike when these aspects are combined. The right answer for a particular data center design can be determined by examining the following areas:
- Oversubscription: Applications require varying oversubscription levels. For example, the web servers in a multi-tier design can be optimized at a 15:1 ratio, application servers at 6:1, and database servers at 4:1. An oversubscription ratio model helps determine the maximum number of servers that should be placed on a particular access switch and whether the uplink should be Gigabit EtherChannel or 10GE (a small sketch of this arithmetic follows this section). It is important for the customer to determine what the oversubscription ratio should be for each application environment. The following are some of the many variables that must be considered when determining oversubscription:
  - NIC: interface speed, bus interface (PCI, PCI-X, PCI-E)
  - Server platform: single or dual processors, offload engines

  - Application characteristics: traffic flows, inter-process communications
  - Usage characteristics: number of clients, transaction rate, load balancing
- Failure domain sizing: This is a business decision and should be determined regardless of the level of resiliency designed into the network. This value is not derived from MTBF/MTTR figures and is not meant to be a reflection of the robustness of a particular solution. No network design should be considered immune to failure, because there are many uncontrollable circumstances to consider, including human error and natural events. The following areas of failure domain sizing should be considered:
  - Maximum number of servers per Layer 2 broadcast domain
  - Maximum number of servers per access switch (if single-homed)
  - Maximum number of servers per aggregation module
  - Maximum number of access switches per aggregation module

- Port density: The aggregation layer supports a finite number of 10GigE ports, which limits the number of access switches that can be supported. When a Catalyst 6500 modular access layer is used, thousands of servers can be supported on a single aggregation module pair. In contrast, if a 1RU Catalyst 4948 is used at the access layer, fewer servers are supported. Cisco recommends leaving space in the aggregation layer for growth or changes in design.

The data center, unlike other network areas, should be designed to have flexibility in terms of emerging services such as firewalls, SSL offload, server load balancing, AON, and future possibilities. These services will most likely

require slots in the aggregation layer, which would limit the amount of 10GigE port density available.
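A minimal sketch of the oversubscription arithmetic referenced above follows, using assumed NIC speeds and uplink capacities and the example ratios from this section (15:1 web, 6:1 application, 4:1 database). The values are illustrative, not a sizing recommendation.

```python
# Sketch: maximum servers per access switch for a target oversubscription
# ratio, given server NIC speed and aggregate uplink bandwidth (Gbps).
def max_servers(uplink_gbps: float, nic_gbps: float, ratio: float) -> int:
    """Oversubscription = (servers * nic_gbps) / uplink_gbps."""
    return int(ratio * uplink_gbps / nic_gbps)

uplinks = {"4x1GE EtherChannel": 4.0, "2x10GE": 20.0}
tiers = {"web (15:1)": 15.0, "application (6:1)": 6.0, "database (4:1)": 4.0}

for uplink_name, uplink_bw in uplinks.items():
    for tier_name, ratio in tiers.items():
        print(f"{uplink_name:>20} | {tier_name:<18} -> "
              f"{max_servers(uplink_bw, 1.0, ratio)} GbE-attached servers")
```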

4.8.4 Determining Maximum Number of VLANs

What is the maximum number of VLANs that can be supported in an aggregation module?

- Spanning tree processing: When a Layer 2 looped access topology is used (the most common case), the amount of spanning tree processing at the aggregation layer needs to be considered. There are specific watermarks for the maximum number of system-wide active logical instances and virtual port instances per line card that, if reached, can adversely affect convergence and system stability. These values are mostly influenced by the total number of access layer uplinks and the total number of VLANs. If a data-center-wide VLAN approach is used (no manual pruning on links), the watermark maximums can be reached fairly quickly.
- Default gateway redundancy protocol: The number of HSRP instances configured at the aggregation layer is usually equal to the number of VLANs. As Layer 2 adjacency requirements continue to gain importance in data center design, the maximum number of HSRP instances must be considered together with other CPU-driven features (such as GRE, SNMP, and others). Lab testing has shown that up to 500 HSRP instances can be supported in an aggregation module, but close attention must be paid to other CPU-driven features.
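The watermark concern can be approximated with a simple count. In the sketch below, the number of active STP logical ports at the aggregation layer is modeled as the VLANs carried per uplink multiplied by the number of access-layer uplinks, which is a common first-order approximation; the VLAN count, uplink count, and limit values are assumed placeholders for illustration, not platform-specific limits.

```python
# First-order check of spanning tree logical port count and HSRP instances
# at an aggregation module. All limits shown are assumed placeholders.
vlans = 120                 # VLANs defined in the aggregation module
access_uplinks = 40         # access-switch uplinks terminating here
pruned = False              # True if VLANs are manually pruned per uplink

vlans_per_uplink = 10 if pruned else vlans   # assume ~10 per uplink if pruned
stp_logical_ports = vlans_per_uplink * access_uplinks
hsrp_instances = vlans      # typically one HSRP group per VLAN

ASSUMED_STP_WATERMARK = 10000   # placeholder; check the platform's documented limits
ASSUMED_HSRP_LIMIT = 500        # upper bound cited in the text above

print(f"STP logical ports: {stp_logical_ports} "
      f"(watermark {ASSUMED_STP_WATERMARK}) -> "
      f"{'OK' if stp_logical_ports <= ASSUMED_STP_WATERMARK else 'over'}")
print(f"HSRP instances: {hsrp_instances} "
      f"(limit {ASSUMED_HSRP_LIMIT}) -> "
      f"{'OK' if hsrp_instances <= ASSUMED_HSRP_LIMIT else 'over'}")
```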

4.8.5 Importance of Team Planning

Considering the roles of different personnel in an IT organization shows that there is a growing need for team planning in data center design efforts. The following points illustrate some of the challenges the various groups in an IT organization face in supporting a business-ready data center environment:
- System administrators usually do not consider physical server placement or cabling to be an issue in providing application solutions. When the need arises for one server to be connected to the same VLAN as other servers, it is usually expected to simply happen, without thought or concern about possible implications. System administrators are expected to be business-ready and must be able to deploy new applications, or scale existing ones, in a timely fashion.
- Network/security administrators have traditionally complied with these requests by extending the VLAN across the Layer 2 looped topology and supporting the server deployment request. This is the flexibility of a Layer 2 looped access layer topology, but it is becoming more of a challenge now than it was in the past. Layer 2 domain diameters are getting larger, and the network administrator must now manage spanning tree virtual/logical port counts, manageability, and the failure exposure that comes with a large Layer 2 broadcast domain.
- Network designers are faced with imposing restrictions on server geography in an effort to contain spanning tree processing, and with changing design methods to account for Layer 2 domain sizing and maximum failure domain sizing.
- Facilities administrators are very busy trying to keep all of this new, dense hardware from literally burning up. They also see the additional cabling as very difficult, if not impossible, to install and support with current design methods. Air passages blocked by cable bulk can create serious cooling issues, and facilities teams are trying to find ways to route cool air into hot areas. This is driving facilities administrators to look for solutions that keep cabling to a minimum, such as

when using 1RU switches. They are also looking at ways to locate equipment so that it can be cooled properly.

These are all distinct but related issues that are growing in the enterprise data center and are creating the need for a more integrated team planning approach. If communication takes place at the start, many of the issues are addressed, expectations are set, and the requirements are understood across all groups.

5. Analysis
Data center architectures are evolving to meet the demands and complexities imposed by increasing business requirements to stay competitive and agile. Industry trends such as data center consolidation, server virtualization, advancements in processor technologies, increasing storage demands, rising data rates, and the desire to implement "green" initiatives are putting stress on current data center designs. As this document discusses, however, crucial innovations are emerging to address most of these concerns with an attractive return on investment (ROI). Future data center architectures will incorporate increasing adoption of 10 Gigabit Ethernet, new technologies such as Fibre Channel over Ethernet (FCoE), and increased interaction among virtualized environments. Data Centers are specialized environments that safeguard your company's most valuable equipment and intellectual property. A well-planned and effectively managed Data Center supports these operations and increases your company's productivity by providing reliable network availability and faster processing. In many ways your Data Center is the brain of your company. Your business' ability to

perceive the world (data connectivity), communicate (e-mail), remember information (data storage), and have new ideas (research and development) all rely upon it functioning properly. As businesses seek to transform their IT departments from support organizations into sources of productivity and revenue, it is more important than ever to design and manage these specialized environments correctly. A well-built Data Center does not just accommodate future growth and innovation; it acts as a catalyst for them. Companies that know their Data Center is robust, flexible, and productive can roll out new products, move forward with their business objectives, and react to changing business needs, all without concern over whether their server environment is capable of supporting new technologies, high-end servers, or greater connectivity requirements.

6. Findings, Inferences and Recommendations


Running out of data ports is perhaps the most common infrastructure shortcoming in server environments, especially in those whose rows contain infrastructure tailored to support a specific model of server. When the time comes to host different equipment, those cabinet locations must be retrofitted with different infrastructure. Fortunately, lack of connectivity is one of the easier issues to address, and it can be remediated in one of two ways. One option is to add structured cabling. As long as the installer is careful to work around existing servers and their connections, the upgrade can usually be completed without any downtime. This cabling needs to terminate somewhere in the Data Center, however, either at a network substation or at a main networking row. The added ports might require more space than the existing networking cabinets can provide, so be aware that more floor space might need to be allocated for them, which in turn reduces what is available to host servers.

A second option, particularly when the need is for copper connections, is to install one of two networking devices, a console server or a console switch, at the cabinet where more ports are needed. These devices can send multiple streams of information over one signal, a process known as multiplexing. For example, if you have several servers installed in a cabinet, instead of running a dozen patch cords from those devices to the Data Center's structured cabling under the floor, you run those patch cords to a console server or console switch and then, thanks to multiplexing, run just one patch cord from that device to the structured cabling. Installing these networking devices can significantly expand the capacity of your Data Center's existing structured cabling.
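The port savings from this approach are easy to quantify. The sketch below uses an assumed cabinet count, servers per cabinet, and console-device port count, none of which are taken from the Waters India layout; it simply compares the number of structured-cabling runs needed with and without console servers.

```python
# Sketch: structured-cabling runs saved by consolidating serial/console
# connections onto one console server per cabinet (assumed values).
cabinets = 20
servers_per_cabinet = 16
console_ports_per_device = 48    # one console server per cabinet assumed
uplinks_per_console_device = 1

assert servers_per_cabinet <= console_ports_per_device  # each device has room

direct_runs = cabinets * servers_per_cabinet
consolidated_runs = cabinets * uplinks_per_console_device
print(f"Direct patching to structured cabling: {direct_runs} runs")
print(f"Via console servers:                   {consolidated_runs} runs")
print(f"Ports freed in the patching field:     {direct_runs - consolidated_runs}")
```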

This second approach is best used when:

There is limited internal space within the Data Center's network cabinets. Installing these networking devices in key server cabinets can reduce the space needed for additional patching fields.

Infrastructure within the Data Center plenum is chaotic, and installing more structured cabling may either restrict airflow or pose a downtime risk. Increasing ports by way of these networking devices requires only a fraction of the structured cabling that would otherwise be needed.

The need for additional ports is temporary. Networking devices can be removed and reused more easily than structured cabling.

As Data Center cabinets fill up with servers, you might discover that the room's cooling infrastructure isn't up to the task of keeping the space cool. The overall ambient temperature of the server environment might become too warm, or hot spots

might develop in areas where servers are tightly packed or a large device emits a high amount of exhaust. Presumably, you have already used the Data Center's floor tiles to good advantage, placing perforated floor tiles so that cooling is directed at known hot spots and sealing unwanted openings to maintain air pressure. There are many techniques for improving cooling in a server environment. Here are five:

Relocate air handler temperature sensors: These devices are generally located at the cooling unit itself, where temperatures are often lower. Placing the sensors deeper within the room gives the air handlers more accurate readings of the Data Center's ambient temperature and can cause them to provide cooling for longer periods.

Install ducted returns: These draw away more of the Data Center's heated air, channeling it into each handler's normal cooling cycle, and can reduce temperatures in sections of the room by a few degrees.

Distribute servers: If tightly packed servers are causing hot spots, spreading that equipment out is a sure way to prevent them. This solution obviously has limited value if your Data Center also has space constraints. Even if you don't have the option of only partially filling server cabinets, at least try to locate devices strategically so that the highest heat-producers aren't clustered together. It is easier to deal with several warmer areas in a server environment than with one that is very hot.

Install self-cooling cabinets: Reinstall the Data Center's most prodigious heat-generating servers into cabinets that are cooled by fans or chilled liquid. This eliminates hot spots at their source and might lower the room's overall ambient temperature.

Install additional air handlers: Finally, if all else fails, you might need to put in another air handler to increase how much cold air is being pumped into the Data Center. Use the same approach that you would when designing the

room's cooling infrastructure from scratch: try to place the handler perpendicular to server rows and create a buffer area around it so that short cycling does not occur. When performing any work on your Data Center's cooling system that might require air handlers to be shut down, have multiple portable fans and spot coolers at the ready. Temperatures can rise quickly when air handlers are turned off, and you might need to prop open the Data Center doors and use fans to blow hot air out of the room. For major cooling system work, try to schedule the work for colder weather, such as at night, during the winter months, or both.

6.1 Paradigm Shifts
It is also possible that your Data Center has ample physical room and infrastructure available and yet still begins having problems hosting incoming equipment. This occurs when servers, networking devices, or other machines arrive that the server environment wasn't designed to accommodate. Maybe server manufacturers alter their designs, making machines that need more physical space or electrical power. Perhaps your company decides to pursue a different business goal, requiring equipment that your Data Center never had to host in years past. It could also be that technology changes, requiring new cabling media. Whatever the cause, this can be the hardest shortcoming to deal with in a server environment, because you might not be able to overcome it by simply adding a few circuit panels or running more structured cabling. The physical layout of Data Center rows might need to be changed, including the physical relocation of both servers and infrastructure components, all while the server environment remains online. If you are fortunate, or more accurately, if you anticipated the need for future change, you designed your Data Center infrastructure to be easily upgradeable. Flexible electrical conduits and lightly bundled structured cabling, with additional slack

provided, can enable you to reconfigure your under-floor infrastructure quickly and concentrate power and data connectivity where it is needed. Infrastructure components that enable you to use different media, such as multimedia boxes that can accommodate multiple connector types and electrical whips pre-wired to terminate in several types of receptacles, also make it easier to change elements of a server environment. If your Data Center possesses these types of infrastructure components, retrofitting the room might be as simple as having structured cabling and electrical conduits reterminated. More likely, however, you are going to have to make more dramatic and intrusive changes to the server environment. This might include rearranging server rows, removing and rerunning structured cabling, and either adding or relocating power distribution units or air handlers.

Here are several tips to follow when making significant infrastructure changes to your existing Data Center:

Upgrade the room in phases, say a couple of server rows at a time, rather than trying to overhaul it all at once. Retrofitting a live Data Center is like performing surgery on a conscious patient. Breaking the task down into segments reduces the effect of downtime and makes it less likely that something will go wrong on a large scale. If the work can be completed over a long period of time, you might even be able to coordinate infrastructure changes with server lifecycles, that is, as servers are decommissioned and replaced with newer models.

When retrofitting the Data Center requires physically moving equipment, take advantage of devices that have dual power supplies. Strategically shifting power plugs from an old power receptacle to a new one can enable you to migrate a server to a new part of the Data Center, or onto a different power source that won't be shut down, and so avoid downtime.

If work in the Data Center might produce debris or airborne particles, shut down the fire suppression system to avoid setting it off accidentally.

If new servers are installed in the Data Center while it is undergoing a retrofit, place them according to how the room is ultimately going to be designed. Arrange them to be part of the new layout, not as another piece of equipment that must be relocated later.

An example of a recent paradigm shift for Data Centers is the emergence of 1U servers. At the start of this century, server manufacturers began producing low-profile servers that were high performing and relatively inexpensive compared with earlier generations of devices. IT departments in many companies opted to replace older, larger systems with dozens of 1U servers. These are particularly popular for computing applications that can pool the power of multiple servers. For such applications, a cluster of 1U devices provides both flexibility and redundancy: flexibility, because exactly how many servers are dedicated to a task can be altered as needed, and redundancy, because even if one server fails, several others continue processing. Low-profile servers are also desirable for companies that lease Data Center space. Most hosting facilities charge based upon how much floor space a client occupies, so using smaller servers can reduce those costs. Despite their merits, 1U servers are difficult to host in many Data Centers. When clustered together, they are heavy, draw large amounts of power, produce a lot of heat, and require a high number of data connections in a very small space, which is not what most pre-existing server environments were designed and built to accommodate.

It isn't easy to monitor a distributed environment. Unchecked server growth fills data center space rapidly. Heterogeneous platforms and operating systems make provisioning and inventorying a long and laborious process. Power supply limitations can prevent cooling upgrades or curtail the addition of servers. It is difficult to know what IT resources are available, which makes it extremely difficult to plan effectively for future needs. Capacity planners therefore have two key questions to answer: How do we use the existing environment? And what is our total installed capacity? Most of the time there isn't an easy response, and the picture is muddled by the fact that the capacity planner is often forced to compare apples to oranges. The rising energy costs of running a data center are gaining more and more attention, as they are already in the range of $3.3 billion annually, according to IDC. As a result, the Environmental Protection Agency (EPA) and the Department of Energy are now creating standard ratings for energy efficiency benchmarks, forcing companies to be more conscious of their energy use and environmental impact. Many companies, however, are wary of such new regulations and standards, as they believe meeting them will mean incurring new costs.

This growing number of servers and data centers naturally causes an increase in power and energy consumption, quickly escalating the amount of resources needed and raising environmental concerns even further. Thirty percent of respondents said they could find an additional 20 percent of capacity. By having the proper tools in place, companies can keep their data center lean, mean, and green.

Sometimes the driving force behind the retrofit of a Data Center is that the room has become so disorganized and cluttered that it is problematic to install new equipment. Tangled patch cords, unlabeled structured cabling and electrical whips, and poor cooling distribution can all make a server environment vulnerable to downtime every

time a new device is installed. Before embarking upon a major construction project to add infrastructure or expand an overworked server environment, see if any of the following can remediate the problems and make the space more usable:

Use the right length of patch cord for the job: System administrators sometimes plug in servers and networking devices using whatever patch cords happen to be at hand rather than locating cables of the correct length. This results in 15 feet of cable being used to go 4 feet (4 meters of cable used to go 1 meter). The excess length is left to dangle from the device or patch panel it is plugged into, perhaps coiled with a tie wrap, perhaps not. Over time, this creates a spider web of cable that blocks access to servers, pre-installed cabling ports, and electrical receptacles. This tangled web not only reduces the usability of Data Center infrastructure, it also presents a snagging hazard that can cause accidental downtime. Replace overly long patch cords and power cables with ones that are the correct length, and remove cords and cables that aren't plugged in to functioning servers and are simply leftovers from decommissioned equipment. Do this within cabinets and under the raised floor. Many servers come standard with power cables that are 6 feet (1.8 meters) long. This is useful for reaching a power receptacle under a raised floor or in a ceiling-mounted raceway, but it is much longer than necessary if you are installing the device into a server cabinet with its own power strips. You can tie-wrap the excess length, but even then it still dangles somewhere within the server cabinet. To reduce this problem, stock power cables of the same type provided with the servers, but just 2 feet (61 centimeters) long. The shorter cables are much less likely to become tangled or snagged when someone is installing or removing equipment in a cabinet.

Add wire management: Even when Data Center users run the right length of patch cord, you end up with hanging cables if adequate wire management is not provided. Install new or larger cable management at cabinet locations where cable glut interferes with access to infrastructure or equipment. Together with using patch cords of the correct length, installing wire management can free up access to existing data ports and electrical receptacles. These steps also make troubleshooting easier, improve airflow around servers, and improve the overall appearance of the Data Center.

Make sure that people are using the infrastructure correctly: You can spend hundreds of thousands of dollars on structured cabling, but it is worthless if Data Center users string patch cords between server cabinets rather than use the infrastructure that is installed. Failing to use the infrastructure in a server environment properly leads to disorganization, makes troubleshooting difficult, and can create situations that are hazardous to both equipment and Data Center users. For example, imagine that someone installs a server into a cabinet within the Data Center and doesn't understand that each cabinet location is provided with dedicated circuits from two different power sources. Wanting redundant power for the server's dual power supplies, they plug one power cable into the power strip of their own server cabinet and then string the other power cable to the strip in an adjacent cabinet. This adds unnecessary electrical draw to the adjacent power strip, making it more susceptible to tripping a circuit. It also ties the two adjacent server cabinets together: if a time ever comes to relocate either cabinet, and whoever is moving it is unaware that a power cord has been strung between them, an accident might occur that could harm someone or damage a server.

Install power strips with known electrical ratings: Do you know how much equipment you can install in your server cabinets before you overload their power strips? If not, you are either underutilizing server cabinet space or risking downtime every time you plug in new equipment. Swap out any mystery power strips for ones whose amperage ratings are known. If possible, have the power strip rated for the same amperage as the circuit it is plugged into. This enables you to maximize existing electrical circuits and server cabinet space (a short load-check sketch follows this list).

Redeploy floor tiles: If cooling is a problem and your Data Center has a raised floor, check that floor tiles are deployed to best advantage. Close off unnecessary openings in the floor surface and open perforated tiles closest to highly populated cabinets. Close or relocate floor panels that are especially close to air handlers and might be causing short cycling. Make sure under-floor infrastructure isn't blocking air coming from the room's air handlers; if it is, try to shift the location of structured cabling and electrical conduits to enable better circulation.
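The power strip check mentioned above can be as simple as the sketch below. The breaker size, voltage, per-server draw, and the 80 percent continuous-load derating are stated as assumptions; confirm the actual ratings and the applicable electrical code for your installation.

```python
# Sketch: how many servers fit on a rack power strip before it is overloaded.
# Values are illustrative assumptions; verify against the actual strip rating.
breaker_amps = 20.0
voltage = 208.0
derating = 0.80                  # keep continuous load at or below 80% (assumed rule)
server_watts = 350.0             # assumed average draw per server

usable_watts = breaker_amps * voltage * derating
servers_supported = int(usable_watts // server_watts)
print(f"Usable capacity: {usable_watts:.0f} W")
print(f"Servers supported at {server_watts:.0f} W each: {servers_supported}")
```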

6.2 Below are the top ten Data Center Design Guidelines:
The following are the top ten guidelines, selected from the many described throughout this report.
1. Plan ahead. You never want to hear "Oops!" in your data center.
2. Keep it simple. Simple designs are easier to support, administer, and use. Set things up so that when a problem occurs, you can fix it quickly.
3. Be flexible. Technology changes. Upgrades happen.

4. Think modular. Look for modularity as you design. This will help keep things simple and flexible.
5. Use RLUs, not square feet. Move away from the concept of using square footage to determine capacity. Use RLUs to define capacity and make the data center scalable.
6. Worry about weight. Servers and storage equipment for data centers are getting denser and heavier every day. Make sure the load rating for all supporting structures, particularly for raised floors and ramps, is adequate for current and future loads.
7. Use aluminum tiles in the raised floor system. Cast aluminum tiles are strong and will handle increasing weight load requirements better than tiles made of other materials. Even the perforated and grated aluminum tiles maintain their strength and allow the passage of cold air to the machines.
8. Label everything, particularly cabling! It is easy to let this one slip when it seems as if there are better things to do. The time lost in labeling is time gained when you don't have to pull up the raised floor system to trace the end of a single cable. And you will have to trace bad cables!

9. Keep things covered, or bundled, and out of sight. If it can't be seen, it can't be messed with.
10. Hope for the best, plan for the worst. That way, you're never surprised.

When row lengths are three racks or less, the effectiveness of the cooling distribution suffers: short rows of racks mean more opportunity for hot and cold air streams to mix. For this reason, when a room has one dimension of less than 15-20 feet, it is more effective in cooling terms to have one long row rather than several very short rows. Use of best-practice air management, such as a strict hot-aisle/cold-aisle configuration, can double the computer server cooling capacity of a data center. Combined with an airside economizer, air management can reduce data center cooling costs by over 60%. Removing hot air immediately as it exits the equipment allows for higher capacity and much higher efficiency than mixing the hot exhaust air with the cooling air being drawn into the equipment. Equipment environmental temperature specifications refer primarily to the air being drawn in to cool the system.
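The connection between airflow, the supply-to-return temperature difference, and removable heat load can be made explicit with the standard sensible-heat relation for air at roughly typical conditions, Q (BTU/hr) ≈ 1.08 × CFM × ΔT(°F). The sketch below applies it with assumed airflow and temperature values to show why better separation of hot and cold air (a larger ΔT seen by the cooling unit) raises usable cooling capacity.

```python
# Sensible-heat sketch: heat removed (BTU/hr) ~= 1.08 * CFM * delta_T_F
# for air at roughly standard conditions. Airflow and temperature values
# below are illustrative assumptions.
def heat_removed_kw(cfm: float, delta_t_f: float) -> float:
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412.0          # convert BTU/hr to kW

crac_airflow_cfm = 12000.0
for delta_t in (10.0, 20.0):            # poor vs. good hot/cold separation
    print(f"delta-T {delta_t:.0f} F -> "
          f"{heat_removed_kw(crac_airflow_cfm, delta_t):.0f} kW removed")
```

Doubling the temperature difference at the same airflow roughly doubles the heat that can be removed, which is the arithmetic behind the "can double the cooling capacity" claim above.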

A higher difference between the return air and supply air temperatures increases the maximum load density possible in the space and can help reduce the size of the cooling equipment required, particularly when lower-cost, mass-produced packaged air handling units are used. Poor airflow management reduces both the efficiency and the capacity of computer room cooling equipment. Examples of common problems that can decrease a Computer Room Air Conditioner (CRAC) unit's usable capacity by 50% or more are leaking floor tiles and cable openings, poorly placed overhead supplies, under-floor plenum obstructions, and inappropriately oriented rack exhausts. Specify and utilize high-efficiency power supplies in Information Technology (IT) computing equipment. High-efficiency supplies are commercially available and pay for themselves in very short timeframes when the total cost of ownership is evaluated. For a modern, heavily loaded installation with 100 racks, the use of high-efficiency power supplies alone could save $270,000-$570,000 per year and decrease the square footage required for the IT equipment by allowing more servers to be packed into a single rack footprint before heat dissipation limits are encountered. Cooling load and redundant power requirements related to IT equipment can be reduced by 10-20%, allowing more computing equipment density without additional support equipment (UPSs, cooling, generators, etc.). In new construction, downsizing the mechanical cooling equipment and/or electrical supply can significantly reduce first cost and shrink the mechanical and electrical footprint. When ordering servers, specify power supplies that meet at least the minimum efficiency recommendations of the SSI Initiative (whose members include Dell, Intel, and IBM).
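The savings figures quoted above depend on load, efficiency gain, and electricity price, all of which vary by site. The sketch below shows the underlying arithmetic with assumed values for rack count, per-rack load, baseline and improved PSU efficiencies, and electricity tariff; it is not a restatement of the cited study's inputs.

```python
# Illustrative annual savings from higher-efficiency server power supplies.
racks = 100
it_load_kw_per_rack = 5.0            # assumed average load at the PSU output
baseline_eff = 0.70                  # assumed legacy PSU efficiency
improved_eff = 0.85                  # assumed high-efficiency PSU
price_per_kwh = 0.10                 # assumed electricity tariff (USD)
hours_per_year = 8760

it_load_kw = racks * it_load_kw_per_rack
input_baseline = it_load_kw / baseline_eff
input_improved = it_load_kw / improved_eff
saved_kw = input_baseline - input_improved
print(f"Power saved at the plug: {saved_kw:.0f} kW")
print(f"Annual energy cost saved: ${saved_kw * hours_per_year * price_per_kwh:,.0f}")
print("Note: cooling energy savings (roughly proportional) would add to this.")
```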

When appropriate, limit power supply oversizing to ensure higher and more efficient load factors. Select the most efficient UPS system that meets the data center's needs. Among double-conversion systems (the most commonly used type in data centers), UPS efficiency ranges from 86% to 95%. Simply selecting a UPS model with 5% higher efficiency can save over $38,000 per year in a 15,000-square-foot data center, with no discernible impact on the data center's operation beyond the energy savings. In addition, mechanical cooling energy use and equipment cost can be reduced. For battery-based UPS systems, use a design approach that keeps the UPS load factor as high as possible; this usually requires using multiple smaller units. Redundancy in particular requires design attention: operating a single large UPS in parallel with a 100%-capacity identical redundant UPS unit (n+1 design redundancy) results in very low load factor operation, at best no more than 50% at full design build-out. Evaluate the need for power conditioning. Line-reactive systems often provide enough power conditioning for servers, and some traditional double-conversion UPS systems (which offer the highest degree of power conditioning) can operate in a more efficient line-conditioning mode, usually advertised as "economy" or "eco" mode.

6.3 Below are recommendations to build a Tier 3 data center:
- At least two access providers should serve the data center, with the providers' cabling from their central offices or POPs (points of presence) separated by at least 66 feet along the route.
- Mantraps at all entrances to the computer room should prevent more than one person from entering on a single credential.

- A signal reference grid (SRG) and a lightning protection system should be provided.
- If the HVAC system's air conditioning units are served by a waterside heat rejection system, the components of that system should be sized to maintain design conditions with one electrical switchboard removed from service.
- The piping system or systems should be dual path. Two independent sets of pipes should be used for data centers using chilled water, with sufficient capacity and distribution to carry the full load on one path while maintenance or testing is performed on the other. Errors during operation or other unplanned activities will still cause disruption.
- Data center design should follow industry standards for best practices. Industry guidance is on the way in the form of an emerging industry standard for data centers. This document, to be published as Telecommunications Industry Association ANSI/TIA/EIA-942, Telecommunications Infrastructure Standard for Data Centers, lists requirements and provides recommendations for data center design and construction. TIA-942 helps consultants and end users design an infrastructure that will last for years without forklift renovations. The standard also gives information on cooling, power, room sizing, and other topics useful in data center design.

Below are recommendations that Cisco, the global networking leader, has given to follow:
- Realize the full potential of your data center investment by improving your network performance, availability, security, and QoS.
- Adopt and combine technologies to create a data center network that continuously evolves to sustain a competitive business.
- Proactively address potential data center issues before they affect operations.
- Achieve and maintain a comprehensive, end-to-end data center optimization solution.
- Make sure that the data center plays a strategic role in your efforts to

protect, optimize, and grow your business.
- Achieve operational excellence by providing informal training that prepares your staff to knowledgeably manage data center technologies.

7. Conclusion
In many ways your Data Center is the brain of your company. Your business' ability to perceive the world (data connectivity), communicate (e-mail), remember information (data storage), and have new ideas (research and development) all rely upon it functioning properly. A well-built Data Center does not just accommodate future growth and innovation; it acts as a catalyst for them. Companies that know their Data Center is robust, flexible, and productive can roll out new products, move forward with their business objectives, and react to changing business needs, all without concern over whether their server environment is capable of supporting new technologies, high-end servers, or greater connectivity requirements. It is safe to assume that routers, switches, servers, and data storage devices will advance and change in the coming years. They will feature more of something than they do now, and it will be your Data Center's job to support it. Maybe they will get bigger and heavier, requiring more power and floor space. Maybe they will get smaller, requiring more data connections and cooling as they are packed tighter into the Data Center. They might even incorporate different technology than today's machines, requiring alternate infrastructure. The better your server environment responds to change, the more valuable and cost-effective it is for your business. New equipment can be deployed quicker and easier, with minimal cost or disruption to the business. Data Centers are not static, so their infrastructure should not be either. Design for flexibility. Build infrastructure systems using components that are easily changed or

moved. This means installing patch panels that can house an array of connector types and pre-wiring electrical conduits so they can accommodate various electrical plugs by simply swapping their receptacles. It also means avoiding items that inhibit infrastructure mobility: deploy fixed cable trays sparingly, and stay away from proprietary solutions that handcuff you to a single brand or product. Make the Data Center a consistent environment. This provides stability for the servers and networking equipment it houses and increases its usability. The room's modularity provides a good foundation for this, because once a user understands how infrastructure is configured at one cabinet location, he or she will understand it for the entire room. Build on this by implementing uniform labeling practices, consistent supplies, and standard procedures for the room. If your company has multiple server environments, design them with a similar look and feel. Even if one Data Center requires infrastructure completely different from another, use identical signage, color-coding, and supplies to make them consistent. Standardization makes troubleshooting easier and ensures quality control. Above all, your Data Center has to be reliable. Its overarching reason for existence is safeguarding your company's most critical equipment and applications. Regardless of what catastrophes happen outside, whether inclement weather, utility failures, natural disasters, or something else unforeseen, you want your Data Center up and running so your business continues to operate. To ensure this, your Data Center infrastructure must have depth: standby power supplies to take over when commercial electricity fails, and redundant network stations to handle communication needs if a networking device malfunctions, for example. Primary systems are not the only ones susceptible to failure, so your Data Center's backup devices might need backups of their own. Additionally, the infrastructure must be configured so there is no Achilles' heel, no single component or feature that makes it vulnerable. It does little good to have

multiple standby power systems if they are all wired through a single circuit, or to have redundant data connections if their cable runs all enter the building at one location. In both examples, a malfunction at a single point can bring the entire Data Center offline. As IT organizations look for ways to increase the cost-effectiveness and agility of their data centers, they need to understand the current state of their architecture and determine which changes can best help them achieve their business and IT goals.

- Ensure more efficient use of data center facilities.
- Gain operational efficiencies and cost savings through standardization and asset consolidation.
- Increase asset utilization to increase flexibility and reduce costs.
- Reduce energy costs.
- Extend the working life of capital assets.
- Optimize use of space, power, and cooling infrastructure.
- Avoid or defer construction of new facilities.
- Reduce the business impact of localized and large-footprint disaster events.
- Improve productivity through enhanced application and data availability.
- Meet corporate and regulatory compliance needs.
- Improve data security and compliance.
- Extend desktop hardware lifecycles.
- Extend business continuity and disaster recovery to enterprise desktops.

Data Center solutions enable IT organizations to meet business continuance and corporate compliance objectives, and provide benefits that include:

- Reducing the business impact of localized and large-footprint disaster events.
- Improving productivity through enhanced application and data availability.
- Meeting corporate and regulatory compliance needs and improving data security.

8. An overview of the organization:


Waters India is a global leader in delivering innovative communications, information and entertainment. We offer voice, data and video products and services over intelligent wireless, broadband and global IP networks that meet customers' growing demand for speed, mobility, security and control. As a committed corporate citizen, we use our advanced communications services to address important issues confronting our society.

8.1 Key Industry Verticals

- Financial Services
- Manufacturing
- Telecom
- Product Engineering
- Life Sciences
- Independent Software Vendors (ISV)
- Retail
- Media & Entertainment
- Energy & Utilities
- Logistics & Transportation

8.2 Key Services
In addition to these, Waters India serves its customers through the following horizontal business units across various industries:

- Telecommunication
- Wireless Services
- Application Development
- Application Management
- IT Consulting
- Infrastructure Management

- Enterprise Application Services
- Customer Intelligence Services & Business Process Outsourcing (CIS & BPO)
- BI & DW
- Enterprise Integration
- Process Consulting
- Customer Interaction Services & Business Process Outsourcing
- Engineering Services
- IT Governance
- Business Process Management

Global Locations

1. Americas: Brazil, Canada, Mexico, U.S.A.
2. Europe, Middle East & Africa (EMEA): Finland, Germany, South Africa, Sweden, The Netherlands, UAE, United Kingdom
3. Asia Pacific (APAC), all major locations in Asia: Australia, India, Japan, Singapore

9. Waters India Data Center Characteristics
A data center is designed for computers, not people. As a result, the Waters India data center has no windows and minimal circulation of fresh air. The Waters India data center occupies about 10,000 sq. ft. dedicated to housing servers, storage devices, and network equipment. The data center room is filled with rows of IT equipment racks that contain servers, storage devices, and network equipment. The data center includes power delivery systems that provide backup power, regulate voltage, and make the necessary alternating current/direct current (AC/DC) conversions. Before reaching the IT equipment rack, electricity is first supplied to an uninterruptible power supply (UPS) unit. The UPS acts as a battery backup to prevent the IT equipment from experiencing power disruptions, which could cause serious business disruption or data loss. In the UPS, the electricity is converted from AC to DC to charge the batteries; power from the batteries is then reconverted from DC to AC before leaving the UPS. Power leaving the UPS enters a power distribution unit (PDU), which sends power directly to the IT equipment in the racks. Electricity consumed in this power delivery chain accounts for a substantial portion of the overall building load. Electricity entering servers is converted from AC to low-voltage DC power in the server power supply unit (PSU). The low-voltage DC power is used by the server's internal components, such as the central processing unit (CPU), memory, disk drives, chipset, and fans. The DC voltage serving the CPU is adjusted by load-specific voltage regulators (VRs) before reaching the CPU. Electricity is also routed to storage devices and network equipment, which handle the storage and transmission of data.
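Each conversion stage in that chain (UPS double conversion, PDU distribution, server PSU, and on-board voltage regulators) dissipates some power. The sketch below multiplies assumed per-stage efficiencies to estimate how much utility power is needed per watt delivered to the server's internal loads; the efficiency values are plausible-looking assumptions for illustration, not measurements from the Waters India facility.

```python
# Sketch: end-to-end power delivery efficiency from utility feed to the
# server's internal loads. Stage efficiencies are assumed for illustration.
stages = {
    "UPS (double conversion)": 0.92,
    "PDU / distribution":      0.97,
    "Server PSU (AC->DC)":     0.80,
    "Voltage regulators":      0.85,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"{name:<25} {eff:.0%}")

print(f"\nOverall delivery efficiency: {overall:.0%}")
print(f"Utility watts per 1 W of useful load: {1 / overall:.2f}")
```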

The continuous operation of IT equipment and power delivery systems generates a significant amount of heat that must be removed from the data center for the equipment to operate properly. Cooling in the data center is provided by computer room air conditioning (CRAC) units, in which the entire air handling unit (AHU) is situated on the data center floor. The AHU contains fans, filters, and cooling coils and is responsible for conditioning and distributing air throughout the data center. In most cases, air enters the top of the CRAC unit and is conditioned as it passes across coils containing chilled water pumped from a chiller located outside the data center room. The conditioned air is then supplied to the IT equipment (primarily servers) through a raised floor plenum. Cold air passes through perforated floor tiles, and fans within the servers pull the air through them. The warmed air stratifies toward the ceiling and eventually makes its way back to the CRAC unit intake. Most air circulation is internal to the data center zone. The majority of data centers are designed so that only a small amount of outside air enters. Some data centers provide no ductwork for outside air to enter the data center area directly; instead, outside air is provided only by infiltration from adjacent zones, such as office space. Other data centers admit a relatively small percentage of outside air to keep the zone positively pressurized. Data centers use a significant amount of energy to supply three key components: IT equipment, cooling, and power delivery. These energy needs can be better understood by examining the electric power needed for typical data center equipment and the energy required to remove heat from the data center.

Table - Component Peak Power Consumption for a Typical Server

Component          Peak Power (Watts)
CPU                80
Memory             36
Disks              12
Peripheral slots   50
Motherboard        25
Fan                10
PSU losses         38
Total              251
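The figures in the table can be cross-checked and scaled with a few lines of Python; the 20-server rack used for scaling is a hypothetical example, not the actual Waters India rack population.

    # Cross-check of the per-component peak power figures in the table above.
    peak_power_w = {
        "CPU": 80, "Memory": 36, "Disks": 12, "Peripheral slots": 50,
        "Motherboard": 25, "Fan": 10, "PSU losses": 38,
    }
    total_w = sum(peak_power_w.values())
    print(f"Total peak power per server: {total_w} W")   # 251 W, matching the table

    # Scaling up to a (hypothetical) rack of 20 such servers:
    servers_per_rack = 20
    print(f"Peak rack load: {total_w * servers_per_rack / 1000:.2f} kW")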

9.1 Data Centre Design & Layout


The main items for consideration in data centre design fall into four main groups:
Structural
Supporting Environment
Cabling Infrastructure
Security and Monitoring

Other sub-items that play a vital role in data centre design cannot be ignored either:
Air conditioning
UPS
Generators
Gas fire suppression
Air sampling systems
Raised floors
Infrastructure cabling
Environmental monitoring
Security products
Racks, etc.

9.2 Data Centre Design Structural

The structure and the location of the building are important considerations in data centre design. Recent standards cover both the structural elements of the room or building and its geographic location, and address many different aspects of the premises.

9.2.1 These include:
Location, both geographical and physical within the building, i.e. within a flood plain (building) or below water level (building or room within a building)
Construction methods, i.e. the materials used in the construction of the building, such as timber frame or metal and concrete
Access for deliveries
Proximity to hazards: airports, oil refineries, train lines, motorways, power stations, chemical works, embassies, military locations, etc.

All the elements necessary to ensure a working and standards-compliant environment must also be installed, including:
Fire rating of walls
Floor loading
Floor void
Floor grid configuration
Ceiling height
Door sizes, from the loading bay to the data centre
Steps or ramps
Lifts

All of these items, and more, must be taken into consideration at the design stage to ensure the room meets all necessary requirements.

9.3 Data Centre Design Supporting Environment

Any data centre, computer room or server room is only as good as the supporting services which maintain the environment within the specifications laid down by the equipment manufacturers. This environment must be maintained 24/7, or the equipment within the room could fail, with potentially catastrophic consequences for the business it supports. It is therefore essential that the data centre is designed so that redundancy and resiliency of equipment are catered for in the initial plan and not added as an afterthought, which would compromise space and other services. The following elements must be considered to ensure a working and standards-compliant environment:
Electrical supply
Lighting
UPS
Generator
Air conditioning
Humidity control
Fire suppression

There can be many variations to each of these services, and the key to a successful implementation is that all services are co-ordinated to co-exist in what can sometimes be very confined spaces. The appointment of a properly qualified project manager can greatly improve the overall build time and smooth implementation of any data centre project. Data Centre Standards Ltd has been involved in the building and refurbishment of many data centres in the UK and Europe and can provide essential first-hand experience and guidance.

As per these guidelines, Waters India has placed a precision temperature monitoring and control device in the data center. All data center devices are fed by two UPS units dedicated to the data center, each rated at 160 KVA, to provide high uptime, and two different power supply sources are provided in the data center for redundancy. Three central generators (1 x 3500 KVA and 2 x 1500 KVA) are installed in the facility to cater for the power requirement when direct line power is unavailable. For the data center and the other hub rooms, separate split AC units have been provisioned in addition to the central AC duct feeding all locations, again for redundancy. Separate fire fighting devices are placed in all hub rooms and in the data center, in addition to the smoke/fire detection and prevention system.
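A simple capacity check of this redundant power arrangement is sketched below. The UPS and generator ratings are taken from the description above, while the data center load and the power factor are assumed values used only for illustration.

    # Capacity check for the redundant power described above.
    ups_units_kva = [160, 160]           # two dedicated UPS units (from the report)
    generators_kva = [3500, 1500, 1500]  # central generators (from the report)
    assumed_power_factor = 0.9           # assumption
    assumed_dc_load_kw = 110.0           # hypothetical data center load

    single_ups_kw = ups_units_kva[0] * assumed_power_factor
    if assumed_dc_load_kw <= single_ups_kw:
        print("Load can be carried by one UPS alone, so the 2N arrangement holds.")
    else:
        print("Load exceeds a single UPS; redundancy is compromised.")

    total_generator_kw = sum(generators_kva) * assumed_power_factor
    print(f"One UPS supports up to ~{single_ups_kw:.0f} kW")
    print(f"Generators can supply up to ~{total_generator_kw:.0f} kW for the whole facility")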

9.4 Raised Floor

Waters India uses the raised floor concept inside the data center. A raised floor is an option with very practical benefits: it provides flexibility in electrical and network cabling and in air conditioning. A raised floor is not the only solution; power and network poles can be located on the floor, and air conditioning can be delivered through ducts in the ceiling. Building a data center without a raised floor can address certain requirements in ISP/CoLo locations. Wire fencing can be installed to create cages that can be rented out. With no raised floor, these cages can go floor to ceiling, which prevents people from crawling beneath the floor to gain unauthorized access to cages rented by other businesses. Another problem this eliminates in an ISP/CoLo situation is the loss of cooling to one cage because a cage closer to the HVAC unit has too many open tiles that are decreasing subfloor pressure. However, some ISP/CoLo locations have built

facilities with raised floor environments because the benefits of a raised floor have outweighed the potential problems listed above. Drawbacks of the no-raised-floor approach are very inefficient cooling that cannot easily be rerouted to other areas, as well as the problems associated with exposed power and network cabling. A raised floor is a more versatile solution. The Waters India data center uses a raised floor system in which supply air comes from tiles in the cold aisle. A hot aisle/cold aisle configuration is created when the equipment racks and the cooling system's air supply and return are arranged to prevent mixing of the hot rack exhaust air and the cool supply air drawn into the racks. All equipment is installed in the racks to achieve a front-to-back airflow pattern that draws conditioned air in from the cold aisles located in front of the equipment. The rows of racks are placed back-to-back, and holes through the rack (vacant equipment slots) are blocked off on the intake side to create barriers that reduce recirculation, as shown in figure 11.4 below. With proper isolation, the temperature of the hot aisle no longer impacts the temperature of the racks or the reliable operation of the data center; the hot aisle becomes a heat exhaust. The HVAC system is configured to supply cold air exclusively to the cold aisles and pull return air only from the hot aisles.

Figure 11.4

The hot rack exhaust air is not mixed with the cooling supply air and can therefore be returned directly to the air handler through various collection schemes, returning air at a higher temperature, often 85°F or higher. The higher return temperature extends economization hours significantly and/or allows for a control algorithm that reduces supply air volume, saving fan power. In addition to energy savings, higher equipment power densities are also better supported by this configuration. Under-floor distribution systems should have supply tiles in front of the racks. Open tiles may be provided underneath the racks, serving air directly into the equipment; however, it is unlikely that supply into the bottom of a rack alone will adequately cool equipment at the top of the rack without careful rack design. For proper ventilation Waters India bought specially designed racks, so that the ideal air management system can duct cooling air directly to the intake side of the rack and draw hot air from the exhaust side without diffusing it through the data center room space at all.
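The airflow needed to carry a given rack load at a given cold-aisle to hot-aisle temperature difference can be estimated with the common approximation CFM ~ 3.16 * watts / deltaT(F). The rack load and supply temperature below are assumptions for illustration; only the 85°F return figure comes from the text above.

    # Rough airflow estimate for a hot aisle / cold aisle layout.
    rack_load_w = 5000.0     # hypothetical heat load of one rack
    supply_temp_f = 65.0     # cold aisle supply temperature (assumed)
    return_temp_f = 85.0     # hot aisle return temperature (as cited above)

    delta_t = return_temp_f - supply_temp_f
    required_cfm = 3.16 * rack_load_w / delta_t   # common rule-of-thumb conversion
    print(f"Approximate airflow needed: {required_cfm:.0f} CFM per rack")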

9.5 Data Centre Design Cabling Infrastructure A data centre, computer room or server room by its very nature will house an abundance of interconnecting cables. The cabling industry is now moving at such a pace that it is important to select a cabling infrastructure that will cope with your

day one requirements and any new technologies which may be used within the data centre in the coming years. Waters India uses CAT6 UTP copper cabling (solid cables) in office buildings, the Data Center, and other installations to provide connectivity. Copper is a reliable medium for transmitting information over shorter distances; its performance is only guaranteed up to 109.4 yards (100 meters) between devices, and solid cable provides better performance and is less susceptible to interference, making it the preferred choice for use in a server environment. For connectivity longer than 100 meters, Waters India uses fiber cabling, which can handle connections over a much greater distance than copper cabling, 50 miles (80.5 kilometers) or more in some configurations. Because light is used to transmit the signal, the upper limit of how far a signal can travel along a fiber cable is related not only to the properties of the cable but also to the capabilities and relative location of the transmitters. Besides distance, fiber cabling has several other advantages over copper:

Fiber provides faster connection speeds.
Fiber isn't prone to electrical interference or vibration.
Fiber is thinner and lighter weight, so more cabling can fit into the same size bundle or into limited spaces.
Signal loss over distance is less along optical fiber than copper wire.
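The media choice described above can be summarised in a few lines. The helper function below is a hypothetical illustration of the rule of thumb (copper up to 100 m, fiber beyond, and fiber for backbone links regardless of distance), not part of any Waters India tooling.

    # Sketch of the cabling choice described above.
    def pick_media(distance_m, backbone=False):
        """Return the cabling medium suggested by the rules of thumb above."""
        if backbone or distance_m > 100:
            return "fiber"
        return "CAT6 UTP copper"

    print(pick_media(45))                    # copper for a short horizontal run
    print(pick_media(250))                   # fiber beyond the 100 m copper limit
    print(pick_media(30, backbone=True))     # fiber for switch-to-switch backbone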

Waters India uses multimode fiber to provide connectivity over moderate distances, such as in most Data Center environments or among rooms within a single building. A light-emitting diode (LED) is its standard light source. The term multimode refers to the several rays of light that travel down the fiber. Waters India uses fiber connectivity even for distances of less than 100 meters where higher data rates are needed, such as switch-to-switch (backbone) connectivity.

9.6 Aisles and Other Necessary Open Space

Aisle space should allow for unobstructed passage and for the replacement of racks within a row without colliding with other racks. The optimal space would allow for the turn radius required to roll the racks in and out of the row. Also, rows should not be continuous: unbroken rows make passage from aisle to aisle, or from the front of a rack to the back, very time consuming, and such clear passage is particularly important in emergency situations. The general rule of thumb for free floor space is between 40 and 50 percent of the square footage.

Figure 9.6 Proper Aisle Space and Non-Continuous Rows
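The 40-50 percent rule of thumb mentioned above can be checked quickly once the room area and the equipment footprint are known. The figures below are assumptions chosen only to show the arithmetic.

    # Check of the 40-50 percent free floor space rule of thumb.
    room_area_sqft = 10000.0            # data center area (figure used earlier)
    equipment_footprint_sqft = 5500.0   # total area taken by racks and gear (assumed)

    free_fraction = (room_area_sqft - equipment_footprint_sqft) / room_area_sqft
    print(f"Free floor space: {free_fraction:.0%}")
    if 0.40 <= free_fraction <= 0.50:
        print("Within the 40-50% rule of thumb.")
    else:
        print("Outside the 40-50% rule of thumb; revisit the layout.")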

How aisle space is designed also depends upon air flow requirements and RLUs. When designing the center, remember that the rows of equipment should run parallel to the air handlers with little or no obstruction to the air flow. This allows cold air to move to the machines that need it and allows the unobstructed return of heated air back to the air conditioners. Be sure to consider adequate aisle space in the initial planning stages. In a walls-within-walls construction, where the data center is sectioned off within a building, aisle space can get tight, particularly around the perimeter.

9.7 Network Redundancy

Just as Waters India has built electrical redundancy by running electrical conduits to servers from more than one power distribution unit, network redundancy is provided by structured cabling running from more than one networking device. Whereas electrical conduits are hardwired into the PDUs, however, structured cabling is standalone infrastructure that any networking device can be plugged into. Each cable essentially provides its own path, and additional networking devices are all that is needed to make those paths redundant. As long as abundant structured cabling is provided throughout the Data Center, redundancy can be increased as much as needed by simply installing more networking devices in the network row and at the network substations. To provide a minimum level of redundancy over the entire Data Center, install a second set of networking devices in the network row and patch them to key components at the network substations. To provide an even greater level of redundancy, double the networking devices at each network substation, as done by WATERS INDIA. Providing this redundancy may or may not require additional cabling infrastructure. It depends upon how many network connections a given server requires, and how many

servers are patched into a Data Center's network devices. Most servers require a minimum of two connections: one for a primary Ethernet connection and another for either a console connection or a secondary Ethernet connection.
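The cabling volume implied by this scheme can be estimated as below; the server count, spare-port allowance and 48-port switch size are assumptions used only to show the arithmetic.

    # Counting the structured cabling runs implied by the redundancy scheme above.
    servers = 120                # hypothetical server count
    nics_per_server = 2          # primary Ethernet + console or secondary Ethernet
    spare_ports_ratio = 0.2      # assumed growth/spare allowance

    cable_runs = servers * nics_per_server
    ports_required = int(cable_runs * (1 + spare_ports_ratio))
    switches_48_port = -(-ports_required // 48)   # ceiling division

    print(f"{cable_runs} cable runs, {ports_required} access ports, "
          f"{switches_48_port} x 48-port switches")
    print(f"With networking devices doubled for redundancy: {switches_48_port * 2} switches")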

9.8 Data Centre Design Security and Monitoring

Security and monitoring are two key factors in ensuring that a data centre, computer room or server room will run undisturbed. Good security ensures that access is controlled and, in so doing, prevents interference with important and sensitive equipment by untrained and unauthorized personnel. Monitoring of the environment can also help to prevent incidents which could otherwise disrupt or destroy equipment within the data centre. There are various types of security and monitoring equipment available, much of it designed specifically for the data centre environment. Waters India has installed programmable access controllers on each door within the facility, with appropriate access rights assigned to its employees. These access controllers are centrally managed by the access control team. For Data center access, Waters India has implemented two levels of security: card-level security (employee identity) and PIN-code-level security, as shown in the figure below, and Data center access is granted only to data center team members. In addition, Waters India monitors its campus facility with digital cameras placed throughout the facility. All points of access should be controlled by checkpoints and coded card readers. Figure 9.8 below shows one of the access controller types installed at Waters India.

Figure 9.8 Cipher Lock at Restricted Access Doorways

9.9 Data Center Network Architecture and Engineering

The Data Center network architecture is a key component of the Service Oriented Infrastructure. How the network infrastructure is designed and implemented plays a key role in the level of service availability and survivability the IT resources can offer. In many cases, the network is grown organically with little consideration for future growth or physical/logical separation requirements. As the application services infrastructure expands, it becomes more of a challenge to maintain the network and to plan purposefully for its performance and availability. Additionally, it is paramount to understand not only the data center WAN and LAN infrastructure, but also the remote site WAN infrastructure, in order to match expected application performance and availability characteristics on an end-to-end basis. Waters India follows a standard, proven approach that includes the following activities: identification of touch points within the current environment, creation of configuration standards, development of test plans (network and end user), development of integration and migration plans, and a process for turnover to the operational environment. As-built documentation and knowledge transfer are key deliverables. Design areas of focus are LAN Architectures (IP Routing Architecture, Layer 2 Switching), WAN Architectures (MPLS VPN, IPSEC VPN, and traditional VPN), WLAN Architectures, Optical Architectures

(DWDM, SONET), Content Delivery Architectures, IPv6 and QoS Architectures. Waters India uses most of the technologies and topologies mentioned above to run the data center; the details are given in the sections below. Waters India's new facility at Begumpet, Hyderabad caters for corporate staff working on different processes. This new facility has connectivity to Waters India's Center-I facility in addition to two Data Centers in the US (Billerica and San Jose) and one in the UK. The Data Centers are connected over an MPLS backbone from a Service Provider. The MPLS SP has placed dual CPE at the new facility and at the other Data Centers, and the CPEs have dual local loops to different PEs for redundancy. Center-I is connected to the new facility via Ethernet links: Waters India has procured 3x10 Mbps links between Center-I and the new facility, with Layer 3 connectivity between the core switches at both locations. This solution enables Waters India to route voice and data between the new facility and the Center-I facility. Waters India has also procured links from ISPs to provide internet access to its users at the facility, and wants load balancing and redundancy across the multiple ISP links, for which it has procured AS number space. Waters India has also set up VPN capability for its remote users and telecommuters. Waters India has designed a Tier 3 data center that is concurrently maintainable because the redundant components are not on a single distribution path (as in Tier 2). The data center can have infrastructure activity going on without disrupting the computer hardware operation in any way. This data center is manned 24 hours a day for maintenance, planned activities, repair and replacement of components, addition or removal of capacity components, testing, etc. The Waters India data center is designed to be upgradable to Tier 4 when the business case justifies the cost of the upgrade (additional protection). This data center has sufficient

capacity and distribution available to simultaneously carry the load on one path while performing maintenance or testing on the other path.

Network Topology

The figure below presents the voice and data network architecture at Waters India:

10 Solution resiliency
10.1 LAN resiliency

For the sake of understanding and explanation, the Waters India network setup has been divided into two areas:
Internet / Public Area, for accessing the public network
Intranet / Corporate Area, which connects to Waters India's international Data Centres in the US & UK

In the first phase, deployment has been completed fully on the first floor; the second and ground floors each have one Catalyst 2960 48-port switch, as depicted in the network LAN figure below.

10.2 Intranet Area network layout

10.3 Intranet Area

The new facility has three floors; however, in the first phase of project implementation only the first floor is covered. Waters India has procured two Catalyst 6509 switches with Sup-720 engines for its LAN campus switching network. The campus network is deployed as an industry-standard network. The core switches in the Waters India campus network provide core and distribution level access, whereas the edge switches, which include Catalyst 3750 and Catalyst 2960 switches, serve access layer connectivity. There are three hub rooms on each floor which serve local area connectivity for that floor. The server room is separate from the hub rooms and accommodates all core LAN / WAN / Voice equipment. Both core switches are installed in the server room with redundant power supplies; however, there is no Sup-level redundancy in either Catalyst 6509 chassis. For better throughput, the voice and data networks are divided into logically separate VLAN subnets, and Layer 3 routing between the VLANs is done at the core switches. There is an EtherChannel of two TenGig ports on the Sup engines between the core switches; the EtherChannel avoids downtime due to a single link failure and also provides high-speed data transfer rates. All access switches have dual uplinks to both core switches, with the redundant uplink blocked via STP. If an uplink or a core switch goes down, the redundant uplink changes from the blocked to the forwarding state and all traffic starts routing via this link. Core Switch-1 (ODD) acts as root for all the odd VLANs (1, 3, 5, 7) and secondary root for all even VLANs (2, 4, 6, 8), and Core Switch-2 (EVEN) acts as root for all the even VLANs and secondary root for all odd VLANs.
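The dual-uplink behaviour described above can be modelled in a few lines. This is a conceptual sketch of the STP failover only, not a device configuration.

    # Tiny model of the dual-uplink behaviour described above: each access switch
    # has one forwarding uplink and one uplink blocked by spanning tree; if the
    # active uplink (or its core switch) fails, the blocked uplink takes over.
    uplinks = {"to_core_1": "forwarding", "to_core_2": "blocking"}

    def fail(link):
        """Simulate loss of an uplink or its core switch."""
        uplinks[link] = "down"
        for name, state in list(uplinks.items()):
            if state == "blocking":
                uplinks[name] = "forwarding"   # STP unblocks the redundant path

    fail("to_core_1")
    print(uplinks)   # {'to_core_1': 'down', 'to_core_2': 'forwarding'}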

A Cisco 3750 PoE switch is configured for IP Phone and agent PC connectivity. However, if any access switch fails, all IP Phones and agent PCs connected to that switch will become unreachable, causing loss of production. To address this, the Waters India tech team keeps one pre-configured 3750 PoE switch and one pre-configured 2960 edge switch ready to replace a faulty unit so that production downtime can be minimized. Both core switches have been procured with a single Sup engine and dual power supplies. Any problem in a Supervisor engine can cause a service outage, and all devices connected to that switch may become unreachable; recalculation of STP may also be triggered in the network if the Supervisor engine on either core switch fails. Waters India has various application servers which are deployed in the server room. To isolate the server farm, all servers are connected to high-port-density access switches dedicated to the server farm only, and those server farm switches are connected to the core switches. The server farm is also logically separated on a different VLAN.

10.4 Internet Area network layout

Figure 10.4 shows the Internet area layout.

10.5 Internet Area

For internet access Waters India has procured bandwidth from VSNL and from a second ISP. These two links are connected to Cisco 3825 routers, which serve as the first level of defence towards the internet and also carry hardened ACL configurations to secure the corporate network. These two routers are connected to Cisco 3750 Layer 3 switches, termed the outside switches of the network. For traffic load balancing, Waters India has procured AS number space from its service providers, and multi-homing and load balancing are achieved via BGP configuration on the outside switches. In the internet area there is an internet visitors area from which visitors are allowed to access the internet. For this purpose Waters India has procured a 4400 series

WLC and 1200 series lightweight wireless access points. The Wireless LAN Controller (WLC) is directly connected to the outside switches and registers the APs in the visitor area to provide access to the network. Waters India has installed a video conferencing unit which establishes connections via the public network; the VC (Life Science) is used for video conferencing within Waters India and for client meetings. In the internet area, one pair of firewalls is installed in active/standby fashion; this serves as perimeter security for the corporate network. Various site-to-site VPNs with clients and between Waters India offices have been configured and established on this perimeter security firewall. Waters India has various application servers, such as the web server, mail server, proxy server, FTP server and client-application-based servers, which are deployed in the DMZ zone of the perimeter security firewall. The default gateway for the perimeter security firewall is the Layer 3 (outside) switches. The inside leg of the perimeter firewall is connected to the outside interface of the corporate network firewalls, which also provide internet access to corporate network users, so internal users sit behind two layers of security. Waters India has configured the firewalls to allow only client-specific and business-required applications; hence the firewall restrictions are port, source and destination based. There is no direct internet access for internal users; access is only through the proxy, where Waters India has deployed content-based filtering with the help of WEBSENSE.
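The port/source/destination based filtering described above can be illustrated with the minimal rule-matching sketch below. The addresses, ports and rules are hypothetical and do not reflect the actual Waters India firewall policy.

    # Minimal sketch of port/source/destination based filtering.
    from ipaddress import ip_address, ip_network

    rules = [
        {"src": "10.10.20.0/24", "dst": "203.0.113.10", "port": 443,  "action": "permit"},
        {"src": "10.10.20.0/24", "dst": "any",          "port": 3128, "action": "permit"},  # proxy only
        {"src": "any",           "dst": "any",          "port": None, "action": "deny"},    # implicit deny
    ]

    def evaluate(src, dst, port):
        for rule in rules:
            src_ok = rule["src"] == "any" or ip_address(src) in ip_network(rule["src"])
            dst_ok = rule["dst"] == "any" or dst == rule["dst"]
            port_ok = rule["port"] is None or rule["port"] == port
            if src_ok and dst_ok and port_ok:
                return rule["action"]
        return "deny"

    print(evaluate("10.10.20.15", "203.0.113.10", 443))   # permit
    print(evaluate("10.10.20.15", "198.51.100.7", 80))    # deny (must go via proxy)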

10.6 Backbone resiliency in intranet area

10.6.1 PHASE-I

Waters India has procured 3x10 Mbps Ethernet link connectivity between Center-I and the new facility. These links were the primary path for connectivity to the US & UK data centres in the first phase of deployment of the facility at Begumpet, Hyderabad, but the US & UK data centres are now accessed directly through the MPLS cloud. These Ethernet links are configured as active and redundant to each other. To achieve this, the core switch at each site comprises a Cisco Catalyst 6509; Waters India has procured two Catalyst 6509 switches for the new facility. Physical connectivity is as per the network diagram in figure 10.6.2 below.

10.6.2 New facility to Center-I connectivity network layout

Two of the links are connected to Catalyst 6509 switch A and the remaining one is connected to Catalyst 6509 switch B. All three links are connected in the Layer 3 domain in three different subnets. There is also one Layer 3 connection between the Catalyst 6509 switches, in addition to the EtherChannel, at both facilities.

As shown, all four switches are in a single EIGRP domain. For all voice subnets, Core switch A at the new facility is the primary path and Core switch B is the secondary path. To ensure that no load balancing happens between the links for voice traffic, and that voice traffic traverses a single path, policy-based routing is implemented with redundancy through the other available path from either core switch, with a high priority set, which ensures better voice quality. Data traffic routes as per the available paths learned through EIGRP dynamic routing.
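The routing intent for voice versus data traffic can be summarised in a small sketch. The subnets and next-hop names are illustrative assumptions, and the logic is a conceptual stand-in for the actual policy-based routing configuration.

    # Conceptual sketch of the routing intent described above: voice subnets are
    # pinned to a single primary path (Core switch A) with Core switch B as the
    # fallback, while data follows whatever EIGRP selects.
    voice_subnets = {"10.20.1.0/24", "10.20.3.0/24"}   # assumed voice subnets

    def next_hop(subnet, core_a_up=True, eigrp_best="core_a"):
        if subnet in voice_subnets:
            return "core_a" if core_a_up else "core_b"   # no load balancing for voice
        return eigrp_best                                # data follows dynamic routing

    print(next_hop("10.20.1.0/24"))                      # core_a
    print(next_hop("10.20.1.0/24", core_a_up=False))     # core_b
    print(next_hop("10.20.8.0/24", eigrp_best="core_b")) # core_b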

10.6.3 PHASE-II

Waters India has procured two MPLS links from two different ISPs for the facility's connectivity to the US & UK Data Centres. For local-end redundancy, Waters India has procured local loop connectivity from two different service providers. The MPLS CPE routers are placed at the new facility only; the primary link from each service provider is an Ethernet handoff and the secondary link is serial. In the corporate network a pair of ASA 5520 firewalls is configured to secure the corporate network from the MPLS cloud. Waters India has MPLS connectivity to its US & UK Data Centres over the Service Provider network, and any outage in the service provider MPLS network is handled and managed by the SP. There are two MPLS CPEs placed at the facility; both CPE routers are connected to a Cisco 3750 Layer 3 switch. All corporate data and voice traffic flows between the data centres and the facility through the MPLS cloud.

10.7 LAN Switching

10.7.1 Design Requirements and Assumptions

The number of LAN connections to be deployed in the first phase on the first floor is defined per hub room, and VLAN domains are defined on the basis of the data and voice connections. The total number of production sites for this LAN design is .

In the first phase, eight Catalyst 2960 48-port switches have been installed in each first-floor hub room along with one Catalyst 3750 48-port PoE switch per hub room; the ground floor and the second floor each have one Catalyst 2960 48-port switch. One Catalyst 2960 8-port switch is installed in each hub room and one at the ground floor; these are used for direct public network access on the premises. In the first phase of installation, 100 IP Phone connections are deployed on the first floor. All access layer switches have dual fibre (LC to LC) uplinks to the core switches for redundancy. Each core switch has one SUP-720-3B module, so there is no Supervisor module redundancy on the core switches; however, dual power supplies are available on both switches.

10.7.2 Technical Prerequisites

Telnet passwords: Waters India has a Cisco TACACS server in place, integrated with AD, through which all network devices are authenticated. As mentioned in the sections above, traffic load balancing or load sharing is configured for the internet connection with the help of the ISPs.

Waters India has installed a Syslog server for capturing logs from all network devices. Waters India has also installed DC, DHCP, DNS, WSUS, file, FTP and Proxy/Websense servers in the network to provide end-user authentication for network access, IP address assignment, name resolution, patch updates, file management, file transfer, and internet access management with content filtering, respectively. All switches are in a single VTP domain for ease of configuration within the switching domain, and the VTP domain is password protected for security.

10.7.3 VLAN ID and Subnet Info

VLANs are assigned as per process requirements: every process or department is placed in a separate VLAN, and inter-VLAN routing is configured on the core switches.

10.7.4 Rack layout

The device rack layout (Figure 10.7.4) spans four racks, Rack #1 to Rack #4, with U positions numbered from 42 down to 1. The racks house the following devices:

Public Switch Primary (3750-24TS-E)
Public Switch Secondary (3750-24TS-E)
DMZ Primary Switch (3750G-24TS-S1U)
DMZ Secondary Switch (3750G-24TS-S1U)
VSNL Public Router (Cisco 3825)
Waters India Public Router (Cisco 3825)
Client Switch (WS-C3750-24TS-S)
Client Switch (WS-C3750-24TS-S)
Internal Firewall Primary (ASA 5520)
Internal Firewall Secondary (ASA 5520)
External Firewall Primary (ASA 5520)
External Firewall Secondary (ASA 5520)
MPLS Switch Primary (3750-24TS-E)
MPLS Switch Secondary (3750-24TS-E)
Packet Shaper (3500)
Wireless External (WLC4402-12-K9)
Wireless Internal (WLC4402-50-K9)
Secure Access Control (ACS-113)
Client Bharti MPLS Router (2800)
Client Reliance MPLS Router (2800)
Waters India MPLS Primary Router (Cisco 2800)
Waters India MPLS Secondary Router (Cisco 2800)
VSNL MPLS Primary Router (Cisco 2800)
VSNL MPLS Secondary Router (Cisco 2800)
Core Switch - ODD (WS-6509-E Series)
Core Switch - EVEN (WS-6509-E Series)
VG Router (Cisco 3845-MB)

Figure 10.7.4

Waters India has to arrange the rack space as per the following U space per device:

Equipment                 U-Space Required
Catalyst 3750 Switch      1U
Catalyst 6509-E           13U
Cisco 3825 router         2U
Cisco WLC                 1U
Cisco ASA 5520 Firewall   1U

Table 10-7-4

10.7.5 POWER CONSIDERATION

The minimum number of 5/15 A power points required per item of Cisco equipment is listed as follows:

Equipment                 Power Points (5/15 A) Required
Catalyst 3750 Switch      1
Catalyst 6509-E           2
Cisco 3825 router         2
Cisco WLC                 2
Cisco ASA 5520            1

Table 10-7-5
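The two tables can be combined to estimate total rack space and power points. The per-device figures come from the tables above, while the quantities are an assumed example inventory; the real layout also spreads the equipment across four racks for segregation, which this simple total ignores.

    # Totalling rack space and 5/15 A power points from Tables 10-7-4 and 10-7-5.
    devices = {
        # name: (quantity [assumed], u_space, power_points)
        "Catalyst 3750":   (6, 1, 1),
        "Catalyst 6509-E": (2, 13, 2),
        "Cisco 3825":      (2, 2, 2),
        "Cisco WLC":       (2, 1, 2),
        "ASA 5520":        (4, 1, 1),
    }

    total_u = sum(qty * u for qty, u, _ in devices.values())
    total_power_points = sum(qty * pts for qty, _, pts in devices.values())
    racks_42u = -(-total_u // 42)    # ceiling division

    print(f"Total rack space needed  : {total_u} U (~{racks_42u} x 42U rack(s))")
    print(f"Total 5/15 A power points: {total_power_points}")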

Waters India has provisioned sufficient power and power points in the Data center to feed every device. To achieve this, Waters India has set up 2 x 160 KVA UPS units dedicated to the Data center in a redundant fashion. Proper earthing (1 V to 3 V) has been done for the whole facility.

10.8 Spanning Tree Design

Spanning tree is required in the LAN campus network infrastructure to ensure a loop-free forwarding topology in the presence of redundant Layer 2 paths. Spanning tree should never be disabled, even if the topology is designed to be loop free. In the Waters India network the Rapid-PVST spanning tree algorithm is deployed for fast convergence, and root switches are segregated between the VLANs for traffic load sharing.

10.8.1 Trunking

802.1Q (dot1q) trunks are configured between the core switches and between the core and access switches on the uplink interfaces. A trunk is simply a point-to-point link that carries the traffic of multiple VLANs and allows VLANs to be extended across an entire network. 802.1Q has been chosen because it is an industry-standard trunking encapsulation, whereas ISL, being Cisco proprietary, has some limitations; for example, it is not supported on the following switching modules as of this writing:
WS-X6502-10GE
WS-X6548-GE-TX, WS-X6548V-GE-TX, WS-X6548-GE-45AF
WS-X6148-GE-TX, WS-X6148V-GE-TX, WS-X6148-GE-45AF

10.8.2 EtherChannel

An EtherChannel bundles individual Ethernet links into a single logical link that provides the aggregate bandwidth of up to eight physical links.

10.8.3 Spanning Tree Protocol

The IEEE 802.1w standard was developed to take 802.1D's principal concepts and make the resulting convergence much faster. This is also known as the Rapid Spanning Tree Protocol (RSTP). RSTP defines how switches must interact with

each other to keep the network topology loop free in a very efficient manner. Like 802.1D, RSTP's basic functionality can be applied as a single instance or as multiple instances. Core Switch-1 acts as root for all odd VLANs and secondary root for all even VLANs, and Core Switch-2 acts as root for all even VLANs and secondary root for all odd VLANs.

10.8.4 HSRP

HSRP is configured to provide redundancy for the Layer 3 path. An important point to note is that the L3 path for a particular VLAN's traffic converges with the L2 STP path, so that even in the case of a device failure, L2 and L3 convergence occur together on the same device. A single HSRP group is used between both core switches: Core-1 is L3 active for all odd VLANs and standby for all even VLANs and, likewise, Core-2 is L3 active for all even VLANs and standby for all odd VLANs.

10.8.5 Helper Address

The switches are configured with a helper address so that end-user machines can obtain IP addresses from the DHCP server, which sits in the server VLAN. Each L3 VLAN interface except the server VLAN is configured with a helper address.
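The odd/even VLAN split and the helper-address rule described above can be captured in a short planning sketch. The VLAN IDs and the DHCP server address are illustrative assumptions, not the actual Waters India addressing plan.

    # Sketch of the odd/even VLAN split described above: each VLAN gets the same
    # core switch as STP root and HSRP active gateway, and every client SVI gets
    # a DHCP helper address pointing at the server VLAN.
    dhcp_server = "10.30.10.5"      # assumed address in the server VLAN
    server_vlan = 10                # assumed server VLAN ID

    def plan(vlan_id):
        core = "Core-1 (ODD)" if vlan_id % 2 else "Core-2 (EVEN)"
        helper = None if vlan_id == server_vlan else dhcp_server
        return {"vlan": vlan_id, "stp_root": core, "hsrp_active": core,
                "ip_helper": helper}

    for vlan in (1, 2, 7, 10):
        print(plan(vlan))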

10.8.6 Wireless

Waters India has procured two 4400 series WLC controllers and 20 APs for its wireless network. One WLC controller has been installed in the public area; it has direct connectivity to the public network and provides wireless access to visitors at Waters India. The second WLC controller has been installed in the corporate area and is directly connected to the private network facility of Waters India; it provides wireless access to Waters India users. As of now, 8 internal and 8 external APs are installed for access to the internet as well as to the corporate network. The internal APs are associated with the corporate-area WLC and use a non-broadcast SSID, while the 8 external APs are associated directly with the internet-area WLC and use a broadcast SSID with shared-key access. The corporate SSID is tied to the existing Active Directory domain controller of Waters India for user-level authentication, while the external SSID uses a pre-shared key.

Bibliography

Grow a Greener Data Center, by Douglas Alger
Data Center Projects Plan, by Neil Rasmussen & Wendy Torell
Enterprise Data Center Design and Methodology, by Rob Snevely
High Performance Data Centers: A Design Guidelines Sourcebook
