
HP BladeSystem Matrix 6.3 Planning Guide


HP Part Number: 646940-001 Published: May 2011 Edition: 1

© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Microsoft, Windows, and Windows Server are U.S. registered trademarks of Microsoft Corporation.

Contents

1 Overview ........ 5
    HP BladeSystem Matrix documents ........ 5
    Planning summary ........ 6
    HP BladeSystem Matrix infrastructure ........ 7
    HP BladeSystem Matrix components ........ 9
2 HP BladeSystem Matrix services planning ........ 14
    Servers and services to be deployed in HP BladeSystem Matrix ........ 14
    Application services ........ 14
    Management services ........ 16
3 HP BladeSystem Matrix customer facility planning ........ 29
    Racks and enclosures planning ........ 29
    Data center requirements ........ 29
    Virtual Connect domains ........ 30
4 HP BladeSystem Matrix solution storage ........ 35
    Virtual Connect technology ........ 35
    Storage connections ........ 35
    Storage volumes ........ 37
5 HP BladeSystem Matrix solution networking ........ 40
    Network planning ........ 40
    Virtual Connect Ethernet uplink connections ........ 42
    Virtual Connect Flex-10 Ethernet services connections ........ 43
    Manageability connections ........ 45
6 HP BladeSystem Matrix pre-delivery planning checklist ........ 49
7 Next steps ........ 50
8 Support and other resources ........ 51
    Contacting HP ........ 51
    Related information ........ 53
A Dynamic infrastructure provisioning with HP BladeSystem Matrix ........ 54
    Example 1—An agile test and development infrastructure using logical servers ........ 54
    Example 2—An agile test and development infrastructure with IO ........ 60
B Sample configuration templates ........ 68
C Optional Management Services integration notes ........ 76
    HP BladeSystem Matrix and HP Server Automation ........ 76
    HP BladeSystem Matrix and Insight Recovery ........ 76
    HP BladeSystem Matrix and Insight Control for VMware vCenter Server ........ 76
    HP BladeSystem Matrix and Insight Control for Microsoft System Center ........ 77
D HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines ........ 80
    Virtual Connect FlexFabric hardware components ........ 80
    FlexFabric interconnects/mezzanines – HP BladeSystem c7000 port mapping ........ 81
    HP BladeSystem c7000 enclosure FlexFabric module placement ........ 82
    FlexFabric configurations using only HP G7 BladeSystem servers ........ 83
    FlexFabric configurations using only HP G6 or i2 BladeSystem servers ........ 85
    FlexFabric configurations using a mixture of HP G7 with G6 and/or i2 BladeSystem servers ........ 87
    HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10 ........ 90
Glossary ........ 91
Index ........ 95

1 Overview

This guide is the recommended initial document for planning an HP BladeSystem Matrix infrastructure solution. The intended audience for this guide is pre-sales and HP Services involved in the planning, ordering, and installation of an HP BladeSystem Matrix-based solution.

Planning is the key to success: early planning leads to an HP BladeSystem Matrix order that moves on to a smooth, successful, and satisfactory delivery. Use this guide together with a planning worksheet to capture planning decisions, customer-provided details, and HP BladeSystem Matrix configuration parameters for the future implementation.

Effective planning requires knowledge of BladeSystem technology, including Virtual Connect (VC) FlexFabric, VC Flex-10 Ethernet and Fibre Channel (FC); knowledge of FC shared storage, including fabric zoning, redundant paths, N_Port ID Virtualization (NPIV), and logical unit number (LUN) provisioning; knowledge of software configuration planning and functionality, including HP Insight Orchestration (IO), Central Management Server (CMS) software, OS deployment, and any customer-provided management software in connection with the HP BladeSystem Matrix implementation.

The HP BladeSystem Matrix Starter Kits and optional expansion kits provide configuration options that enable integration into a customer’s existing environment. This document guides you through the planning process by outlining the decisions involved and the data collected in preparing for an HP BladeSystem Matrix solution implementation.

There are two points during the HP BladeSystem Matrix implementation delivery process where design decision input and user action are required:

1. Pre-order: Before placing the HP BladeSystem Matrix order, you must plan and specify requirements and order options.

2. Pre-delivery: Before delivery of the HP BladeSystem Matrix physical infrastructure, you must coordinate the environmental and configuration details to make sure the on-site implementation service can begin immediately.

This document outlines both sets of input information.

HP BladeSystem Matrix documents

The HP BladeSystem Matrix documents table shows the documentation hierarchy of the HP BladeSystem Matrix infrastructure solution. Read this guide before ordering and configuring the HP BladeSystem Matrix, and use the guide in conjunction with the HP BladeSystem Matrix Release Notes and HP BladeSystem Matrix Compatibility Chart.

Table 1 HP BladeSystem Matrix documents

Planning phase:
    HP BladeSystem Matrix 6.3 Compatibility Chart (on Documentation CD: Yes). The Compatibility Chart provides version information for HP BladeSystem Matrix components.
    Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide (on Documentation CD: No). The “Before you begin” section of this document describes storage, networking, and SAN zoning considerations when implementing an HP BladeSystem Matrix Recovery Management (Insight Recovery) configuration.

Using HP BladeSystem Matrix phase:
    HP BladeSystem Matrix 6.3 Release Notes (on Documentation CD: Yes). The release notes provide key information on HP BladeSystem Matrix and its components.
    HP BladeSystem Matrix 6.3 Getting Started Guide (on Documentation CD: Yes). The getting started guide provides instructions on how to design your first HP BladeSystem Matrix infrastructure template and then create (or provision) an infrastructure service using that template after the installation is complete.
    HP BladeSystem Matrix 6.3 Troubleshooting Guide (on Documentation CD: Yes). The troubleshooting guide provides information on troubleshooting tools and how to recover from errors in an HP BladeSystem Matrix environment.
    HP BladeSystem Matrix Step-by-Step Use Case Guides and demo videos (on Documentation CD: Yes). The use cases provide text and video instructions on how to build six different solutions corresponding to the six included demos.

The latest updates to the HP BladeSystem Matrix solution are located on the HP website, http://www.hp.com/go/matrixcompatibility. The supported hardware, software, and firmware versions are listed in the HP BladeSystem Matrix Compatibility Chart. Updates to issues and solutions are listed in the HP BladeSystem Matrix Release Notes.

White papers and external documentation listed above are located on the HP BladeSystem Matrix Infrastructure 6.x product manuals page or on the HP BladeSystem Matrix Documentation CD.

HP BladeSystem Matrix QuickSpecs are located at http://h18004.www1.hp.com/products/quickspecs/13297_div/13297_div.pdf and for HP-UX, see http://h18004.www1.hp.com/

Planning summary

HP BladeSystem Matrix is a platform for creating an HP Converged Infrastructure environment that is simple and straightforward to buy and use.

This document presents steps to guide you through the HP BladeSystem Matrix planning process.

Figure 1 HP BladeSystem Matrix planning steps


HP BladeSystem Matrix infrastructure

HP BladeSystem Matrix embodies the HP Converged Infrastructure enabling provisioning, deployment, and management of application services. The following key components enable this infrastructure:

Converged infrastructure consisting of virtual I/O, shared storage, and compute resources

Management environment with physical and virtual machine provisioning and workflow automation, capacity planning, Disaster Recovery (DR)-ready and auto spare failover, continuous optimization, and power management

Factory and on-site integration services

Planning begins with understanding what makes up each component. Some components might include existing services found in the customer data center. Other components are automatically provided by, or optionally ordered with, HP BladeSystem Matrix.

The physical infrastructure as provided by HP BladeSystem Matrix consists of the following components:

HP BladeSystem Matrix FlexFabric enclosures include the following:

HP BladeSystem c7000 Enclosure with power and redundant HP Onboard Administrator (OA) modules

Redundant pair of HP VC FlexFabric 10Gb/24-Port modules

HP BladeSystem Matrix Flex-10 enclosures include the following:

HP BladeSystem c7000 Enclosure with power and redundant OA modules

Redundant pair of HP VC Flex-10 10Gb Ethernet modules

Redundant pair of HP VC 8Gb 24-Port Fibre Channel modules

The following components are included by default, but can be deselected:

HP 10000 G2 series rack

HP ProLiant DL360 G7 server functioning as a Central Management Server

The following figure illustrates a basic HP BladeSystem Matrix configuration. Many components displayed in the diagram are discussed in detail in this guide, and are carried through to the HP BladeSystem Matrix Setup and Installation Guide. The examples in this document are based on this sample configuration. Additional detailed application examples are located in “Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix” (page 54). For an Insight Recovery implementation, these steps are required for the HP BladeSystem Matrix configurations at both the primary and recovery sites.

Figure 2 Basic HP BladeSystem Matrix infrastructure


The physical infrastructure provided by the customer’s data center includes power, cooling, and floor space.

Management infrastructure

The management infrastructure as provided by HP BladeSystem Matrix consists of the following components:

HP Insight Software Advisor

HP Insight Dynamics

HP Insight Dynamics capacity planning, configuration, and workload management

IO

HP Insight Recovery (HP IR) (setup requires an additional per event service)

HP Insight Control

HP Insight Control performance management

HP Insight Control power management

HP Insight Control virtual machine management

HP Insight Control server migration

HP Insight Control server deployment

HP Insight Control licensing and reports

HP iLO Advanced for BladeSystem

HP Virtual Connect Enterprise Manager (HP VCEM) software

HP Insight Remote Support Advanced (formerly Remote Support Pack)

HP Systems Insight Manager (HP SIM)

HP System Management Homepage (HP SMH)

HP Version Control Repository Manager (HP VCRM)

Windows management instrumentation (WMI) Mapper

HP Insight managed system setup wizard

Optional management infrastructure, which can integrate with HP BladeSystem Matrix, includes the following components (discussed throughout this guide):

Insight Control for Microsoft System Center (additional per event service required)

Insight Control for VMware vCenter Server (additional per event service required)

HP Server Automation software (customer-provided)

HP Ignite-UX software (customer-provided)

Microsoft System Center server (customer-provided)

VMware vCenter server (customer-provided)

The customer-provided components also include network connectivity, SAN fabric, and network management services such as domain name system (DNS), dynamic host configuration protocol (DHCP), time source, and domain services. The HP BladeSystem Matrix management components integrate with the customer’s existing management infrastructure.

The factory integration and on-site integration services are described in the HP BladeSystem Matrix QuickSpecs.

HP BladeSystem Matrix components

The following components are available when ordering an HP BladeSystem Matrix infrastructure:

Four or more Blade servers, which form the server pools

One or more CMS servers to host the management services for the environment

Starter Kits, which contain the infrastructure needed for a fully-working environment when populated with additional server blades

Expansion kits, which extend the HP BladeSystem Matrix with additional enclosures, infrastructure, and blades

HP BladeSystem Matrix enclosure licenses

Rack infrastructure

Power infrastructure

FC SAN storage

iSCSI SAN storage (optional)

Switches, transceivers and signal cables

Other licenses to enable the HP BladeSystem Matrix environment

For all HP BladeSystem Matrix components and support options, see the HP BladeSystem Matrix QuickSpecs. Additional components such as FC SAN switches and network switches might be required to integrate the HP BladeSystem Matrix solution with the customer’s existing infrastructure and can be included with the HP BladeSystem Matrix order.

Table 2 HP BladeSystem Matrix components (record a selection for each item)

Choose blades
    Fill Starter and Expansion Kits to capacity; these blades form your server resource pool for HP BladeSystem Matrix. See the Compatibility Chart for supported blade hardware.
    Blades are configured to order. In HP BladeSystem Matrix Flex-10 Starter or Expansion Kits, all blades require a host bus adapter (HBA) mezzanine card. When ProLiant G6 or Integrity i2 blades are integrated within HP BladeSystem Matrix FlexFabric Starter or Expansion Kits, a NIC FlexFabric Adapter is required for all blades in the enclosure. For solutions with all ProLiant G7 blades, the NIC FlexFabric Adapter LOM is embedded on the blade, so no additional modules or mezzanines are required. See “HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines” (page 80) for more information about these configuration options.

Choose 1 or more CMS servers
    DL360 G7 Matrix CMS Server: default selection for the CMS; includes a 10Gb NIC; does not include SFPs or cables.
    BL460c G6 Matrix CMS Server: selection for an all-blade solution.
    Alternate CMS Server: right-sized per specific customer needs, ordered or customer provided. The alternative CMS host must meet all the CMS hardware requirements listed in the HP Insight Software 6.3 Support Matrix and within this document.

Choose 1 HP BladeSystem Matrix Starter Kit
    Kit options: Flex-10 Starter Kit for Integrity with HP-UX, Flex-10 Starter Kit for ProLiant, or FlexFabric Starter Kit for ProLiant.
    Each Starter Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 Active Cool fans and six 2400W power supplies, with six C19/C20 single-phase power inputs available.
    Flex-10 Starter Kits include redundant VC-Enet Flex-10 modules and redundant VC-FC 8Gb 24-port modules; the FlexFabric Starter Kit includes redundant VC FlexFabric modules.
    The Integrity Starter Kit has 8 full-height blade bays available; the ProLiant Starter Kits have 16 half-height blade bays available.
    HP BladeSystem Matrix licenses are required but not included with Starter Kits (see “Select HP BladeSystem Matrix licenses” in this table).

Choose 1 or more Expansion Kits to grow the HP BladeSystem Matrix
    Each Expansion Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 Active Cool fans and six 2400W power supplies, with six C19/C20 single-phase power inputs available.
    Flex-10 Expansion Kit for Integrity: 8 full-height blade bays available; HP BladeSystem Matrix license not included.
    Flex-10 Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.
    FlexFabric Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.

Select HP BladeSystem Matrix licenses
    HP BladeSystem Matrix licenses are either offered as a required order option or included in the kit. Software license ordering requirements are outlined in the HP BladeSystem Matrix QuickSpecs.
    HP BladeSystem Matrix licenses for Integrity: per-socket licenses are required for both Starter Kits and Expansion Kits, with a minimum of 8 licenses required: HP-UX 11i Matrix Blade 2Skt PSL LTU for BL860c i2, HP-UX 11i Matrix Blade 4Skt PSL LTU for BL870c i2, or HP-UX 11i Matrix Blade 8Skt PSL LTU for BL890c i2. An HP VCEM BL7000 one-enclosure license is also required for each HP BladeSystem Matrix with HP-UX Starter or Expansion Kit.
    HP BladeSystem Matrix licenses for ProLiant: one HP BL Matrix SW 16-Svr 24x7 Supp Insight Software license is required for each ProLiant Starter Kit. License purchase is not needed for Expansion Kits, because this license is included with both ProLiant Expansion Kits.

Choose 1 or more racks
    HP 10000 G2 racks, or customer provided.

Choose power infrastructure
    HP PDUs or customer-provided PDUs. Each HP BladeSystem Matrix enclosure requires six C19/C20 connections. Monitored power distribution units (PDUs) are recommended for manageability and to reduce the number of power connections required per rack. A redundant power configuration is recommended (that is, order PDUs in pairs).

Choose supported FC SAN storage
    If the customer chooses to provide an existing array, the SAN array must be certified for HP BladeSystem c-Class servers (see the HP StorageWorks and BladeSystem c-Class Support Matrix). SAN storage must be qualified with the VC-FC or VC-FlexFabric modules by the storage vendor (see SPOCK for qualified HP SAN storage).
    Fibre Channel HP 3PAR F-Class and T-Class storage systems: at this time, HP 3PAR storage systems can be purchased individually, on a separate order, and installed in a separate rack. A single 3PAR system may consist of multiple cabinets.
    HP StorageWorks EVA: EVAs may be ordered in an HP BladeSystem Matrix rack, or in separate racks for better expandability.
    HP StorageWorks XP Array: ordered in a separate rack.
    Other HP StorageWorks FC storage.
    Customer-provided third-party FC storage.

(Optional) Add supported iSCSI SAN storage
    Supported in HP BladeSystem Matrix as a backing store for VM guests. See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
    HP StorageWorks P4300 G2 7.2TB SAS Starter SAN Solution: order up to 8 of these to build a 16-node cluster.
    HP StorageWorks P4500 G2 10.8TB SAS Virtualization SAN Solution: add the 10Gb NIC option for high-bandwidth storage applications.
    Other HP StorageWorks iSCSI solutions.
    Customer-provided third-party iSCSI storage.

Add switches, transceivers, and signal cables
    Configured to order or customer provided. See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
    Ethernet switches and FC SAN switches are required to complete the solution. FC SAN switches must support NPIV.
    Transceivers and signal cables are required for uplinks to switches. The number and type of uplinks for Ethernet, SAN, and VC stacking may be determined upon completion of this document. Consult the QuickSpecs of individual components for compatible transceiver or cable choices.

Other licenses to enable the HP BladeSystem Matrix environment
    Storage licenses: purchase requirements depend on the choice of storage. Examples: HP StorageWorks XP Command View Advanced Edition (if an HP XP array is ordered, although the Remote Web Console can be used alternatively); HP Command View EVA license to use (LTU) to host boot and data LUNs (if an HP EVA is purchased).
    Hypervisor licenses: see the QuickSpecs for order options. Examples: VMware licenses; Hyper-V licenses.

Customer responsibilities

The customer can select and configure multiple physical Integrity or ProLiant server blades and additional HP BladeSystem Matrix expansion kits.

If the default HP ProLiant DL360 G7 management server is not selected, the customer is required to provide a compatible ProLiant server to function as the CMS.

The customer also provides connectivity to the HP BladeSystem Matrix infrastructure. The number and type of LAN connections is part of the network planning phase of this document.

IMPORTANT: Be sure that FC SAN SFP+ transceivers are used for FC SAN uplinks, and Ethernet SFP/SFP+ transceivers are used for Ethernet uplinks. VC Flex-10 modules only support Ethernet uplinks and VC FC modules only support FC SAN uplinks.

IMPORTANT: VC FlexFabric modules have dual personality faceplate ports; only ports 1 through 4 may be used as FC SAN uplinks (4Gb/8Gb). Additionally, although all VC FlexFabric ports support 10Gb Ethernet uplinks, only ports 5 through 8 support both 1Gb and 10Gb Ethernet uplinks. Using the wrong port or SFP/SFP+ transceiver for any uplink will result in an invalid and unsupported configuration.
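The faceplate port rules in the note above lend themselves to a quick sanity check during uplink planning. The following Python sketch is illustrative only (the function name and the planned-uplink examples are invented for this guide); it simply encodes the port, media, and speed combinations stated above.

```
# Illustrative sketch only: encodes the VC FlexFabric faceplate port rules stated
# in the note above (ports 1-4 may carry 4Gb/8Gb FC SAN uplinks; every port
# supports 10Gb Ethernet uplinks; only ports 5-8 also support 1Gb Ethernet).

def flexfabric_uplink_supported(port: int, media: str, speed: str) -> bool:
    """Return True when the planned uplink is a supported port/media/speed mix."""
    if port not in range(1, 9):
        return False
    if media == "fc":
        return port <= 4 and speed in ("4Gb", "8Gb")
    if media == "ethernet":
        return speed == "10Gb" or (speed == "1Gb" and port >= 5)
    return False

# Hypothetical checks: an 8Gb FC uplink on port 5 is flagged as unsupported.
print(flexfabric_uplink_supported(2, "fc", "8Gb"))        # True
print(flexfabric_uplink_supported(5, "fc", "8Gb"))        # False
print(flexfabric_uplink_supported(3, "ethernet", "1Gb"))  # False
```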

IMPORTANT: Two additional VC FlexFabric interconnect modules must be purchased when a NIC FlexFabric Adapter mezzanine card is purchased for each blade. This includes any ProLiant G6 or Integrity i2 configuration.

When the optional StorageWorks EVA4400 Array is ordered, two embedded FC SAN switches provide connectivity from HP BladeSystem Matrix enclosures to the array. If the EVA is not included with the Starter Kit, the customer must provide connectivity to a compatible FC SAN array. Customer-supplied FC switches to the external SAN must support boot from SAN and NPIV functionality. Refer to the HP website (http://www.hp.com/storage/spock) for a list of switches and storage that are supported by VC FC. Registration is required. Following the login, go to the left navigation and click Other Hardware > Virtual Connect. Then click on the module applicable to the customer’s solution:

HP Virtual Connect FlexFabric 10Gb/24port Module for c-Class Blade System

HP Virtual Connect 8Gb 24-Port Fibre Channel Module for c-Class Blade System

HP Virtual Connect 4Gb / 8Gb 20-Port Fibre Channel Module for c-Class Blade System

NOTE: This module is not in a Starter or Expansion kit. Matrix conversion services are required.

Provisioning of suitable computer room space, power, and cooling is based on specifications described in the HP BladeSystem Matrix Quick Specs. When hardware is to be installed in customer provided racks, the customer must order hardware integration services. If the customer elects to not order these services, hardware installation must be done properly prior to any HP BladeSystem Matrix implementation services.

When implementing Insight Recovery, two data center sites are used: a primary site that is used for production operations and a recovery site that is used in the event of a planned or unplanned outage at the primary site. Each site contains a complete HP BladeSystem Matrix configuration with an intersite link that connects the sites. Protecting data at the primary site is accomplished by using data replication to the recovery site. Network and data replication requirements for implementing Insight Recovery are described in Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.

Using this document to formulate a plan early on is an essential part of the order process for HP BladeSystem Matrix.

IMPORTANT: Each secondary Matrix CMS in a federated environment requires the purchase of a Matrix starter kit and corresponding services, just as with the primary Matrix CMS implementation. The following chapter covers the planning considerations of a federated CMS in further detail.

2 HP BladeSystem Matrix services planning

Servers and services to be deployed in HP BladeSystem Matrix

Begin planning the HP BladeSystem Matrix configuration and implementation by analyzing your application services and their infrastructure requirements.

Application services can consist of simple or multi-tier, multi-node physical and virtual servers and associated operating system(s), and storage and network requirements. For example, a two-tier database service can consist of an application tier that includes two to four virtual machines while the database tier consists of one or two physical server blades.

Management services can include the monitoring, provisioning and control of application services using such components as Insight Dynamics, server deployment, and VMware vCenter server.

Server planning required for the HP BladeSystem Matrix Installation and Startup Service

Plan management servers to be installed and configured as follows:

Management Servers hosting the following services:

Insight Software CMS

Insight Control server deployment for environments with ProLiant blade servers

HP Ignite-UX (pre-existing) for environments with HP-UX and Integrity blade servers

SQL Server (or can be installed in a customer-provided SQL server farm)

Required storage management software: HP Command View Enterprise Virtual Array (EVA), XP Command View Advanced Edition, or other storage management software as required

Hypervisor host A and B (Integrity VM, Microsoft Hyper-V, VMware ESX, ESXi)

Windows, Linux, or HP-UX operating system for a newly created logical server

Unused server for logical server move operation target demonstration

(Optional) Allocated for IO automated deployment targets

When implementing Insight Recovery, a similar plan is required for the recovery site.

When implementing a federated CMS, the first CMS installed becomes the primary CMS. Any subsequent CMS which is then installed and joined with the federation is called a secondary CMS. A federated CMS may consist of up to five CMS servers (1 primary and 4 secondary). The Insight Orchestration software is only installed on the primary CMS. Each secondary CMS contains the full Insight Software stack, except for Insight Orchestration.


IMPORTANT: Each secondary Matrix CMS of a federated CMS requires purchase of a Matrix starter kit and corresponding services, just as with the primary Matrix CMS implementation.

Application services

This section outlines the type of information you need when planning application services deployed on HP BladeSystem Matrix. These services may be deployed as logical servers or automatically provisioned by the infrastructure orchestration capabilities of Insight Dynamics.

The following defines the information to collect when describing HP BladeSystem Matrix application services:

Service name:

A label used to identify the application or management service

Optionally, one or more tiers of a multi-tiered application

The server name on which the application or management service is hosted

Host type and configuration:

Physical blades

Server model (e.g. BL870c i2)

Processor and memory requirements

Virtual machines

Hypervisor (ESX, Hyper-V, HP VM)

Processor and memory requirements

Software and OS requirements:

List of applications or management services running on the server

Operating System types:

Windows Server

Red Hat Enterprise Linux

SUSE Linux Enterprise Server

HP-UX

Hypervisor OS:

VMware ESX

Hyper-V on Windows Server 2008

HP Integrity VM on HP-UX

SAN storage and fabric:

Boot from SAN required for directly deployed physical servers

Boot from SAN recommended for VM hosts

FC or iSCSI SAN required for VM guest backing store

LUN size and RAID level

Remote storage for recovery

Network connectivity:

Connectivity to corporate network.

Private network requirements, for example, VMware service console, VMotion network

Bandwidth requirements

The application services examples used in this document are based on use cases described in Exploring the Technology behind Key Use Cases for HP Insight Dynamics for ProLiant servers. For details on how the HP BladeSystem Matrix infrastructure solution can be used to provision a dynamic test and development infrastructure using logical servers or IO templates, see the examples in “Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix” (page 54).

For Insight Recovery implementations, discuss Insight Recovery's DR capabilities with the customer and determine which VC-hosted physical blades and/or VM-hosted logical servers the customer wants Insight Recovery to protect. These logical servers are known as DR-protected logical servers. In addition, sufficient compute resources (physical blades and VM hosts) must be available at the recovery site for a successful Insight Recovery failover. See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information.

Some customers may not yet be able to articulate the specific details of their failover requirements. In this case, HP recommends that several of the logical servers created as part of the HP BladeSystem Matrix SIG implementation be used as DR-protected logical servers to demonstrate an HP IR configuration and its failover functionality.

Planning Step 1a—Define application services

Use the following template to list the services to be deployed by the HP BladeSystem Matrix infrastructure. If the management service will be hosted by HP BladeSystem Matrix, make sure to include the Management Service description previously provided.

Table 3 Application services in the HP BladeSystem Matrix environment

Service              | Host configuration       | Software             | Storage requirements | Network requirements
(service name)       |                          |                      |                      |
(tier #1 of service) | (server), (server type)  | (installed software) | (SAN requirements)   | (LAN requirements)
(tier #2 of service) | (server), (server type)  | (installed software) | (SAN requirements)   | (LAN requirements)


IMPORTANT: The HP BladeSystem Matrix infrastructure requires boot from SAN for directly deployed physical servers; boot from SAN is recommended for VM hosts.

Management services

The HP BladeSystem Matrix solution requires an Insight Software management environment. This environment consists of a CMS running Insight Software, a deployment server, storage management (for example, HP Command View EVA), and a SQL server. This environment may also include separate customer-provided servers for the optional management infrastructure mentioned previously. See the following paragraphs discussing separate servers.

Planning the Insight Software CMS

If you have not already performed detailed planning for the CMS, download and run the HP Systems Insight Manager Sizer, currently found online at HP ActiveAnswers (an approximately 40 MB zip file that contains a Windows setup.exe). The sizer does not include all the Insight Software being installed in this example; additional disk space requirements are listed later in this section. Additional CMS planning information is available in the HP Insight Software SIM information library: http://h18004.www1.hp.com/products/servers/management/unified/infolibraryis.html

NOTE: When planning a federated CMS, the plan for the primary and each secondary CMS must include exclusion ranges in its VCEM instance to remove overlap between all the current and planned instances of VCEM residing in the same data center.

NOTE: If you are considering configuring the CMS in a high availability cluster either now or in the future, the CMS must be configured within a Windows domain and not as a standalone workgroup. HP does not currently support data migration of a CMS from a workgroup to a Windows domain.

Server hardware

Table 4 Confirm the CMS meets the minimum hardware requirements

Component

Specification

Server

HP ProLiant BladeSystem c-Class server blades (G6 or higher series server is recommended), or an HP ProLiant ML300, DL300, DL500 or DL700 (G3 or higher series server is recommended)

Memory

12GB for 32-bit Windows management servers (deprecated)

32GB for 64-bit Windows management servers, appropriate for maximum scalability (see below)

Processor

2 Processor dual core (2.4 GHz or faster recommended)

Disk space

150GB disk space is recommended. If usage details are known in advance, a better estimate may be obtained from the disk requirements section below.

File Structure

New Technology File System

DVD Drive

Local or virtual/mapped DVD Drive required

There are several commonly used choices for installing and configuring a CMS with the HP BladeSystem Matrix:

CMS on a rack-mounted ProLiant DL or ML server

CMS on a ProLiant server blade

CMS running from mirrored local disks

CMS running from a SAN-based disk image (boot from SAN)

A federated CMS, consisting of a primary CMS and one to four secondary CMSs

Each of these has benefits and tradeoffs.

When choosing between a server blade and a racked server configuration, consider the environment's purpose. When choosing to implement the CMS as a server blade, keep in mind that an improper change to the VC-Ethernet network, server profile, or SAN network definitions can render the CMS on a blade unable to manage any other device, including the OA or VC modules. Well-defined processes for management and maintenance operations can mitigate this risk. When hosting the HP BladeSystem Matrix CMS within a HP BladeSystem Matrix enclosure, exercise greater care when accessing VCEM or the VC modules.

When choosing the storage medium for the CMS, the default choice is to run the CMS from a SAN-based disk image. In environments where SAN availability may not be guaranteed (or uniform) it may be preferable to install a fully functional CMS on the mirrored local disk. However, this limits the choices, process, and time for recovery in the event of a hardware failure or planned maintenance.

NOTE: If this server is deselected, the customer must supply or order another server that meets the requirements for CMS.

Considerations when a CMS is not a server blade

When a server other than a server blade in HP BladeSystem Matrix is used as the CMS, consider the following requirements in addition to the requirements listed in the HP Insight Dynamics Installation and Configuration Guide.

Networking connections

The CMS must connect to multiple networks, which are common with those defined inside the HP BladeSystem Matrix environment. In the default configuration for HP BladeSystem Matrix, these networks are named:

Management

Production

If the CMS is also the deployment server for HP BladeSystem Matrix, the server must also connect to:

Deployment

If vCenter is not running, the VMotion networks do not need to be brought into the CMS in either the BL or external server case.

Also ensure that the server has adequate physical ports and that they are configured for virtual local area networks (VLANs) for any other networks to be used with HP BladeSystem Matrix.

When implementing Insight Recovery, the CMS at the primary and recovery sites must be accessible to each other using a fully qualified domain name (FQDN).

SAN connections

In configurations where the CMS is either booted from SAN or also running storage software, the server requires necessary SAN HBAs and connectivity into the HP BladeSystem Matrix SAN.

Disk requirements

See the HP Insight Software Support Matrix, which shows several different supported combinations of HP SIM, Insight Control server deployment (RDP), and their databases. In addition to the disk space required for the CMS operating system, the requirements for Insight Software are summarized here for planning purposes:

20GB for install of Windows Server 2008 R2 Enterprise Edition (recommended CMS operating system)

20GB for install or upgrade of HP Insight Software

Allot 8GB for OS temp space

Allot 4GB for each OS to deploy. This additional storage must be accessible to the Insight Control server deployment software.

Allot 65MB per workload on Windows or Linux managed systems or 35MB per workload on HP-UX managed systems. These allotments are for collecting and preserving a maximum of four years of data for use by Insight Capacity Advisor.

Allot 4GB (CMS DB) per 100 workloads to preserve historical data for Insight Global Workload Manager.

The HP SIM Sizer can help estimate the long-term disk space requirements for logging events and other historic data based on your number of managed nodes and retention plans.
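As a rough illustration of how the allotments above add up, the following Python sketch sums them for a planned environment. It is a planning aid only, based solely on the figures listed in this section; the example workload counts are hypothetical, and the HP SIM Sizer remains the authoritative estimate.

```
# Planning sketch only: sums the CMS disk allotments listed above for a
# hypothetical environment. Use the HP SIM Sizer for an authoritative estimate.

def cms_disk_estimate_gb(os_images: int, win_linux_workloads: int,
                         hpux_workloads: int) -> float:
    base_os_gb = 20                                   # Windows Server 2008 R2 install
    insight_sw_gb = 20                                # Insight Software install/upgrade
    os_temp_gb = 8                                    # OS temp space
    deploy_gb = 4 * os_images                         # 4GB per OS image to deploy
    workloads = win_linux_workloads + hpux_workloads
    capacity_gb = (65 * win_linux_workloads +         # 65MB per Windows/Linux workload
                   35 * hpux_workloads) / 1024        # 35MB per HP-UX workload
    gwlm_db_gb = 4 * workloads / 100                  # 4GB CMS DB per 100 workloads
    return base_os_gb + insight_sw_gb + os_temp_gb + deploy_gb + capacity_gb + gwlm_db_gb

# Hypothetical example: 6 OS images, 200 Windows/Linux and 50 HP-UX workloads.
print(f"{cms_disk_estimate_gb(6, 200, 50):.1f} GB plus event and history data")
```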

Ignite-UX server

Ignite-UX is required for all HP BladeSystem Matrix with HP-UX installations.

Considerations for a federated CMS

In IO, scalability can be increased through a federated CMS configuration that contains one primary CMS with a full HP Insight Software installation and up to four secondary CMSs with Insight Software but without IO. IO provisioning is managed through the primary CMS and executed across all CMSs in the federated CMS environment.

In a federated CMS configuration, DNS lookups of participating CMSs are required for successful IO operation. DNS is used to resolve CMS hostnames to IP addresses. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups must be resolved using the FQDN of each system.
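Because IO operation depends on these lookups, it can be worth scripting a quick check before installation. The following Python sketch is a minimal example (the CMS hostnames are placeholders); it performs a forward lookup of each CMS FQDN and then a reverse lookup of the returned address.

```
# Pre-installation sketch: verify forward and reverse DNS resolution for each
# CMS in the planned federation. The hostnames below are placeholders.
import socket

cms_fqdns = ["primary-cms.example.com", "secondary-cms1.example.com"]

for fqdn in cms_fqdns:
    try:
        ip = socket.gethostbyname(fqdn)                 # forward lookup
        reverse_name = socket.gethostbyaddr(ip)[0]      # reverse lookup
        match = "OK" if reverse_name.lower() == fqdn.lower() else "MISMATCH"
        print(f"{fqdn}: {ip} -> {reverse_name} [{match}]")
    except OSError as err:
        print(f"{fqdn}: lookup failed ({err})")
```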

In a federated CMS configuration, primary and secondary CMSs share the same deployment servers, such as the Insight Control deployment server and Ignite-UX server. Deployment servers should be registered in the primary CMS and they must each have their own deployment network that the physical blade servers can access for enabling physical and virtual deployment. Registering the deployment server on the primary CMS requires network access between these servers (via the deployment or management LAN).

Creating a federated CMS configuration can always be achieved for new installations, and sometimes can be achieved for upgrade scenarios. New installs (6.3 or later) are always in federated mode so you may add a secondary CMS provided that exclusion ranges are configured appropriately in VCEM on the primary and new secondary CMS. When upgrading from a prior version to 6.3 or later, the CMS will not be in federated mode. If this existing CMS has IO installed, then upgrading to a primary CMS requires ATC engagement to preserve IO services and templates. An existing CMS could also become a secondary CMS but the IO services will be lost, because IO must be uninstalled first.

Table 5 (page 19) outlines supported configurations of a federated CMS with associated management software. See Figure 5 (page 26) for an illustrated example configuration of a federated CMS.

NOTE: The configuration of VMM templates takes place on the CMS that manages the Hyper-V hosts.

Table 5 Supported management software with a federated CMS

Management software | Is it supported for a federated CMS? | Single, shared between primary and secondary CMSs | Multiple, each CMS has one instance
HP Server Automation | Yes | Yes | Yes (1)
Ignite-UX Server | Yes | Yes | Yes (1)
HP Insight Control server deployment (RDP) | Yes | Yes | Yes (1)
vCenter Server | Yes | Yes | Yes
CommandView Server | Yes | Yes | Yes (2)
HP Insight Orchestration | Yes | Yes | No
HP Insight Control (except RDP) | Yes | No | Yes
HP Insight Control for Microsoft System Center | Yes | No | Yes
HP Insight Control for VMware vCenter Server | Yes | No | Yes
HP Insight Foundation | Yes | No | Yes
HP Insight Dynamics capacity planning, configuration, and workload management | Yes | No | Yes
HP VCEM | Yes | No | Yes (2)
Microsoft SQL Server (CMS database) | Yes | No | Yes
Microsoft System Center | Yes | No | Yes
HP Insight Recovery | No | N/A | N/A
HP Cloud Service Automation (CSA) | No | N/A | N/A

(1) The primary CMS must have access to all deployment servers in a federated CMS configuration.

(2) Multiple VCEM instances co-exist in a single data center with federated CMS configurations; there is one instance for each primary and secondary CMS. When these instances share CommandView and/or networks, it is critical to avoid any media access control (MAC) and worldwide name (WWN) conflicts by configuring exclusion ranges for each instance of VCEM.

Additional management servers

If you plan to use the HP StorageWorks XP Command View Advanced Edition, a separate storage management server must be allocated for the XP CV AE software.

When implementing Insight Recovery, there must be separate storage management servers at each site to manage the local array storage (EVA or XP). See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information.

In environments where the number of managed nodes and virtual machines is large, HP recommends a separate database server to host the CMS information.

VMware vCenter Server must be provided and managed by the customer on a separate server if the customer is managing VMware ESX hosts in the HP BladeSystem Matrix environment. Insight Control for VMware vCenter Server should not be installed on the CMS.

HP Server Automation or HP Ignite-UX must be provided and managed on separate servers if the customer is using either of these software technologies for HP BladeSystem Matrix deployments.

HP BladeSystem Matrix is capable of performing operating system deployment, operating system customization, and application deployment through HP Server Automation. To plan for integration of HP Server Automation with HP BladeSystem Matrix, become familiar with the instructions detailed in Integrating HP Server Automation with HP BladeSystem Matrix/Insight Dynamics.

Microsoft System Center must be provided on separate servers if the customer desires to use this software technology as an additional management console for servers in a HP BladeSystem Matrix environment. If used, Insight Control for Microsoft System Center is installed on the separate servers.

Management server scenarios

When planning for this environment, take into consideration the purpose of HP BladeSystem Matrix deployment and current and future growth. The following scenarios assist in determining the configuration of the management environment.

Limited environment—Demo, evaluation, testing, or POC

Enclosures

1 to 2 enclosures

Mix of up to 250 physical and virtual servers

Management server

DL360 G7 with 2 processors and 32GB memory

Windows Server 2008 R2 Enterprise Edition

Insight Software

Insight Control server deployment

SQL Express 2005 or 2008 (installed by Insight Software). SQL Express is not recommended for medium or large environments.

Storage management software, for example HP Command View EVA (can be installed on a separate server if required by the customer)

Network connections

Production LAN (uplinked to data center)

Management LAN (uplinked to data center)

Deployment LAN (uplinked to data center)

For an illustration of a limited HP BladeSystem Matrix infrastructure as described above, please see Figure 2 (page 8) in the overview chapter.

ProLiant standard environment

Enclosures

1 to 4 enclosures

Up to 70 VM hosts. A VM host is a system with a hypervisor installed on it to host virtual machines. A host machine can host more than one virtual machine.

Any size environment, physical and virtual, up to HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 1,500 logical servers (ProLiant nodes, virtual and physical) when using 64-bit Windows (32-bit CMS has been deprecated).

Management servers

Server 1

DL360 G7 with 2 processors and 32GB memory

Windows Server 2008 R2 Enterprise Edition

Insight Software

Insight Control server deployment

Server 2

DL360 G7 with 2 processors and 32GB memory

Windows Server 2008 R2 Enterprise Edition

SQL Server 2005 (or can be installed in a separate SQL server farm)

Storage management software (may also be installed on a separate server)

Network connections

Production LAN, Management Servers #1, #2

Management LAN, Management Servers #1, #2

Deployment LAN, Management Server #1 only

Figure 3 HP BladeSystem Matrix infrastructure configured with ProLiant managed nodes


Integrity standard environment

Enclosures

1 to 4 enclosures

Any size environment, physical and virtual, up to HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 800 logical servers (count of HP-UX nodes, virtual and physical).

Management servers

Server 1

Insight Software

Server 2

SQL Server 2005 (or can be installed in a separate SQL server farm)

Server 3

HP Ignite-UX server

Server 4 (can be combined with Server 2 if required)

Storage management software

Network connections

Production LAN, Management Servers #1, #2

Management LAN, Management Servers #1, #2, #3, #4

Deployment LAN, Management Server #3 only

SAN A and B; Management Server #4 and each Starter and Expansion kit

Figure 4 HP BladeSystem Matrix infrastructure configured with Integrity managed nodes


Federated environment—positioning for growth

Enclosures

Any size environment, physical and virtual, up to HP BladeSystem Matrix scalability limits of a federated CMS and positioned for additional growth. Each CMS’s resource pool starts with a BladeSystem Matrix Starter kit and is expanded with BladeSystem Matrix Expansion kits. All infrastructure, management servers, and resource pools of the federation must be collocated in the same data center.

Limits when all logical servers are ProLiant managed nodes:

1 primary CMS and up to 4 secondary CMSs

1,500 nodes for each secondary CMS

1,000 nodes for the primary CMS

6,000 nodes maximum across primary and secondary CMS resource pools

Limits when all logical servers are Integrity managed nodes:

1 primary CMS and up to 4 secondary CMSs

800 nodes for each secondary CMS

600 nodes for the primary CMS

3,200 nodes maximum across primary and secondary CMS resource pools
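A planned federation can be compared against these limits with simple arithmetic. The following Python sketch is illustrative only; it encodes the ProLiant and Integrity figures listed above and checks a hypothetical layout of node counts.

```
# Illustrative check of a planned federated CMS layout against the scalability
# limits listed above. node_counts[0] is the primary CMS pool; the remaining
# entries are secondary CMS pools.

LIMITS = {
    "proliant":  {"primary": 1000, "secondary": 1500, "total": 6000},
    "integrity": {"primary": 600,  "secondary": 800,  "total": 3200},
}

def within_federation_limits(node_counts: list[int], platform: str) -> bool:
    limit = LIMITS[platform]
    primary, secondaries = node_counts[0], node_counts[1:]
    return (len(secondaries) <= 4                     # at most 4 secondary CMSs
            and primary <= limit["primary"]
            and all(n <= limit["secondary"] for n in secondaries)
            and sum(node_counts) <= limit["total"])

# Hypothetical layout: primary with 800 nodes, three secondaries with 1,200 each.
print(within_federation_limits([800, 1200, 1200, 1200], "proliant"))   # True
```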

Management servers

Server 1 (primary CMS)

Insight Software

Servers 2 through 5 (secondary CMSs)

Insight Software, excluding Insight Orchestration

Servers 6 through 10 (SQL servers)

SQL Server 2005

Server 11 (Deployment server)

Ignite-UX, Server Automation or Insight Control server deployment

Server 12 (Deployment server)

Additional deployment server (optional)

Server 13 (Storage management server)

Storage management software, for example HP CommandView EVA or XP edition (can be combined with another server only for EVA edition; XP edition must be installed on a separate server)

Server 14 (Storage management server)

Other/additional storage management software

Network connections

Production LAN, Management Servers #1, #2, #3, #4, #5, #6, #7, #8, #9, #10

Management LAN, Management Servers #1, #2, #3, #4, #5, #6, #7, #8, #9, #10, #11, #12, #13, #14

Deployment LAN, Management Servers #11, #12

SAN A and B; Management Server #13 and the primary CMS’s virtual connect domain group (VCDG)

SAN C and D; Management Server #14 and some secondary CMSs’ VCDGs

NOTE: SAN switch infrastructure and storage management servers may be shared across CMS boundaries only if VCEM exclusion ranges are configured so that each CMS has a non-overlapping range of WWNs. An example of this is SAN C and D, illustrated in figure 5.

NOTE: When VCDGs share any networks, but are managed in resource pools of more than one CMS (as shown in figure 5), VCEM exclusion ranges are mandatory to prevent overlap of MAC addresses.

Figure 5 HP BladeSystem Matrix infrastructure configured with a federated CMS


Planning Step 1b—Determine management servers

When deploying the BladeSystem Matrix management environment, the Insight Software components are placed on the same ProLiant server along with infrastructure management tools. This configuration is outlined in the HP BladeSystem Matrix Setup and Installation Guide.

The example table below shows management services implemented using the following configuration choices. Note that most environments will not require all of the servers and services shown here. See “Optional Management Services integration notes” (page 76) for more information.

All Insight Software components that make up the management environment reside on the same physical blade.

The optional StorageWorks EVA4400 is included with HP BladeSystem Matrix and is managed through HP Command View EVA hosted by the Management Service.

The production network carries application data traffic and is connected to the data center.

The management network provides operating system control and is connected to the data center.

The deployment LAN is used by the Insight Control server deployment server exclusively to respond to PXE boot requests and perform automated operating system installation. Other deployment technologies require a separate deployment network.

Table 6 Example management services for the HP BladeSystem Matrix environment

Matrix CMS #1
    Host configuration: Physical DL360 G7, 2 processors, 32GB memory
    Software: Windows Server 2008 SP2 (64-bit), Insight Software, Insight Control server deployment, HP Command View EVA, SQL Server (installed by Insight Software)
    Storage requirements: Boot from SAN
    Network requirements: Production, Management, Deployment

Ignite-UX Server
    Host configuration: Provided by customer; physical (rack mount), Itanium-based
    Software: HP-UX 11i V3, HP Ignite-UX, Integrity OVMM
    Network requirements: Production, Management, Deployment

SA primary core
    Host configuration: Provided by customer; physical (rack mount)
    Software: HP Server Automation software
    Network requirements: Production, Management, Deployment

MSC #1
    Host configuration: Provided by customer; physical (rack mount)
    Software: W2K3 R2, Microsoft System Center Configuration Manager, Insight Control for MSC (CM integration modules)
    Network requirements: Production, Management

MSC #2
    Host configuration: Provided by customer; physical (rack mount)
    Software: W2K8 R2, MSC Operations Manager, MSC VM Manager, Insight Control for MSC (OM & VMM modules)
    Network requirements: Production, Management

vCenter Server
    Host configuration: Provided by customer
    Software: W2K8 R2, VMware vCenter software, Insight Control for VMware vCenter Server
    Network requirements: Production, Management, VMotion

 

IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.

IMPORTANT: Insight Software, SQL Server, and at least one deployment technology are included in all HP BladeSystem Matrix implementations. Storage software for FC storage (such as Command View EVA) is also required in HP BladeSystem Matrix implementations. Any other services must be running on customer-provided servers. Separate installation services may be ordered with HP BladeSystem Matrix implementations to deliver Insight Control for Microsoft System Center and/or Insight Control for VMware vCenter Server. See the Appendix for additional integration details.

3 HP BladeSystem Matrix customer facility planning

Customer facility planning is not just about floor space, power, and cooling. It is the physical realization of all the services, networking and storage that combine to form an HP BladeSystem Matrix solution. A good facility plan contains known requirements balanced with consideration of future requirements.

Racks and enclosures planning

In this section, various infrastructure services are identified to enable HP BladeSystem Matrix implementation. If the service exists in the current customer environment, note the server name, IP or other relevant parameters adjacent to the infrastructure service.

Planning Step 2a—HP BladeSystem Matrix rack and enclosure parameters

Complete the following template identifying basic information about the racks and enclosures in this order. Be sure to include the choice of enclosure implemented (Matrix Flex-10, Matrix FlexFabric, or Matrix with HP-UX).

Table 7 Racks and enclosures plan (record a value for each item)

Matrix rack #1
    Rack Model
    Rack Name
Matrix Enclosure #1 (Starter Kit)
    Enclosure Model
    Enclosure Name
    Enclosure Location (Rack Name, U#)

Data center requirements

Customer responsibility

Data center facility planning for BladeSystem installation is located in the HP BladeSystem c-Class Site Planning Guide.

Planning Step 2b—Determine HP BladeSystem Matrix facility requirements

Table 8 Facility requirements (record a value for each item)

Facility power connection characteristics:
    Voltage, Phase
    Receptacle type
    Circuit rating
    Circuit de-rating percentage for the locality (20% for NA/JP, 0% for much of the EU, or a custom percentage)
    UPS or WALL
    Power redundancy? (If yes, specify the labeling scheme)

Planning metrics for rack:
    Rack weight estimate (in kg or lbs)
    Airflow estimate (in CMM/CFM)
    Watts (W) and Volt-Amps (VA) estimate for rack
    Thermal limit per rack (in Watts) (customer requirement; compare to the estimate)
    Quantity and type of PDUs for rack
    Monitored PDUs only: additional uplink and IP address (for example, an IP address on the management LAN)
    Monitored PDUs only: SNMP community strings (for example, set to match the current infrastructure)

Installation characteristics:
    Identify data center location
    Side clearances/floor space allocation
    Verify ready to receive and install rack
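The circuit rating and de-rating percentage captured in Table 8 determine the usable capacity of each facility power connection. The following Python sketch shows the arithmetic with hypothetical values; substitute the actual voltage, current, and de-rating figures gathered for the site.

```
# Illustrative arithmetic only: usable capacity of one facility circuit after
# applying the locality de-rating percentage recorded in Table 8.

def usable_circuit_va(voltage: float, amps: float, derate_pct: float) -> float:
    """Volt-amps available on a single circuit after de-rating."""
    return voltage * amps * (1.0 - derate_pct / 100.0)

# Hypothetical example: a 208V, 30A circuit de-rated by 20% (typical for NA/JP).
print(usable_circuit_va(208, 30, 20))   # 4992.0 VA per circuit
```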

Virtual Connect domains


A VC domain represents the set of VC-Ethernet modules, VC-FC modules, and server blades that are managed together in a single c7000 enclosure, or in multiple connected enclosures (up to 4). VC domains are managed by a Virtual Connect Manager (VCM).

A VC domain group is the collection of one or more VC domains. HP VCEM is used to define the VC domain group and manage the pool of MAC addresses, WWNs, and server profiles within the domains.

The following steps show how to determine whether to connect multiple enclosures into a single domain or use standalone domains under VCEM. The steps also show how to select unique MAC addresses, WWN addresses, and virtual serial numbers.

Determine enclosure stacking

If one or more HP BladeSystem Matrix expansion kits within the rack are being considered, review the following information to determine whether a multi-enclosure VC domain configuration will be required. Stacking is used only for VC-Ethernet modules (Flex-10 or FlexFabric).

For enclosures with VC Flex-10 and VC-FC modules, HP recommends defining one VC domain per rack. This simplifies cabling, conserves data center switch ports, and is straightforward to implement. For enclosures with VC FlexFabric modules, HP recommends one VC domain per enclosure to maximize available bandwidth for FC SAN and LAN uplinks.

Interconnecting the modules to create a multi-enclosure domain allows all Ethernet NICs on all server blades in the VC domain to have access to any VC uplink port. Only LAN traffic will route through stacking links. FC SAN traffic does not flow over stacking links. Only perform multi-enclosure stacking with VC FlexFabric if the stacking link requirements do not conflict with the per enclosure SAN uplink requirements. By using these module-to-module links, a single pair of uplinks can be used as the data center network connections for the entire VC domain, which allows any server blade to be connected to any Ethernet network.

Reasons to configure multi-enclosure domains

• Data center switch ports or switch bandwidth are in short supply. VC stacking shares bandwidth among enclosures, which conserves data center switch bandwidth.
• The customer desires a multi-enclosure domain configuration.

Reasons to configure single-enclosure domains

• All traffic must be routed through the network. VC routes intra-enclosure traffic (for example, server port to server port) within the domain via the cross-links. If the customer requires further manageability of this traffic, use a single VC domain for each enclosure.
• Physical isolation. The services, networking, and storage environments of each enclosure remain physically isolated.
• Any other situation in which bandwidth sharing between enclosures is not desirable or allowed.
• The customer desires a single-enclosure domain configuration.

Stacking link configurations

The following considerations apply to stacking VC Flex-10 Ethernet modules as well as to stacking VC FlexFabric modules:

• All VC-Ethernet modules within the VC domain must be interconnected.
• Any combination of cables can be used to interconnect the VC modules.
• Two built-in 10Gb links are provided between modules in horizontally adjacent bays.
• Faceplate ports 7 and 8 are shared with the two built-in links; when port 7 or 8 is enabled (that is, used as an uplink), the corresponding built-in stacking link is disabled.
• Supported cable lengths on 10Gb stacking links are 0.5 to 7 meters.
• Supported cable lengths on 10Gb uplinks are 3 to 15 meters.
• VC FC uplinks must always exist per enclosure because FC traffic is not transmitted across stacking links.

Simple stacking examples are diagrammed in the QuickSpecs for the HP Virtual Connect Flex-10 10Gb Ethernet Module for c-Class BladeSystem: http://h18004.www1.hp.com/products/

Figure 6 Multi-enclosure stacking enclosure cabling (VC modules are in Bays 1 & 2 for each enclosure)

Example VC domain stacking configurations based upon the number of enclosures are shown in Figure 6. The one-meter cables are sufficient for stacking short links to adjacent enclosures, while three-meter cables are sufficient for stacking links that span multiple adjacent enclosures. The OA linking cables required for stacking are not shown in the figure.

HP recommends that uplinks alternate between left and right sides, as shown in green.

The examples show stacking of ports 5 and 6 while keeping the two internal cross-links active in a multi-enclosure domain configuration – this is a total of four 10GbE stacking ports of shared bandwidth across enclosures (80Gbps line rate). The two internal cross-links remain active as long as ports 7 and 8 are unused.

Order the following cables for each multi-enclosure domain:

• Quantity 1, 2, or 3 of Ethernet Cable 4ft CAT5 RJ45 for 2, 3, or 4 enclosures, respectively, to be used as OA backplane links (not shown in the figure).
• Quantity 2, 4, or 6 of HP 1m SFP+ 10GbE Copper Cable for 2, 3, or 4 enclosures, respectively, to be used as VC stacking links.
• A fixed quantity of 2 HP 3m SFP+ 10GbE Copper Cable to be used as wrap-around VC stacking links in VC domains with 3 or 4 enclosures.
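
The cable counts above follow a simple per-enclosure pattern. The following sketch only restates the ordering rule listed above as a calculation; it is not an HP ordering tool.

```python
# Cable counts for a multi-enclosure VC domain, per the ordering rule above.
def stacking_cable_order(enclosures):
    """Return cable quantities for a VC domain of 2 to 4 enclosures."""
    if not 2 <= enclosures <= 4:
        raise ValueError("A multi-enclosure VC domain spans 2 to 4 enclosures")
    return {
        "CAT5 RJ45 4ft (OA links)":        enclosures - 1,        # 1, 2, or 3
        "HP 1m SFP+ 10GbE copper (stack)": 2 * (enclosures - 1),  # 2, 4, or 6
        "HP 3m SFP+ 10GbE copper (wrap)":  2 if enclosures >= 3 else 0,
    }

for n in (2, 3, 4):
    print(n, stacking_cable_order(n))
```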

Assign unique Virtual Connect MAC addresses

The MAC addresses assigned by VCEM must be unique throughout the data center. In the data center, there may be other BladeSystem enclosures with a range of assigned MAC addresses. Make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share networks now or may share in the future, it is critical to avoid any MAC conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common subnet, make sure that there is no conflict between the MAC addresses that VCEM assigns on both sites. One way to avoid conflicts is by using the sets of 64 MAC address ranges that VCEM provides with the “exclusion ranges” feature. An example of using exclusion ranges is included in Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.

Assign unique Virtual Connect WWN addresses

The WWN addresses assigned by VCEM must be unique throughout the data center. You may have existing BladeSystem enclosures with a range of assigned WWN addresses. Make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share SANs now or may share in the future, it is critical to avoid any WWN conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common SAN Fabric, make sure that there is no conflict between the WWN addresses that VCEM assigns on both sites. One possible way to avoid conflicts is by using the sets of 64 WWN address ranges that VCEM provides with the “exclusion ranges” feature. An example of using exclusion ranges is included in Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.
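
Whether MAC or WWN ranges are involved, the planning rule is the same: the usable ranges configured on each CMS (or each site) must not overlap. The following is a minimal sketch of that check, treating ranges simply as numeric intervals; the range boundaries shown are placeholders, not actual VCEM-defined values.

```python
# Generic overlap check for address ranges (MAC or WWN), using placeholder values.
def overlaps(range_a, range_b):
    """True if two inclusive (start, end) intervals intersect."""
    return range_a[0] <= range_b[1] and range_b[0] <= range_a[1]

# Placeholder usable ranges expressed as integers (for example, int("00170A2E0000", 16)).
primary_cms_range   = (0x00170A2E0000, 0x00170A2E3FFF)   # assumed
secondary_cms_range = (0x00170A2E4000, 0x00170A2E7FFF)   # assumed

assert not overlaps(primary_cms_range, secondary_cms_range), \
    "Configure VCEM exclusion ranges so each CMS has a non-overlapping usable range"
```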

Select virtual serial numbers

Use virtual serial numbers to provide a virtual identity for your physical server blades; this allows you to easily move server identities. Ensure that each VC domain uses a unique range of virtual serial numbers.

Planning Step 2c—Virtual Connect domain configurations

Each VC domain and the VC domain group must be assigned names. In most cases, a single VCDG is adequate for each HP BladeSystem Matrix implementation.

In a federated CMS configuration, portability groups cannot be shared between CMSs (primary and/or secondary). One VCDG is configured per CMS in a typical BladeSystem Matrix federated CMS environment.

Table 9 Virtual Connect domain configuration

  Item                                                                              | Value
  ----------------------------------------------------------------------------------+-------------------
  Virtual Connect Domain Group #1                                                   |
    Name                                                                            | VCDG name
    List the names of each VCD in this VCDG                                         | VCD name(s)
  Virtual Connect Domain #1                                                         |
    Name                                                                            | VCD name
    List the names of each enclosure in this VCD                                    | Enclosure name(s)
    Multi-enclosure stacking: N/A, recommended, minimum, or other?                  |
    MAC addresses: VCEM-defined, HP-defined, or user-defined?
      (If HP-defined, select unique range 1-64)                                     |
    WWN addresses: VCEM-defined, HP-defined, or user-defined?
      (If HP-defined, select unique range 1-64)                                     |
    Serial numbers: HP-defined or user-defined?
      (If HP-defined, select unique range 1-64)                                     |
  Virtual Connect Domain #2                                                         |
    Name                                                                            | VCD name
    List enclosures in domain                                                       | Enclosure name(s)
    Multi-enclosure stacking: N/A, recommended, minimum, or other?                  |
    MAC addresses: VCEM-defined, HP-defined, or user-defined?
      (If HP-defined, select unique range 1-64)                                     |
    WWN addresses: VCEM-defined, HP-defined, or user-defined?
      (If HP-defined, select unique range 1-64)                                     |
    Serial numbers: HP-defined or user-defined?
      (If HP-defined, select unique range 1-64)                                     |

4 HP BladeSystem Matrix solution storage

After you determine the application and infrastructure services included in the HP BladeSystem Matrix solution, it is time to make several decisions regarding interconnectivity options, storage requirements, and customer-provided infrastructure.

For more detailed information about the processes outlined in this section, see the HP BladeSystem Matrix Setup and Installation Guide. For HP Insight Recovery (IR) implementations, this process must be used for both the primary and recovery sites.

Virtual Connect technology

This section identifies the network and storage connections used by the application services running on the HP BladeSystem Matrix physical servers. The external network and storage connections are mapped to physical servers using VC virtualization technology. VC is implemented through VC FC, VC-Ethernet with Flex-10 capability and VC FlexFabric with Flex-10 and FC capabilities. VC is managed in the HP BladeSystem Matrix environment using HP VCEM. An HP VCEM software license is included in each HP BladeSystem Matrix kit.

Storage connections

The VC FC or VC FlexFabric modules in an HP BladeSystem Matrix solution enable the c-Class administrator to reduce FC cabling by making use of NPIV. Because these modules use N-port uplinks, they must be connected to data center FC switches that support the NPIV protocol. When the server blade HBAs or FlexFabric Adapters log in to the fabric through the VC modules, each HBA WWN is visible to the FC switch name server and can be managed as if it were connected directly.

The HP VC FC acts as an HBA aggregator where each NPIV-enabled N-port uplink can carry the FC traffic for multiple HBAs. The HP VC FlexFabric modules translate FCoE from the blades into FC protocol. With VC FlexFabric, FlexFabric Adapters on blade servers, not HBAs, are sending the FCoE traffic across the enclosure midplane.


IMPORTANT: The HP VC FC uplinks must be connected to a data center FC switch that supports NPIV. See the switch firmware documentation for information to determine whether a specific switch supports NPIV and for instructions on enabling this support.

The HP BladeSystem Matrix VC FC module has eight uplinks. The HP BladeSystem Matrix VC FlexFabric module also has eight uplinks, four of which are dual-personality uplinks that can be used as FC uplinks. In either case, each uplink is completely independent of the other uplinks and can aggregate up to 16 physical server HBA N-port links into one N-port uplink through the use of NPIV. Multiple VC FC module uplinks can be grouped logically into a VC fabric when attached to the same FC SAN fabric. This feature enables access to more than one FC SAN fabric, as well as providing a flexible and fully redundant method to connect server blades to FC SANs.
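
Because each NPIV uplink aggregates up to 16 server HBA logins, a quick calculation shows the minimum number of FC uplinks per fabric implied by that ratio. This sketch only illustrates the 16:1 login limit stated above; the actual uplink count should also reflect bandwidth and redundancy requirements.

```python
import math

# Minimum FC uplinks per fabric implied by the 16:1 NPIV aggregation ratio.
def min_fc_uplinks(server_blades, ports_per_blade_on_fabric=1, logins_per_uplink=16):
    return math.ceil(server_blades * ports_per_blade_on_fabric / logins_per_uplink)

# Example: 16 half-height blades, one HBA port per blade on this fabric.
print(min_fc_uplinks(16))   # 1 uplink satisfies the login limit
# Bandwidth and redundancy requirements typically drive the count higher.
```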

Planning Step 3a—Collect details about the customer provided SAN storage

The default configuration, as described in the HP BladeSystem Matrix installation and configuration documentation, consists of an EVA and switches in the enclosure to create a complete, self-contained SAN. If the customer chooses an alternative storage configuration, the following information is required for planning the installation.

For details on supported storage options, see the HP BladeSystem Matrix Quick Specs.

Table 10 Storage and fabrics

  Question                                                                     | Response
  -----------------------------------------------------------------------------+---------
  Does some or all of the SAN already exist? (Will the Matrix rack and         |
  enclosures be connected to an already installed and working SAN and array,   |
  or will some or all of the SAN storage be installed for the HP BladeSystem   |
  Matrix solution?)                                                            |
  Number of separate SANs                                                      |
  Number of switches per SAN (assume 2)                                        |
  Number of arrays                                                             |

Planning Step 3b—FC SAN storage connections

The number of SAN connections per enclosure will vary depending on the number of redundant paths the customer chooses and the number of separate SAN environments they plan to connect. A typical solution has two SAN connections to the enclosure that connect the enclosure to an EVA. The two connections are for high availability through SAN multi-pathing.

Table 11 FC SAN storage connections

  # | Customer SAN name (one of multiple connections to the same SAN) | Storage controller WWPN | VC FC SAN profile | Note
  1 |                                                                  |                         |                   | Minimum of 1
  2 |                                                                  |                         |                   | Typically a second connection to the first SAN for HA
  3 |                                                                  |                         |                   |
  4 |                                                                  |                         |                   |
  5 |                                                                  |                         |                   |
  6 |                                                                  |                         |                   |

NOTE: Every CMS in a federated CMS environment manages its own storage pool. Therefore storage pool entries must be created on each CMS for the portability groups that the CMS is managing.

Planning Step 3c—iSCSI SAN Storage connections


IMPORTANT: iSCSI is not supported with Integrity nodes.

Whenever iSCSI is used as a VM guest backing store, follow the best practice of separating iSCSI traffic from other network traffic. Physical separation (independent VC Ethernet uplinks) is preferred because it provides dedicated bandwidth, and logical separation (VLANs) is important when sharing switching infrastructure. Any bandwidth sharing between iSCSI and other network traffic can be problematic. When implementing iSCSI as a VM backing store, make sure that an iSCSI network is added to your list of networks (in addition to the Management, Production, Deployment, and vMotion networks). Relevant examples of network configurations applicable to HP BladeSystem Matrix environments for VMware with HP StorageWorks P4000 SANs are located in the white paper Running VMware vSphere 4 on HP LeftHand P4000 SAN solutions (http://h20195.www2.hp.com/

For iSCSI SAN solutions in the HP portfolio, visit http://www.hp.com/go/iSCSI for more information.
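
A quick way to apply the separation guidance above is to confirm that the planned iSCSI network is present and does not share a VLAN or uplink with the other networks. The following is a minimal sketch against an assumed network plan; the VLAN IDs and uplink ports are placeholders.

```python
# Check that the iSCSI network is present and isolated (assumed example plan).
network_plan = {
    "Management": {"vlan": 10, "uplinks": {"Bay1:Port3", "Bay2:Port3"}},
    "Production": {"vlan": 20, "uplinks": {"Bay1:Port2", "Bay2:Port2"}},
    "Deployment": {"vlan": 30, "uplinks": set()},   # internal to the VC domain
    "vMotion":    {"vlan": 40, "uplinks": set()},
    "iSCSI":      {"vlan": 50, "uplinks": {"Bay1:Port4", "Bay2:Port4"}},
}

iscsi = network_plan["iSCSI"]
others = {name: net for name, net in network_plan.items() if name != "iSCSI"}
assert all(iscsi["vlan"] != net["vlan"] for net in others.values()), "iSCSI VLAN is shared"
assert all(iscsi["uplinks"].isdisjoint(net["uplinks"]) for net in others.values()), \
    "iSCSI uplinks are shared with another network"
```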

Table 12 Example iSCSI SAN Storage connections

  Network | Host uplink           | Router uplink (data center switch and port) | Signal type | IP address | Provision type (Static or DHCP)
  iSCSI   | P4300 G2 Node#1/Port1 | DC1-switch/Port1                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#1/Port2 | DC2-switch/Port1                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#2/Port1 | DC1-switch/Port2                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#2/Port2 | DC2-switch/Port2                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#3/Port1 | DC1-switch/Port3                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#3/Port2 | DC2-switch/Port3                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#4/Port1 | DC1-switch/Port4                            | 1000Base-T  |            |
  iSCSI   | P4300 G2 Node#4/Port2 | DC2-switch/Port4                            | 1000Base-T  |            |
  iSCSI   | Enclosure1:Bay1:Port3 | DC1-switch/Port25                           | 10GBase-T   |            |
  iSCSI   | Enclosure1:Bay2:Port3 | DC2-switch/Port25                           | 10GBase-T   |            |

Storage volumes

HP recommends that the CMS be configured to boot from SAN. To facilitate the flexible movement of management services across blades and enclosures, these services must be configured to use shared storage for the OS boot image, the application image, and the application data. HP also recommends that virtual machine hosts boot from SAN.

If connectivity to customer provided SAN storage is desired, the FC switch must support the NPIV protocol. Access to the switch will be required by HP Services personnel to deploy boot from SAN LUNs. Fabric zones are required in a multi-path environment to ensure a successful operating system deployment.

Storage requirements

For each server profile, consider the boot LUN and any additional data storage requirements and list those parameters in the following table.

The HP BladeSystem Matrix Starter Kit on-site implementation services include the deployment of operating systems on a limited number of configured LUNs on the new or existing customer SAN. For more details about HP BladeSystem Matrix Starter Kit Implementation Services, see the HP BladeSystem Matrix Quick Specs.

The Replicated To column refers to the Insight Recovery remote storage controller target and data replication group names for the replicated LUNs. HP BladeSystem Matrix is disaster recovery ready, which means HP IR licenses are included and the HP IR feature can be enabled by applying Insight Dynamics licenses on supported ProLiant server blades. Application service recovery can be enabled by configuring a second HP BladeSystem Matrix infrastructure at a remote location and enabling storage replication between the two sites. Continuous access software and licenses are also required. If XP storage is used, Cluster Extension for XP software version 3.0.1 or later is required. See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for additional information on storage and data replication requirements.

The following table summarizes the type of information needed when planning application and management services deployed on HP BladeSystem Matrix.

Table 13 Storage volumes

  Server        | Use and size     | vDisk (LUN) name | vHost name   | Replicated to                       | Connected to
  (server name) | (LUN properties) | (xxxx_vdisk)     | (xxxx_vhost) | (remote target and data replication | (local SAN storage target)
                |                  |                  |              | group name, if replicated)          |

The following details define the type of information needed when planning VC FC connections for application services deployed on HP BladeSystem Matrix:

• Server name: A label used to identify the application or management service (optionally, one or more tiers of a multi-tiered application); the server name on which the application or management service is hosted.
• Use and size: The purpose and characteristics of the LUNs associated with the FC connection, for example, boot LUN; the LUN ID; and the LUN size.
• vDisk (LUN) name: The vDisk label assigned to the LUN.
• vHost name: The vHost label assigned to the LUN.
• Replicated to: Specifies the remote storage controller WWPN and data replication group name, if using HP Insight Recovery.
• Connected to: Specifies the local storage controller WWPN hosting this LUN.
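
One way to keep these fields consistent while filling in Tables 13 through 15 is to capture each planned volume as a small structured record. The following sketch is only an illustration of that bookkeeping; the field names mirror the table columns, and the example values come from the management services example below.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageVolumePlan:
    server: str                    # application or management service host
    use_and_size: str              # e.g., "146GB boot"
    vdisk_name: str                # vDisk (LUN) label
    vhost_name: str                # vHost label
    replicated_to: Optional[str]   # remote target / DR group, None if not replicated
    connected_to: str              # local SAN storage target

cms_boot = StorageVolumePlan("CMS", "146GB boot", "matrix_cms_vdisk",
                             "matrix_cms_vhost", None, "F400_3PAR")
print(cms_boot)
```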

Table 14 Example storage volumes for management services

  Server | Use and size | vDisk (LUN) name | vHost name       | Replicated to | Connected to
  CMS    | 146GB boot   | matrix_cms_vdisk | matrix_cms_vhost | N/A (1)       | F400_3PAR

  (1) CMS storage is not replicated using HP IR because a second CMS is required at the remote location.

Application services storage definition examples

“Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix” (page 54) provides storage definition examples for use with application services using logical servers and with IO templates.

Planning Step 3d—Define storage volumes

Based on the service template completed previously, record the shared storage requirements, size, and connections for each service. If the service will be replicated using Insight Recovery, complete the Replicated to column.

Table 15 Example storage volumes for application services

  Server          | Use and size | vDisk (LUN) name     | vHost name             | Replicated to | Connected to
  VM Host 1       | 20GB boot    | esx1_vdisk           | esx1_vhost             | None (1)      | F400_3PAR
  VM Host 2       | 20GB boot    | esx2_vdisk           | esx2_vhost             | None (1)      | F400_3PAR
  ESX shared disk | 500GB VMFS   | esx_shared_vdisk     | esx1_vhost, esx2_vhost | None (1)      | F400_3PAR
  Test W2K3 Host  | 20GB boot    | sp_w2k3_sys_01_vdisk | sp_w2k3_sys_01_vhost   | None (1)      | F400_3PAR
  Test W2K8 Host  | 40GB boot    | sp_2008_sys_01_vdisk | sp_2008_sys_01_vhost   | None (1)      | F400_3PAR
  {DB1}           | ###GB        | xxxx_vdisk           | xxxx_vhost             | None (1)      | (storage target)
  {DB2}           | ###GB        | xxxx_vdisk           | xxxx_vhost             | None (1)      | (storage target)
  {App1}          | ###GB        | xxxx_vdisk           | xxxx_vhost             | None (1)      | (storage target)
  {App2}          | ###GB        | xxxx_vdisk           | xxxx_vhost             | None (1)      | (storage target)

  (1) Storage configurations for Insight Recovery are not covered in this example.

Isolating VM Guest storage from VM Host OS files

When performing multiple concurrent VM provisioning requests on the system drive of a hypervisor host, the disk I/O can become saturated during the virtual hard drive replication, which can cause the host to become unstable or unresponsive, or both. Current and future HP Insight Dynamics orchestration service requests can fail because the orchestration software is unable to successfully query the host for resource information and virtual machine-specific information. HP recommends planning hypervisors with separate disks for the hypervisor system drive and the backing storage for virtual machines. Doing so will result in greater performance and lower risk of starving the hypervisor of required I/O bandwidth. HP Insight Dynamics orchestration services offer the ability to control which devices are used for provisioning the virtual machine. To avoid this problem, see the HP BladeSystem Matrix Setup and Installation Guide for configuration steps to exclude hypervisor boot volumes from use.

Microsoft Hyper-V

Consult the Hyper-V Planning and Deployment Guide: http://www.microsoft.com/downloads/

This document describes separating the network traffic of the hypervisor host from that of the virtual machines, recommending: “Use a dedicated network adapter for the management operating system of the virtualization server.” The HP recommendation, which has been validated by rigorous testing, is that this principle of isolating hypervisor resources from virtual machine resources should be applied to virtual machine storage as well as networking.

The following site recommends that administrators “Avoid storing system files on drives used for Hyper-V storage”: http://blogs.technet.com/vikasma/archive/2008/06/26/

The following site recommends that administrators “Place the pagefile and operating system files on separate physical disk drives”: http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx

VMware ESX

Most production ESX Server customers concentrate their virtual machine disk usage on external storage, such as a FC SAN, a hardware or software initiated iSCSI storage device, or a remote NAS file server (using the NFS protocol).

5 HP BladeSystem Matrix solution networking

Network planning

This section identifies and collects the network configuration used to manage the HP BladeSystem Matrix enclosures. It is assumed that separate networks are used for production (for example, application-level communications) and management communications (for example, managing servers and services). Distinct networks are not required; the two networks can be one and the same. Each deployment network can host only a single deployment service, so planning to use multiple deployment technologies requires multiple, distinct deployment networks.

Collect the following customer network details that you will use to assign enclosure management and application services network information.

Planning Step 4a—Collect details about the customer provided networks

The following details define information you need when planning networks for HP BladeSystem Matrix:

• Network name—The VC network profile name
• IP address (network number) – The representative (masked) address for the network
• Subnet mask – A bit mask used to determine the membership of an IP address in a network
• Deployment server – The server which handles deployment to the network
• IP range for auto-provisioning – The addresses available to HP Insight Dynamics for static assignment to servers when HP Insight Dynamics provisions an instance of an application service
• VLAN tag—The VLAN id or tag associated with this network
• Preferred Link Connection Speed—The default speed for server profile connections mapped to this network
• DHCP server – The address of the DHCP server for each network
• DNS server – The DNS server addresses for each network
• Gateway IP address – The default gateway for address routing external to the network
• DNS domain name – The DNS suffix specific to a network
• SMTP host – SMTP mail services are required for HP Insight Dynamics workflow notifications. The CMS or another host can be configured to forward notifications
• Time source – Having a time source is essential for services to function as designed


IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for a HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.

IMPORTANT: A federated CMS is highly dependent on the DNS configuration. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups need to be resolvable using the FQDN of each system.
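
When recording the network numbers, subnet masks, and auto-provisioning ranges in Table 16, it can help to sanity-check that the static range actually falls inside the subnet and does not collide with the gateway. The following is a minimal sketch using Python's standard ipaddress module; the addresses shown are assumed example values, not prescribed settings.

```python
import ipaddress

# Assumed example values for a production LAN entry in Table 16.
network   = ipaddress.ip_network("10.1.0.0/24")           # network number + subnet mask
gateway   = ipaddress.ip_address("10.1.0.1")
auto_pool = [ipaddress.ip_address(f"10.1.0.{h}") for h in range(100, 150)]

assert gateway in network
assert all(addr in network for addr in auto_pool), "auto-provisioning range outside subnet"
assert gateway not in auto_pool, "gateway overlaps the auto-provisioning range"
print(f"{len(auto_pool)} addresses reserved for HP Insight Dynamics auto-provisioning")
```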

Table 16 Configuration of networks and switches

  Item                                  | Value
  --------------------------------------+---------------------------------------------------------------
  Production LAN                        |
    IP address (network number)         |
    Subnet mask                         |
    IP range for auto-provisioning      |
    VLAN tag                            |
    Preferred link connection speed     |
    Gateway IP address                  |
    DHCP server                         |
    DNS server #1                       |
    DNS server #2                       |
    DNS domain name                     |
  Management LAN                        |
    IP address (network number)         |
    Subnet mask                         |
    IP range for auto-provisioning      |
    VLAN tag                            |
    Preferred link connection speed     |
    DHCP server                         |
    DNS server #1                       |
    DNS server #2                       |
    Gateway IP address                  |
    DNS domain name                     |
  Deployment LAN                        |
    IP address (network number)         | 192.168.1.0
    Subnet mask                         | 255.255.255.0
    Deployment server                   | (Insight Control server deployment, HP Server Automation, or HP Ignite-UX)
    VLAN tag                            |
    Preferred link connection speed     |
    DHCP server                         |
    DNS server #1                       | N/A
    DNS server #2                       | N/A
    Gateway IP address                  | N/A
    DNS domain name                     | N/A
  VMotion LAN                           |
    IP address (network number)         | 192.168.2.0
    Subnet mask                         | 255.255.255.0
    VLAN tag                            |
    Preferred link connection speed     |
    DHCP server                         |
    DNS server #1                       | N/A
    DNS server #2                       | N/A
    Gateway IP address                  | N/A
    DNS domain name                     | N/A
  Other Network services                |
    SMTP host                           |
    Time source                         |

Virtual Connect Ethernet uplink connections

Each Flex-10 interconnect module has several numbered Ethernet connectors. All of these connectors can be used to connect to a data center switch (uplink ports), or they can be used to stack VC modules as part of a single VC domain (stacking ports).

Networks must be defined within the VCM so that specific, named networks can be associated with specific external data center connections. These named networks can then be used to specify networking connectivity for individual servers and application services.

The simplest approach to connecting the defined networks to the data center is to map each network to a specific uplink port. Whether a single or multi-enclosure domain is defined, any server has access to any Ethernet port.

For a minimal production-ready configuration, HP recommends that you define a single network using multiple uplinks (an uplink port set). This configuration can provide improved throughput and availability. One data center uplink port is defined using the “A” side (for example, Bay 1, or the left side) VC Ethernet module, and the second port is defined on the “B” side (for example, Bay 2, or the right side) VC Ethernet module.

The following table is an example of how the networks can be defined in a multi-enclosure domain. The “Production” and “Management” networks are defined with redundant, cross-enclosure A/B uplink connections to the data center switches; the “Deployment” network traffic, such as a network dedicated to deployment services, is routed entirely within the enclosures, so a data center uplink is not required.

Table 17 VC Ethernet uplink connections example

  Network name | VC uplinks (enclosure VC module ports) | Router uplinks (data center switch and port) | Signal type
  Production   | Enclosure1:Bay1:Port2                  | DC1net port #4                               |
               | Enclosure2:Bay2:Port2                  | DC1net port #5                               |
  Management   | Enclosure1:Bay1:Port3                  | DC1net port #6                               |
               | Enclosure2:Bay2:Port3                  | DC1net port #7                               |
  Deployment   | N/A                                    | N/A                                          |

In situations where the customer has VLANs in place on the data center networks, or the number of uplinks is constrained, you can combine a number of networks into a shared uplink set.

Table 18 VC Ethernet uplink connections example using Shared Uplink Sets

  SUS name (networks) | VC uplinks            | Router uplinks | Signal type
  SUS 1               | Enclosure1:Bay1:Port1 | DC1 port #4    | 10GBase-SR
  (Production,        | Enclosure2:Bay2:Port1 | DC1 port #5    | 10GBase-SR
   Management)        |                       |                |

The following details define information you need when planning VC Ethernet connections for HP BladeSystem Matrix:

• Network name—The VC network profile name.
• Shared uplink set (SUS) name—Optionally, the VC Shared Uplink Set name, when multiple networks share uplinks.
• VC uplinks (enclosure VC module ports)—The VC uplink Ethernet ports. If deploying redundant connections, specify additional ports as required. One VC Flex-10 transceiver must be ordered for each uplink port. Verify compatibility with data center switch transceivers and optical cables.
• Router uplinks (data center switch and port)—The uplink data center switch name and port number that is the destination of this connection.
• Signal type—The physical signal cabling standard for the connection.
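
Because one VC Flex-10 transceiver is required per uplink port, the uplink plan in the tables above directly determines the transceiver order. The following is a trivial sketch of that count; the uplink lists are placeholders taken from the example tables, not a customer configuration.

```python
# One transceiver per VC uplink port: count them from the planned uplink map (placeholder data).
uplink_plan = {
    "Production": ["Enclosure1:Bay1:Port2", "Enclosure2:Bay2:Port2"],
    "Management": ["Enclosure1:Bay1:Port3", "Enclosure2:Bay2:Port3"],
    "Deployment": [],   # internal network, no data center uplink
}

unique_uplink_ports = {port for ports in uplink_plan.values() for port in ports}
print(f"Order {len(unique_uplink_ports)} VC Flex-10 transceivers (verify switch-side compatibility)")
```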

Planning Step 4b – Virtual Connect Ethernet uplinks

The VC uplink recommendation for a typical production environment is described in the HP BladeSystem Matrix Setup and Installation Guide. Complete the table by identifying the VC Ethernet ports used for uplink connections to the data center networks, and the VLAN tags if required. If the same uplink ports will carry both Production and Management traffic, then VLAN tags and a SUS must be defined.

Table 19 VC Ethernet uplink connections with sample list of networks

  Network name    | VC uplinks           | Router uplinks | Signal type
                  | (Enclosure.bay.port) | (Switch.port)  |
  Production      |                      |                |
  Management      |                      |                |
  Deployment      |                      |                |
  VMotion         |                      |                |
  iSCSI           |                      |                |
  Integrity OVMM  |                      |                |
  SG heartbeat    |                      |                |
  SG failover     |                      |                |
  (other network) |                      |                |

Virtual Connect Flex-10 Ethernet services connections

Flex-10 technology is a hardware-based solution that enables users to partition a 10Gb/s Ethernet (10GbE) connection and regulate the data speed of each partition. While capable of supporting 10 Gb/s bandwidth, the VC-Ethernet interconnect is compatible with lower-speed switches.

Each Flex-10 network connection can be dynamically fine-tuned from 100 Mb/s to 10 Gb/s to help eliminate bottlenecks and conserve network capacity. Data center bandwidth requirements vary depending on the application. For example, TCP/IP communications such as email, file share, and web services may consume 1 Gb/s of bandwidth; data center management traffic such as remote desktop or virtual machine traffic may consume 2 Gb/s; and inter-process communications used in cluster control could consume upward of 4 Gb/s.
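
Because the FlexNICs carved from one 10GbE physical port share its bandwidth, a simple check that the per-connection allocations in the plan fit within 10 Gb/s can catch oversubscription early. The following sketch uses assumed allocations; the network names mirror the examples in this guide and are not a recommended split.

```python
# Verify that FlexNIC allocations on one 10GbE physical port do not exceed 10 Gb/s.
PORT_CAPACITY_GBPS = 10.0

flexnic_plan = {           # assumed example allocations for one physical port
    "Management": 1.0,
    "Production": 2.0,
    "vMotion":    4.0,
    "iSCSI":      3.0,
}

total = sum(flexnic_plan.values())
assert len(flexnic_plan) <= 4, "a Flex-10 physical port provides at most four FlexNICs"
assert total <= PORT_CAPACITY_GBPS, f"over-allocated: {total} Gb/s requested"
print(f"{total} of {PORT_CAPACITY_GBPS} Gb/s allocated across {len(flexnic_plan)} FlexNICs")
```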

Using VC Flex-10 you can define a network that does not use any external uplinks. This creates a cable–less network within the VC domain.

The following details define information you need when planning VC Flex-10 Ethernet connections for application services deployed on HP BladeSystem Matrix:

• Server name: A label used to identify the application or management service (optionally, one or more tiers of a multi-tiered application); the server name on which the application or management service is hosted.
• Network: The VC network profile name.
• Port assignment: The Flex NIC port connected to this network. Used when specifying a physical blade not auto-provisioned by IO.
• Flex-10 bandwidth: Specifies the Flex-10 bandwidth allocation for this NIC. Used when specifying a physical blade not auto-provisioned by IO.
• PXE settings: Specifies the PXE options (Enabled, Disabled, Use BIOS) for this NIC. Used when specifying a physical blade not auto-provisioned by IO.

Continuing with the services examples developed previously in the “Servers and services to be deployed in HP BladeSystem Matrix” section, and using the following table, define VC Ethernet parameters for those services.

Management services network configuration

The management server network connections consist of connections to the production and management subnets. The deployment network is used by the deployment server.

Table 20 Network host connections example for management services

  Server                             | Connection | Port assignment | Flex-10 bandwidth allotment | PXE setting
  Management servers                 |            |                 |                             |
  CMS                                | Management | 1a, 1b          | 1Gb                         | Disabled
                                     | Production | 2a, 2b          | 2Gb                         | Disabled
  Insight Control server deployment  | Deployment | 1a, 1b          | 1Gb                         | Enabled
                                     | Production | 2a, 2b          | 2Gb                         | Disabled

Application Services network connectivity examples

Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix provides network connectivity examples for use with application services using logical servers and with IO templates.

Planning Step 4c—Define services VC Ethernet connections

Record the connections, type, and destination for each service based on the service template you completed previously.

Table 21 Network host connections example for application services

  Server         | Connection                  | Port assignment (1) | Flex-10 bandwidth allotment (1) | PXE setting (1)
  (server names) | (VC Ethernet connection #1) | (connection type)   | (connection bandwidth)          | (uplink destination)
  (server names) | (VC Ethernet connection #2) | (connection type)   | (connection bandwidth)          | (uplink destination)

  (1) These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.

NOTE: Currently, IO can only provision a network with a single VLAN ID mapped to a single Flex NIC port. Even though the VC profile network port definition allows traffic from multiple networks to be trunked over a single NIC (with VLAN ID tagging), IO cannot express this in a service template. Ensure that any server blade provisioned by IO has enough NIC ports to individually carry the defined networks.

Manageability connections

The following table lists required network connections to properly configure and manage each HP BladeSystem Matrix enclosure. Some connections can be provisioned using static address, DHCP, or the recommended EBIPA. This table lists the physical network connections and IP address requirements for the BladeSystem enclosure management connections.
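
For the EBIPA entries, the iLO addresses are typically a contiguous block, one per device bay, starting at a chosen address. The following sketch lays out such a block for the planning table; the starting address and the bay count of 16 (a c7000 enclosure) are assumed example values.

```python
import ipaddress

# Lay out a contiguous EBIPA iLO address block, one address per device bay (assumed values).
def ebipa_ilo_range(starting_ip, bays=16):
    start = ipaddress.ip_address(starting_ip)
    return [start + bay for bay in range(bays)]

ilo_block = ebipa_ilo_range("10.1.1.33", bays=16)
print(f"iLO range: {ilo_block[0]} - {ilo_block[-1]}")   # record as "starting IP – ending IP"
```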

Table 22 Required management connections for HP BladeSystem Matrix enclosures

  Network    | Host uplink             | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
  Management | OA #1                   |                                             | 1000Base-T  |            | Static
  Management | OA #2                   |                                             | 1000Base-T  |            | Static
  Management | VC Ethernet #1          | Through OA connection                       | Multiplexed |            | EBIPA
  Management | VC Ethernet #2          | Through OA connection                       | Multiplexed |            | EBIPA
  Management | VC Fibre #1             | Through OA connection                       | Multiplexed |            | EBIPA
  Management | VC Fibre #2             | Through OA connection                       | Multiplexed |            | EBIPA
  Management | Optional – VC Domain IP | Through OA connection                       | Multiplexed |            | Static
  Management | Enclosure iLO Range     | Through OA connection                       | Multiplexed |            | EBIPA

If the EVA4400 (or EVA6400, EVA8400), P4300 G2 (or P4500 G2), or another storage solution is included in the HP BladeSystem Matrix configuration, the following is a sample of the required network connections. Other storage solutions such as an HP StorageWorks XP Array, HP 3PAR F-Class InServ storage system, or HP 3PAR T-Class InServ storage system have similar network connection requirements.

Table 23 Additional network connections for storage management

  Network    | Host uplink                 | Router uplink (data center switch and port) | Signal type | IP address | Provision type
  Management | EVA4400 ABM MGMT port       |                                             | 100Base-T   |            | Static
  Management | EVA4400 Fibre switch #1     |                                             | 100Base-T   |            | Static
  Management | EVA4400 Fibre switch #2     |                                             | 100Base-T   |            | Static
  Management | P4300 G2 Node #1            |                                             | 100Base-T   |            | Static
  Management | P4300 G2 Node #2            |                                             | 100Base-T   |            | Static
  Management | Other SAN switch            |                                             | 100Base-T   |            | Static
  Management | Other FC Storage controller |                                             | 100Base-T   |            | Static

Other devices included in the HP BladeSystem Matrix configuration, such as monitored PDUs and network switches, also require management network connections.

Table 24 Other additional network connections

  Network    | Host uplink       | Router uplink (data center switch and port) | Signal type | IP address | Provision type
  Management | Monitored PDU #1  |                                             | 100Base-T   |            | Static
  Management | Monitored PDU #2  |                                             | 100Base-T   |            | Static
  Management | Network Switch #1 | N/A                                         | N/A         |            | Static
  Management | Network Switch #2 | N/A                                         | N/A         |            | Static

Planning Step 4d—Define manageability connections

Based on the HP BladeSystem Matrix required network connections provided in the previous table, use the following template to record the various IP addresses required to manage the BladeSystem enclosures. Use one template for each enclosure ordered.

Table 25 HP BladeSystem Matrix management connections

  Network    | Host uplink                 | Router uplink (data center switch and port) | Signal type | IP address                | Provision type (EBIPA, Static, or DHCP)
  Management | Starter Kit OA #1           |                                             | 1000Base-T  |                           |
  Management | Starter Kit OA #2           |                                             | 1000Base-T  |                           |
  Management | Starter Kit VC-Enet #1      | Through OA connection                       | Multiplexed |                           |
  Management | Starter Kit VC-Enet #2      | Through OA connection                       | Multiplexed |                           |
  Management | Starter Kit VC-FC #1        | Through OA connection                       | Multiplexed |                           |
  Management | Starter Kit VC-FC #2        | Through OA connection                       | Multiplexed |                           |
  Management | Optional – VC Domain IP     | Through OA connection                       | Multiplexed |                           |
  Management | Starter Kit iLO Range       | Through OA connection                       | Multiplexed | (starting IP – ending IP) |
  Management | Expansion Kit #1 OA #1      |                                             | 1000Base-T  |                           |
  Management | Expansion Kit #1 OA #2      |                                             | 1000Base-T  |                           |
  Management | Expansion Kit #1 VC-Enet #1 | Through OA connection                       | Multiplexed |                           |
  Management | Expansion Kit #1 VC-Enet #2 | Through OA connection                       | Multiplexed |                           |
  Management | Expansion Kit #1 VC-FC #1   | Through OA connection                       | Multiplexed |                           |
  Management | Expansion Kit #1 VC-FC #2   | Through OA connection                       | Multiplexed |                           |
  Management | Expansion Kit #1 iLO Range  | Through OA connection                       | Multiplexed | (starting IP – ending IP) |
  Management | EVA4400 ABM MGMT port       |                                             | 100Base-T   |                           |
  Management | EVA4400 Fibre switch #1     |                                             | 100Base-T   |                           |
  Management | EVA4400 Fibre switch #2     |                                             | 100Base-T   |                           |
  Management | P4300 G2 Node #1 MGMT       |                                             | 100Base-T   |                           |
  Management | P4300 G2 Node #2 MGMT       |                                             | 100Base-T   |                           |
  Management | Other SAN switch            |                                             | 100Base-T   |                           |
  Management | Other FC Storage controller |                                             |             |                           |