2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Index

1. MSA2000fc G2 Introduction: product description, benefits, models, chassis
2. Product overview (features comparison)
3. Performance numbers (comparison)
4. MSA2000fc G2 a la carte strategy & SKUs
5. MSA2000fc G2 supported configurations and cable diagrams
6. Unified LUN Presentation (ULP)
7. MSA2000fc G2 management: improved web-based interface, Storage Management Utility (SMU), getting started with the MSA2000fc G2, comparison between the old and new SMU, creating a vdisk, creating volumes, mapping a volume
2 March 2009
Twice the number of drives, improved performance, and support for small form factor drives with Integrity and ProLiant servers

Top enhancements:
- Adds support for ProLiant Small Form Factor (SFF) drives, giving a high spindle count in a dense configuration
- Increases the overall capacity to 60 LFF or 99 SFF drives; start small and add as your needs grow, up to 60 TB
- Expands support to the HP-UX operating system and the powerful line of Integrity servers
What's new with the MSA2000fc G2:
- Drives: MSA2 3.5" Large Form Factor (LFF) and ProLiant 2.5" Small Form Factor (SFF)
- Maximum drives: 60 LFF or 99 SFF
- Servers: x86 plus Integrity and HP 9000
- Operating systems: Windows, Linux, HP-UX, OpenVMS
- Snapshot support (max 255 snaps)
- 512 LUNs
MSA2000fc G2 benefits
New MSA2000 G2 features
- Support for 2.5-in. drives
- Support for the MSA70 enclosure
- New MSA2300fc high-speed controller
- Increased scalability and 512-LUN support
- HP-UX and OpenVMS support and Integrity servers
- New DC power options

- 4 Gb Fibre Channel two-port arrays
- 256 LUNs, with 16 TB LUNs supported
- 1 GB transportable cache per controller
- RAID 0, 1, 3, 5, 6, 10, 50
- Controller-based snapshot and clone capability
- Direct or switch attach; boot from SAN where possible
- Support for concurrent use of SAS and SATA
- Browser-based management
- Windows, Linux, VMware, Hyper-V, Xen
- Non-disruptive online controller code update
- ProLiant and most industry-standard x86 support
- Support for expansion to additional drive enclosures
MSA2312fc: 12 drive bays, MSA2 LFF drives (the "23" is the generation indicator)
MSA2324fc: 24 drive bays, ProLiant SFF drives
MSA2000fc G2 Chassis
MSA2000fc G2 (LFF)
MSA2000fc G2 (SFF)
Product overview

Features comparison:

MSA2000sa
- Storage controllers: dual active/active, hot-swap (single-controller option available)
- Technology: 3 Gb SAS, 2 ports per controller; 1 GB cache standard; 2U
- Drives supported: 12 x 3.5" SAS & SATA (SAS and SATA drives in the same enclosure)
- Capacity: 5.4 TB base, up to 21.6 TB using 450 GB SAS drives; 12 TB base, up to 48 TB using 1 TB SATA drives

MSA2000fc G2 (new)
- Storage controllers: dual active/active, hot-swap (single-controller option available)
- Technology: 4 Gb FC, 2 ports per controller; 1 GB cache standard; 2U
- Drives supported: 12 x 3.5" or 24 x 2.5" SAS & SATA (SAS and SATA drives in the same enclosure)
- Capacity: 5.4 TB base, up to 27 TB using 450 GB SAS drives; 12 TB base, up to 60 TB using 1 TB SATA drives

MSA2000i
- Storage controllers: dual active/active, hot-swap (single-controller option available)
- Technology: 1 GbE iSCSI, 2 ports per controller; 1 GB cache standard; 2U
- Drives supported: 12 x 3.5" SAS & SATA (SAS and SATA drives in the same enclosure)
- Capacity: 5.4 TB base, up to 21.6 TB using 450 GB SAS drives; 12 TB base, up to 48 TB using 1 TB SATA drives
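The capacity figures above are straightforward drive-count arithmetic; a quick sketch to sanity-check them (using decimal TB, as vendors quote capacity):

```python
# Sanity check of the quoted capacities: TB = drive count x drive size,
# with 1 TB = 1000 GB (vendor decimal convention).
def capacity_tb(drive_count, drive_size_gb):
    """Raw capacity in decimal TB for a given number of identical drives."""
    return drive_count * drive_size_gb / 1000

# MSA2000sa / MSA2000i: 12 bays in the head, 48 drives max (1 head + 3 enclosures)
assert capacity_tb(12, 450) == 5.4
assert capacity_tb(48, 450) == 21.6
# MSA2000fc G2 LFF: 60 drives max (1 head + 4 enclosures)
assert capacity_tb(60, 450) == 27.0
assert capacity_tb(60, 1000) == 60.0
```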
Scalability and connectivity (comparison):

MSA2000fc G2 (new)
- 512 LUNs; 16 TB maximum LUN size
- Emulex and QLogic HBAs
- Expansion: 1+4 enclosures (LFF), 1+3 enclosures (SFF)

MSA2000i
- 256 LUNs; 16 TB maximum LUN size
- Industry-standard 1 Gb Ethernet
- Expansion: 1+3 enclosures

Supported drives:
- 3.5" SAS: 146 GB 15K, 300 GB 15K, 450 GB 15K
- 3.5" SATA: 750 GB, 1 TB
- 2.5" SAS (SFF): 72 GB and 146 GB

Number of cluster nodes: 8 nodes (Microsoft); 16 nodes (Linux) coming
Software and platform support (comparison):

MSA2000sa
- Windows 2008, Windows 2003, RH and SuSE Linux, VMware
- Out-of-band CLI & web-based interface (WBI)

MSA2000fc G2 (new)
- Windows 2008, Windows 2003, RH and SuSE Linux, VMware
- Out-of-band CLI & web-based interface (WBI)
- MPIO DSM; controller-based snapshot and clone
- ProLiant servers, 3rd-party x86
- Non-disruptive controller code update

MSA2000i
- Windows 2003, RH and SuSE Linux, VMware
- Out-of-band CLI & web-based interface (WBI)
- MPIO DSM; controller-based snapshot and clone
- ProLiant servers, ProLiant Blade servers, 3rd-party x86
- Non-disruptive controller code update

Firmware version: 3.0.0 (all models)
Performance numbers (comparison)

[Benchmark table; the workload column headings did not survive conversion. MSA2000sa 3 Gb SAS rows:
21,400 · 8,800 · 13,500 · 1,300 · 560 · 20,500 · 2,400 · 1,300 · 780 · 5,900
10,600 · 4,900 · 6,800 · 700 · 350 · 10,200 · 2,000 · 700 · 380 · 3,300
8,200 · 4,500 · 6,100 · 300 · 260 · 7,800 · 1,600 · 300 · 270 · 3,200]
Controller-less chassis, DC-powered:
- AJ950A: HP StorageWorks 2012 Modular Smart Array 3.5-in Drive Bay DC-power Chassis (LFF)
- AJ951A: HP StorageWorks 2024 Modular Smart Array 2.5-in Drive Bay DC-power Chassis (SFF)
Two controller-less chassis:
- One with 24 Small Form Factor (SFF) drive bays
- One with 12 Large Form Factor (LFF) drive bays
[Also introducing DC-power versions of each controller-less chassis]
Add 1 or 2 modules: MSA2300fc G2 FC Controller and/or MSA2000 3.5" Enclosure I/O module
Note: the 24 SFF drive bay chassis is for controllers only. The MSA70 is the SFF JBOD to be used with the MSA2300fc.
- 1 with twelve 3.5" LFF drive bays (AJ948A)
- 1 with twenty-four 2.5" SFF drive bays (AJ949A)*
- DC-power: 1 with twelve 3.5" LFF drive bays (AJ950A), 1 with twenty-four 2.5" SFF drive bays (AJ951A)*
Step 2
Pick the module(s) you need (2300fc controller, 3.5 JBOD module)
To make an MSA2000fc G2 array head
- 2300fc G2 Modular Smart Array Controller (AJ798A)
- MSA2000 Drive Enclosure I/O Module (AJ751A)
You can assemble a single or dual MSA2300fc RAID head that accommodates either LFF or SFF drives. You can assemble a single- or dual-I/O MSA2000 3.5" disk enclosure. You CANNOT assemble an SFF JBOD from these components; the MSA70 is the supported SFF JBOD.
MSA2000fc G2 transition

NEW a la carte:
- Controller-less chassis: 2012 Modular Smart Array 3.5-in Drive Bay Chassis (AJ948A); 2024 Modular Smart Array 2.5-in Drive Bay Chassis (AJ949A)
- 2300fc Modular Smart Array Controller (AJ798A)
- 2000fc Modular Smart Array Controller (AJ744A)

NEW kits:
- MSA2300 SAN Starter Kit LFF (AJ954A)
- MSA2300 SAN Starter Kit SFF (AJ955A)
- SAN Starter HA Upgrade Kit (AJ956A)
(Bundles and kit upgrades are also available)

CURRENT FC (EOL):
- MSA2012fc Single Controller Modular Smart Array (EOL)
- MSA2012fc Dual Controller Modular Smart Array (EOL)
- Dual Enhanced Controller Modular Smart Array (EOL)
- 2000fc Modular Smart Array Controller (AJ744A)
Transition targets listed: MSA2324fc (AJ797A); 2300fc Modular Smart Array Controller (AJ798A)
Maximum configurations:
- Max configuration with sixty Large Form Factor MSA2 drives (LFF): twelve-drive LFF array head (one or two controllers) plus up to four twelve-drive MSA2000 3.5" drive enclosures, using 3.5" MSA2 DP SAS and/or SATA drives
- Max SFF configuration: array head plus up to three twenty-five-drive MSA70 JBODs, using 2.5" DP ProLiant SAS and/or SATA drives
- Mixed configuration: array head plus up to three twelve-drive MSA2000 3.5" drive enclosures, mixing 2.5" DP ProLiant and 3.5" MSA2 DP SAS and/or SATA drives
Important tips (MSA2000 vs. MSA2000 G2):
- The MSA2300fc G2 controller now uses a mini-SAS connector
- Existing 2012fc/2212fc arrays can be upgraded via controller swap
- For more info, review the whitepaper at www.hp.com/go/msa2000fc
Single domain
With MSA2000 JBODs
[Cabling diagram: two DL380 G5 hosts, each with a dual-port 2 Gbit FC HBA in a PCIe slot, attached to the array head with MSA2000 JBODs; enclosure chain labeled 0-1 through 8-9]
Single domain
With MSA70 JBODs
[Cabling diagram: two DL380 G5 hosts, each with a dual-port 2 Gbit FC HBA in a PCIe slot, attached to the array head with MSA70 JBODs; chain labeled 0-1 through 8-9]
Dual domain
With MSA2000 JBODs
[Cabling diagram: dual-domain configuration; two DL380 G5 hosts, each with dual-port 2 Gbit FC HBAs, attached across both controllers with MSA2000 JBODs; chains labeled 0-1 through 8-9 on each path]
Dual domain
With MSA70 JBODs
[Cabling diagram: dual-domain configuration with MSA70 JBODs; two DL380 G5 hosts with dual-port 2 Gbit FC HBAs, chains labeled 0-1 through 8-9 on each path]
What is ULP (Unified LUN Presentation)?

- The intent of ULP is to make all LUNs in the system accessible through all ports on both controllers
- ULP appears to the host as an active-active storage system: the host can choose any available path to access a LUN, regardless of vdisk/LUN ownership
- ULP uses the Asymmetric Logical Unit Access (ALUA) extensions in SPC-3, from the T10 Technical Committee of INCITS, to negotiate portals (paths) with aware host systems; unaware host systems see all paths as equal
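The ALUA behavior described above can be sketched in a few lines. The `Lun` type, port names, and state strings below are illustrative, not the array firmware's actual data structures:

```python
# Hedged sketch of ALUA-style path reporting: each LUN is owned by one
# controller; paths through the owner are "active/optimized", the rest
# "active/non-optimized". All paths work, but MPIO should favor the first.
from dataclasses import dataclass

@dataclass
class Lun:
    lun_id: int
    owner: str  # "A" or "B"

# Host-facing ports and the controller each belongs to.
PORTS = {"A0": "A", "A1": "A", "B0": "B", "B1": "B"}

def path_states(lun):
    """Map each host port to its ALUA access state for this LUN."""
    return {
        port: "active/optimized" if ctrl == lun.owner else "active/non-optimized"
        for port, ctrl in PORTS.items()
    }

def preferred_paths(lun):
    """Ports MPIO should prefer for this LUN (the owning controller's)."""
    return [p for p, s in path_states(lun).items() if s == "active/optimized"]

assert preferred_paths(Lun(0, "A")) == ["A0", "A1"]
assert preferred_paths(Lun(1, "B")) == ["B0", "B1"]
```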
ULP design
- The underlying concept is still vdisk ownership
- Vdisk ownership is transparent to the host system
- ULP keeps the RAID and disk firmware intact
Cache layout with LUN 1 owned by controller B:
- Controller A: A read cache, B read mirror, A write cache, B write mirror
- Controller B: A read mirror, B read cache, A write mirror, B write cache
- LUNs 0-4 are available on all 4 host ports
- Multi-path software sees 2 instances of each LUN
- RTPGs (Report Target Port Groups) indicate preferred paths to the MPIO software: LUNs 0, 2, and 4 prefer path A0; LUNs 1 and 3 prefer path B0
- MPIO defaults to a round-robin I/O pattern, but may be changed to take advantage of the preferred paths
[Diagram: controller A ports A0/A1 and controller B ports B0/B1; LUNs 0, 2, and 4 A-owned; LUNs 1 and 3 B-owned]
- If controller B (CUB) fails, vdisk ownership transfers to controller A (CUA)
- The same single WWN is presented
- All LUNs are still presented, through CUA
- Multi-path software continues I/O uninterrupted
- The surviving controller reports all paths as preferred
[Diagram: after the controller B failure, all five LUNs (0-4) remain presented through the surviving paths]
Installation overview

1. Unpack the array
2. Obtain the necessary accessories and equipment
3. Mount the controller and expansion trays in a rack or cabinet
4. Connect the AC power to the two power modules
5. Perform initial power-up
6. Connect the management hosts to the controller tray
7. Connect the data hosts to the controller tray
8. Use the WBI or the CLI to set the Ethernet IP address, netmask, and gateway address for each controller module
9. Use the WBI or the CLI to set the array date and time, and change the management password
10. Set the basic array configuration parameters
11. Plan and implement your storage configuration
Start and configure a terminal emulator, such as HyperTerminal, using the following settings:

Terminal emulator display settings:
- Terminal emulation mode: ANSI (for color support)
- Font: Terminal
- Translations: None
- Columns: 80
- Connector: COM1 (typically)
- Baud rate (bits/sec): 115,200
- Data bits: 8
- Parity: None
- Stop bits: 1
- Flow control: None
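For scripted console access from an admin host, the same 8-N-1 settings can be written down as the keyword arguments pyserial's `serial.Serial` constructor expects; shown here as a plain dictionary so the sketch stands alone (the serial device path varies by host and is not specified here):

```python
# The console settings above, expressed as an 8-N-1 parameter set at
# 115,200 baud. The keys match pyserial's serial.Serial keyword arguments,
# but the dict itself has no dependency on pyserial.
CONSOLE_SETTINGS = {
    "baudrate": 115200,
    "bytesize": 8,      # data bits
    "parity": "N",      # no parity
    "stopbits": 1,
    "rtscts": False,    # no hardware flow control
    "xonxoff": False,   # no software flow control
}
```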
At the prompt (#), type the following command to set the IP address for controller A:

set network-parameters ip <address> netmask <netmask> gateway <gateway> controller a

Verify Ethernet connectivity by pinging the IP address. Optionally, at the prompt (#), use the same command with "controller b" to set the IP address for controller B.
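For example, with illustrative addresses (substitute values for your own network):

```
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
```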
MSA2000fc G2 Management
Web-based Storage Management Utility (SMU)
- The primary interface for configuring and managing the system
- A web server resides in each controller module
- SMU lets you manage the system from a properly configured web browser that can access a controller module over Ethernet

Command-line interface (CLI)
- The embedded CLI enables you to configure and manage the system using individual commands or command scripts over an out-of-band RS-232 or Ethernet connection
TIP: SMU uses popup windows to indicate the progress of user-requested tasks, so disable any browser features or tools that block popup windows.
In a web browser, type a controller module IP address in the address or location field and press Enter. At the login prompt, type the username manage and the password !manage.
Creating Vdisk
- Click Provisioning
- Select Create a Vdisk; this brings up the Vdisks screen
- Click Create Vdisk

Note on drive configs: RAID levels 3, 5, and 6 can contain a maximum of 16 disks; RAID levels 0, 10, and 50 can contain up to 32 disks.
Creating Vdisk
- Select the disks to include in the vdisk
- Select the RAID level
- Decide whether you want a spare drive dedicated to this vdisk
- Click Create Vdisk
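The disk-count limits from the note can be expressed as a small validation helper; this is an illustrative sketch, not SMU code (RAID 1, a two-disk mirror, is handled separately here):

```python
# Illustrative check of the per-RAID-level disk-count limits noted above.
MAX_DISKS = {
    0: 32, 10: 32, 50: 32,  # striped levels: up to 32 disks
    3: 16, 5: 16, 6: 16,    # parity levels 3/5/6: at most 16 disks
    1: 2,                   # RAID 1: mirrored pair
}

def vdisk_disk_count_ok(raid_level, disk_count):
    """True if disk_count is within the limit for the given RAID level."""
    limit = MAX_DISKS.get(raid_level)
    if limit is None:
        raise ValueError(f"RAID level not covered here: {raid_level}")
    return 1 <= disk_count <= limit

assert vdisk_disk_count_ok(5, 16)
assert not vdisk_disk_count_ok(6, 17)
```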
Creating Vdisk
You may now check the Vdisk properties by clicking on the newly created vdisk in the left panel
Creating volumes
Click Provisioning and then Create Volume Set. Continue on the next screen.
Creating volumes
- Optionally change the Volume Set Basename
- Enter the total number of volumes you want to create
- Enter the size of each volume
- Click Apply to create the volumes
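When a basename and a count are supplied, the SMU derives the individual volume names from them; the exact suffix format below is an assumption for illustration, not the SMU's documented naming scheme:

```python
# Illustrative sketch: expanding a volume-set basename into per-volume
# names. The "_v<n>" suffix is a hypothetical convention, not the SMU's.
def volume_names(basename, count):
    return [f"{basename}_v{i + 1}" for i in range(count)]

assert volume_names("data", 3) == ["data_v1", "data_v2", "data_v3"]
```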
Creating volumes
Click the + next to the vdisk. This expands the vdisk so you can see your new volumes.
Creating volumes
You may now check the Volume Overview by clicking on a volume in the left panel
TIP: The MSA2000 G2 allows you to expand or delete volumes (LUNs) out of order.
Mapping a volume
Double-click the icon on your desktop to launch the HP Systems Management Homepage (SMH). Once the SMH has loaded, locate the block labeled STORAGE. Within that block, locate and click the link labeled EXTERNAL STORAGE CONNECTIONS.
Mapping a volume
This page shows details of the Fibre Channel HBA installed in the server. Locate and write down both of the 16-digit values next to the World Wide Port Name; these are the numbers displayed in the MSA2324fc LUN mapping table.
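Different tools render the same 16-hex-digit WWPN with or without separators; a small helper (illustrative, not part of SMH or the SMU) makes comparing the value you wrote down against the LUN mapping table mechanical:

```python
# A WWPN is 16 hex digits; displays vary between "10:00:00:05:1E:AB:CD:EF"
# and "100000051EABCDEF". Normalizing both forms lets you compare them.
def normalize_wwpn(wwpn):
    """Strip colon/dash separators and lowercase the hex digits."""
    return wwpn.replace(":", "").replace("-", "").lower()

assert normalize_wwpn("10:00:00:05:1E:AB:CD:EF") == "100000051eabcdef"
assert normalize_wwpn("100000051EABCDEF") == normalize_wwpn("10:00:00:05:1e:ab:cd:ef")
```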
Mapping a volume
- Return to the MSA2324fc management GUI
- Highlight the first volume
- On the menu, select Provisioning, then Explicit Mappings
Mapping a volume
- Select one of the port WWNs
- Check the Map box
- Enter a LUN ID
- Pull down the Access drop-down menu and select Read-Write
- Click each port on which you wish to allow read-write access; you must click every port you want to use for host access
- Click Apply, then click OK on the success window
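Conceptually, each Apply creates one mapping entry per selected port, which is why every port intended for host access must be clicked. The sketch below models that; the type and field names are hypothetical, not the array's actual schema:

```python
# Hypothetical model of an explicit LUN mapping: (host WWPN, array port,
# LUN id, access mode). One entry per array port the host may use.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    host_wwpn: str
    array_port: str   # e.g. "A0", "B0"
    lun_id: int
    access: str       # "read-write" or "read-only"

mappings = {
    Mapping("100000051eabcdef", "A0", 0, "read-write"),
    Mapping("100000051eabcdef", "B0", 0, "read-write"),  # second path for MPIO
}

# Every port intended for host access needs its own mapping entry.
assert {m.array_port for m in mappings} == {"A0", "B0"}
```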
Mapping a volume
Notice that the mapping has changed for the port you have configured.