
Here is Your Customized Document

Your Configuration is:


Action to Perform: Learn about storage system
Information Type: Hardware and operational overview
Storage-System Model: CX4-480

Reporting Problems
To send comments or report errors regarding this document, please email: UserCustomizedDocs@emc.com. For issues not related to this document, contact your service provider. Refer to Document ID: 1424630

Content Creation Date 2010/10/4


CX4-480 Storage Systems

Hardware and Operational Overview

This document describes the hardware, powerup and powerdown sequences, and status indicators for the CX4-480 storage systems with UltraFlex technology. Major topics are:
- Storage-system major components
- Storage processor enclosure (SPE)
- Disk-array enclosures (DAEs)
- Standby power supplies (SPSs)
- Powerup and powerdown sequence
- Status lights (LEDs) and indicators

Storage-system major components


The storage system consists of:
- A storage processor enclosure (SPE)
- Two standby power supplies (SPSs)
- One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
- Optional DAEs

A DAE is sometimes referred to as a DAE3P.

The high-availability features for the storage system include:
- Redundant storage processors (SPs) configured with UltraFlex I/O modules
- Standby power supplies (SPSs)
- Redundant power supply/cooling modules (referred to as power/cooling modules)

The SPE is a highly available storage enclosure with redundant power and cooling. It is 2U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs) and the power/cooling modules. Each storage processor (SP) uses UltraFlex I/O modules to provide:
- 4 Gb/s and/or 8 Gb/s Fibre Channel connectivity, and 1 Gb/s and/or 10 Gb/s Ethernet connectivity, through its front-end ports to Windows, VMware, and UNIX hosts
- 10 Gb/s Ethernet Fibre Channel over Ethernet (FCoE) connectivity through its front-end ports to Windows, VMware, and Linux hosts. The FCoE I/O modules require FLARE 04.30.000.5.5xx or later on the storage system.
- 4 Gb/s Fibre Channel connectivity through its back-end ports to the storage system's disk-array enclosures (DAEs)

The SP senses the speed of the incoming host I/O and sets the speed of its front-end ports to the lowest speed it senses. The speed of the DAEs determines the speed of the back-end ports through which they are connected to the SPs.

Table 1 gives the number of Fibre Channel, FCoE, and iSCSI front-end ports and Fibre Channel back-end ports supported for each SP. The storage system cannot simultaneously have the maximum number of Fibre Channel front-end ports, the maximum number of FCoE front-end ports, and the maximum number of iSCSI front-end ports listed in Table 1. The actual number of Fibre Channel, FCoE, and iSCSI front-end ports for an SP is determined by the number and type of UltraFlex I/O modules in the storage system. For more information, refer to the UltraFlex I/O modules section.
Table 1  Front-end and back-end ports per SP (CX4-480)

Fibre Channel front-end I/O ports: 4 or 8
FCoE front-end I/O ports: 1 or 2
iSCSI front-end I/O ports: 2, 4, or 6
Fibre Channel back-end disk ports: 4

The storage system requires at least five disks and works in conjunction with one or more disk-array enclosures (DAEs) to provide terabytes of highly available disk storage. A DAE is a disk enclosure with slots for up to 15 Fibre Channel or SATA disks. The disks within the DAE are connected through a 4 Gb/s point-to-point Fibre Channel fabric. Each DAE connects to the SPE or another DAE with simple FC-AL serial cabling. The CX4-480 storage system supports a total of 32 DAEs for a total of 480 disks on its four back-end buses. Each bus supports as many as eight DAEs for a total of 120 disks per bus. You can place the disk enclosures in the same cabinet as the SPE, or in one or more separate cabinets. High-availability features are standard.
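The bus, DAE, and disk limits above reduce to simple arithmetic when planning an installation. The following minimal Python sketch checks a proposed DAE layout against the per-bus and system-wide maximums; the limits come from this document, while the function and example layout are illustrative only.

    # Illustrative capacity check for a CX4-480 back-end layout.
    # Limits from this overview: 4 back-end buses, up to 8 DAEs per bus,
    # 15 disk slots per DAE, for a system-wide maximum of 480 disks.
    BUSES = 4
    MAX_DAES_PER_BUS = 8
    SLOTS_PER_DAE = 15

    def check_layout(daes_per_bus):
        """daes_per_bus: list of DAE counts, one entry per back-end bus."""
        if len(daes_per_bus) > BUSES:
            raise ValueError(f"the CX4-480 has only {BUSES} back-end buses")
        for bus, count in enumerate(daes_per_bus):
            if count > MAX_DAES_PER_BUS:
                raise ValueError(f"bus {bus}: {count} DAEs exceeds the {MAX_DAES_PER_BUS}-DAE limit")
        return sum(daes_per_bus) * SLOTS_PER_DAE  # disk slots this layout provides

    # Example: 8 DAEs on each of the 4 buses gives 480 disk slots.
    print(check_layout([8, 8, 8, 8]))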


Storage processor enclosure (SPE)


The SPE components include:
- A sheet-metal enclosure with a midplane and front bezel
- Two storage processors (SP A and SP B), each consisting of one CPU module and an I/O carrier with slots for I/O modules
- Four power supply/system cooling modules (referred to as power/cooling modules), two associated with SP A and two associated with SP B
- Two management modules, one associated with SP A and one associated with SP B; each module has SPS, management, and service connectors

Figure 1 and Figure 2 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A, and the second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.
Figure 1  SPE components (front with bezel removed)


Figure 2  SPE components (back)

Midplane
The midplane distributes power and signals to all the enclosure components. The CPU modules, I/O modules, and power/cooling modules plug directly into midplane connectors.

Front bezel
The front bezel has a key lock and two latch release buttons. Pressing the latch release buttons releases the bezel from the enclosure.

Storage processors (SPs)


The SP is the SPE's intelligent component and acts as the control center. Each SP includes:
- One CPU module with:
  - One dual-core processor
  - 8 GB of DDR-II DIMM (double data rate, dual in-line memory module) memory
- An I/O module enclosure with five UltraFlex I/O module slots
- One management module with:
  - One GbE Ethernet LAN port for management and backup (RJ45 connector)
  - One GbE Ethernet LAN port for peer service (RJ45 connector)
  - One serial port for connection to a standby power supply (SPS) (micro DB9 connector)
  - One serial port for RS-232 connection to a service console (micro DB9 connector)

UltraFlex I/O modules

Table 2 lists the number of I/O modules the storage system supports and the slots the I/O modules can occupy. More slots are available for optional I/O modules than the maximum number of optional I/O modules supported because some slots are occupied by required I/O modules. With the exception of slots A0 and B0, the slots occupied by the required I/O modules can vary between configurations. Figure 3 shows the I/O module slot locations and the I/O modules for the standard minimum configuration with 1 GbE iSCSI modules. The 1 GbE iSCSI modules shown in this example could be 10 GbE iSCSI or FCoE I/O modules.
Table 2  Number of supported I/O modules per SP (CX4-480)

All I/O modules: 5 per SP, in SP A slots A0-A4 and SP B slots B0-B4
Optional I/O modules: 2 per SP, in SP A slots A2-A4 and SP B slots B2-B4

Figure 3  I/O module slot locations (1 GbE iSCSI and FC I/O modules for a standard minimum configuration shown)

The following types of modules are available:
- 4 or 8 Gb Fibre Channel (FC) modules with either:
  - 2 back-end (BE) ports for disk bus connections and 2 front-end (FE) ports for server I/O connections (connection to a switch or server HBA), or
  - 4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
  The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.
- 10 Gb Ethernet (10 GbE) FCoE module with 2 FCoE front-end (FE) ports for server I/O connections (connection to an FCoE switch and from the switch to the server CNA). The 10 GbE FCoE module requires FLARE 04.30.000.5.5xx or later.
- 1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA). The 10 GbE iSCSI module requires FLARE 04.29 or later.


Table 3 lists the I/O modules available for the storage system and the number of each module that is standard and/or optional.
Table 3  I/O modules per SP

4 or 8 Gb FC module with 2 BE ports (0, 1) and 2 FE ports (2, 3) (see note 1): 2 standard, 0 optional
4 or 8 Gb FC module with 4 FE ports (0, 1, 2, 3): 0 standard, 1 optional
10 GbE FCoE module with 2 FE ports (0, 1): 1 or 0 standard (see note 2), 1 or 2 optional (see note 3)
1 or 10 GbE iSCSI module with 2 FE ports (0, 1): 1 or 0 standard (see note 2), 1 or 2 optional (see note 3)

Note 1: In a storage system that shipped from the factory, the FC modules with BE ports are either all 4 Gb FC modules or all 8 Gb FC modules.
Note 2: The standard system has either 1 FCoE module or 1 iSCSI module per SP, but not both types.
Note 3: The maximum number of 10 GbE FCoE or 10 GbE iSCSI modules per SP is 2.

IMPORTANT Always install I/O modules in pairs: one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with two back-end ports and two front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.

The actual number of each type of optional Fibre Channel, FCoE, and iSCSI I/O modules supported for a specific storage-system configuration is limited by the available slots and the maximum number of Fibre Channel, FCoE, and iSCSI front-end ports supported for the storage system. Table 4 lists the maximum number of Fibre Channel, FCoE, and iSCSI FE ports per SP for the storage system.


Table 4  Maximum number of front-end (FE) ports per SP (CX4-480)

Maximum Fibre Channel FE ports per SP: 8
Maximum FCoE FE ports per SP: 4
Maximum iSCSI FE ports per SP (see note): 8

Note: The maximum number of 10 GbE iSCSI ports per SP is 4.
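As a concrete illustration of how the slot count and the per-type FE port maximums interact, the following Python sketch checks whether a proposed per-SP module list fits within the five I/O slots and the Table 4 limits. The slot count and port figures are taken from Tables 3 and 4; the helper itself is hypothetical, not an EMC tool, and it does not model the separate 10 GbE iSCSI limit from the note above.

    # Illustrative per-SP I/O configuration check based on Tables 3 and 4.
    SLOTS_PER_SP = 5
    MODULE_FE_PORTS = {            # module type -> (port family, FE ports it adds)
        "FC 2BE/2FE": ("fc", 2),
        "FC 4FE": ("fc", 4),
        "FCoE 2FE": ("fcoe", 2),
        "iSCSI 2FE": ("iscsi", 2),
    }
    MAX_FE_PORTS = {"fc": 8, "fcoe": 4, "iscsi": 8}   # per SP, from Table 4

    def check_sp_modules(modules):
        """modules: list of module type names, one per occupied slot."""
        if len(modules) > SLOTS_PER_SP:
            raise ValueError("more modules than the 5 I/O slots per SP")
        totals = {"fc": 0, "fcoe": 0, "iscsi": 0}
        for m in modules:
            family, ports = MODULE_FE_PORTS[m]
            totals[family] += ports
        for family, total in totals.items():
            if total > MAX_FE_PORTS[family]:
                raise ValueError(f"{family} FE ports ({total}) exceed the per-SP maximum")
        return totals

    # Example: the required FC module in slot 0, a second FC BE module,
    # one 4-port FC module, and one iSCSI module.
    print(check_sp_modules(["FC 2BE/2FE", "FC 2BE/2FE", "FC 4FE", "iSCSI 2FE"]))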

Back-end (BE) port connectivity

Each FC back-end port has a connector for a copper SFP-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module's speed. Table 5 lists the FC modules that support the back-end buses.
Table 5  FC I/O module ports supporting back-end buses (CX4-480)

FC module in slots A0 and B0: Bus 0 (port 0), Bus 1 (port 1)
FC module with both BE and FE ports, usually in slots A1 and B1: Bus 2 (port 0), Bus 3 (port 1)

Fibre Channel (FC) front-end connectivity

Each 4 Gb or 8 Gb FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module's FE ports connect auto-adjust their speed to 4 Gb/s.

FCoE front-end connectivity

Each FCoE front-end port on a 10 GbE FCoE module runs at a fixed 10 Gb/s speed, and must be cabled to an FCoE switch. Versions that support fiber-optic cabling include SFP shielded connectors for optical Ethernet cable. Supported active twinaxial cables include SFP connectors at either end; the ports in FCoE modules intended for active twinaxial cabling do not include SFPs.

iSCSI front-end connectivity

Each iSCSI front-end port on a 1 GbE iSCSI module has a 1GBaseT copper connector for a copper Ethernet cable, and can auto-adjust the front-end port speed to 10 Mb/s, 100 Mb/s, or 1 Gb/s. Each iSCSI front-end port on a 10 GbE iSCSI module has an SFP shielded connector for an optical Ethernet cable, and runs at a fixed 10 Gb/s speed. You can connect 10 GbE iSCSI modules to supported switches with active twinaxial cable after removing the optical SFP connectors. Because the 1 GbE and the 10 GbE Ethernet iSCSI connection topologies are not interoperable, the 1 GbE and the 10 GbE iSCSI modules cannot operate on the same physical network.

Power/cooling modules
Each of the four power/cooling modules integrates one independent power supply and one blower into a single module. The power supply in each module is an auto-ranging, power-factor-corrected, multi-output, offline converter. The four power/cooling modules (A0, A1, B0, and B1) are located above the CPUs and are accessible from the front of the unit. A0 and A1 share load currents and provide power and cooling for SP A, and B0 and B1 share load currents and provide power and cooling for SP B. A0 and B0 share a line cord, and A1 and B1 share a line cord. An SP or power/cooling module with power-related faults does not adversely affect the operation of any other component. If one power/cooling module fails, the others take over.

SPE field-replaceable units (FRUs)


The following are field-replaceable units (FRUs) that you can replace while the system is powered up:
- Power/cooling modules
- Management modules
- SFP modules, which plug into the 4 Gb and 8 Gb Fibre Channel front-end port connectors in the Fibre Channel I/O modules

Contact your service provider to replace a failed CPU board, CPU memory module, or I/O module.


Disk-array enclosures (DAEs)


UltraPoint disk-array enclosures (sometimes called "point-to-point" DAEs) are highly available, high-performance, high-capacity storage-system components that use a Fibre Channel Arbitrated Loop (FC-AL) as the interconnect interface. A disk enclosure connects to another DAE or an SPE and is managed by storage-system software in RAID (redundant array of independent disks) configurations. The enclosure is only 3U (5.25 inches) high, but can include 15 hard disk drive/carrier modules. Its modular, scalable design allows for additional disk storage as your needs increase.

A DAE includes either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. CX4-480 systems also support solid state disk (SSD) Fibre Channel modules, also known as enterprise flash drive (EFD) Fibre Channel modules. You cannot mix SATA and Fibre Channel components within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosure operates at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).

Simple serial cabling provides easy scalability. You can interconnect disk enclosures to form a large disk storage system; the number and size of buses depends on the capabilities of your storage processor. You can place the disk enclosures in the same cabinet, or in one or more separate cabinets. High-availability features are standard in the DAE.

The DAE includes the following components:
- A sheet-metal enclosure with a midplane and front bezel
- Two FC-AL link control cards (LCCs) to manage disk modules
- As many as 15 disk modules
- Two power supply/system cooling modules (referred to as power/cooling modules)

Any unoccupied disk module slot has a filler module to maintain air flow. The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU).

The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up. Figure 4 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations.
For increased clarity, the following figures depict the DAE outside of the rack or cabinet. Your DAEs may arrive installed in a rackmount cabinet along with the SPE.

Figure 4  DAE outside the cabinet, front and rear views

As shown in Figure 5, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes bus ID when the operating system is loaded.


Figure 5  Disk enclosure bus (loop) and address indicators

The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 includes 30-44, and so on.
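Because the numbering is contiguous at 15 modules per enclosure, a disk module's system-wide ID can be computed from its enclosure address and slot position. A minimal Python sketch (the function name is illustrative):

    # Disk module IDs are contiguous: 15 slots per enclosure, numbered left to
    # right from the front. Enclosure 0 holds IDs 0-14, enclosure 1 holds 15-29,
    # and so on.
    SLOTS_PER_ENCLOSURE = 15

    def disk_module_id(enclosure_address, slot):
        """Return the system-wide disk module ID for a slot (0-14) in an enclosure."""
        if not 0 <= slot < SLOTS_PER_ENCLOSURE:
            raise ValueError("slot must be in the range 0-14")
        return enclosure_address * SLOTS_PER_ENCLOSURE + slot

    print(disk_module_id(2, 0))   # first slot of enclosure 2 -> module 30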

Midplane
A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. LCCs, power/cooling modules, and disk drives (the enclosure's field-replaceable units, or FRUs) plug directly into the midplane.

Front bezel
The front bezel has a locking latch and an electromagnetic interference (EMI) shield. You must remove the bezel to remove and install drive modules. EMI compliance requires a properly installed front bezel.

Link control cards (LCCs)


An LCC supports and controls one Fibre Channel bus and monitors the DAE.


Figure 6  LCC connectors and status LEDs

A blue link active LED indicates a DAE enclosure operating at 4 Gb/s. The link active LED(s) is green in a DAE operating at 2 Gb/s.

The LCCs in a DAE connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology. Internally, each DAE LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals.

For traffic from the system's storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive's output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC's enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal source (to the storage processor). For traffic directed to the system's storage processors, the switch passes input signals from the expansion port directly to the output signal destination of the primary port.

Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the storage processor, which polls disk enclosure status. LCC firmware also controls the LCC port-bypass circuits and the disk-module status LEDs. LCCs do not communicate with or control each other.

Captive screws on the LCC lock it into place to ensure proper connection to the midplane. You can add or replace an LCC while the disk enclosure is powered up.
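The routing behavior described above can be summarized in a short sketch. This is only a conceptual model of the PRI-to-EXP pass-through, not firmware logic; the function and its output strings are illustrative.

    # Conceptual model of LCC frame routing on one back-end bus: a frame from
    # the SP enters each DAE at the primary port (PRI); if the target drive is
    # in that enclosure, the LCC switch delivers it, otherwise the frame is
    # passed out the expansion port (EXP) to the next DAE on the bus.
    def route_frame(target_enclosure, daes_on_bus):
        """Return the hops a frame takes to reach a drive in target_enclosure.

        daes_on_bus: list of enclosure addresses in cabling order from the SP.
        """
        hops = []
        for ea in daes_on_bus:
            if ea == target_enclosure:
                hops.append(f"DAE {ea}: LCC switch delivers the frame to the target drive")
                return hops
            hops.append(f"DAE {ea}: not the target; PRI passed straight to EXP")
        hops.append("end of bus: unconnected EXP loops the signal back toward the SP")
        return hops

    # A frame bound for a drive in enclosure 2 passes through enclosures 0 and 1.
    for hop in route_frame(2, [0, 1, 2, 3]):
        print(hop)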

Disk modules
Each disk module consists of one disk drive in a carrier. You can visually distinguish between module types by their different latch and handle mechanisms and by the type, capacity, and speed labels on each module. An enclosure can include Fibre Channel or SATA disk modules, but not both types. You can add or remove a disk module while the DAE is powered up, but you should exercise special care when removing modules while they are in use. Drive modules are extremely sensitive electronic components.

Disk drives

The DAE supports Fibre Channel and SATA disks. The Fibre Channel (FC) disks, including enterprise flash (SSD) versions, conform to FC-AL specifications and 4 Gb/s Fibre Channel interface standards, and support dual-port FC-AL interconnects through the two LCCs. SATA disks conform to Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1 in) by 8.75 cm (3.5 in) disk drives.

The disks currently available for the storage system and their usable capacities are listed in the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website. The vault disks must all have the same capacity and the same speed. The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not intermix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk, or vice versa.


The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks.

Disk power savings

Some disks support power savings, which lets you assign power saving settings to these disks in a storage system running FLARE version 04.29.000.5.xxx or later, so that these disks transition to the low power state after being idle for at least 30 minutes. For the currently available disks that support power savings, refer to the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

Drive carrier

The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk module in place to ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Power/cooling modules
The power/cooling modules are located above and below the LCCs. The units integrate independent power supply and dual-blower cooling assemblies into a single module. Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE and shares load currents with the other supply. The drives and LCCs have individual soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. A FRU (disk, LCC, or power/cooling module) with power-related faults does not adversely affect the operation of any other FRU. The enclosure cooling system includes two dual-blower modules. If one blower fails, the others will speed up to compensate. If two blowers in a system (both in one power/cooling module, or one in each module) fail, the DAE goes offline within two minutes.


Standby power supplies (SPSs)


Two 1U, 1200-watt DC SPSs provide backup power for one SP and the first DAE (enclosure 0, bus 0) adjacent to it. The SPSs provide the backup power that allows write caching to continue without risk of data loss during a power failure. A faulted or not fully charged SPS disables the write cache. Each SPS rear panel has one AC inlet power connector with power switch, AC outlets for the SPE and the first DAE (EA 0, bus 0), and one phone-jack type connector for connection to an SP. Figure 7 shows the SPS connectors. A service provider can replace an SPS while the storage system is powered up.
Figure 7  1200 W SPS connectors


Powerup and powerdown sequence


The SPE and DAE do not have power switches.

Powering up the storage system


1. Verify the following:
   - Master switch/circuit breakers for each cabinet/rack power strip are off.
   - The two power cords for the SPE are plugged into the SPSs and the power cord retention bails are in place.
   - Power cords for the first DAE (EA 0, bus 0; often called the DAE-OS) are plugged into the SPSs and the power cord retention bails are in place.
   - The power cords for the SPSs and any other DAEs are plugged into the cabinet's power strips.
   - The power switches on the SPSs are in the on position.
   - Any other devices in the cabinet are correctly installed and ready for powerup.

2. Turn on the master switch/circuit breakers for each cabinet/rack power strip. In the 40U-C cabinet, master switches are on the power distribution panels (PDPs), as shown in Figure 8 and Figure 9.


Each AC circuit in the 40U-C cabinet requires a source connection that can support a minimum of 4800 VA of single phase, 200-240 V AC input power. For high availability, the left and right sides of the cabinet must receive power from separate branch feed circuits. Each pair of power distribution panels (PDP) in the 40U-C cabinet can support a maximum of 24 A AC current draw from devices connected to its power distribution units (PDU). Most cabinet configurations draw less than 24 A AC power, and require only two discrete 240 V AC power sources. If the total AC current draw of all the devices in a single cabinet exceeds 24 A, the cabinet requires two additional 240 V power sources to support a second pair of PDPs. Use the published technical specifications and device rating labels to determine the current draw of each device in your cabinet and calculate the total.
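The current-draw check described above amounts to summing the rated draw of every device fed by a PDP pair and comparing the total to the 24 A limit. A minimal Python sketch, assuming you have already looked up each device's rated current from its label or published specifications; the device names and ampere values below are placeholders, not EMC ratings.

    # Illustrative 40U-C power budget check. The 24 A limit per PDP pair is
    # from this document; replace the placeholder values with the rated draw
    # from each device's label or published specifications.
    MAX_AMPS_PER_PDP_PAIR = 24.0

    device_draw_amps = {
        "SPE": 6.0,        # placeholder rating
        "SPS pair": 2.0,   # placeholder rating
        "DAEs (x4)": 12.0, # placeholder rating
    }

    total = sum(device_draw_amps.values())
    if total <= MAX_AMPS_PER_PDP_PAIR:
        print(f"{total:.1f} A total: one pair of PDPs (two 240 V sources) is sufficient")
    else:
        print(f"{total:.1f} A total: exceeds {MAX_AMPS_PER_PDP_PAIR} A, a second PDP pair is required")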


Figure 8  PDP master switches and power sources in the 40U-C cabinet with two PDPs used


Figure 9  PDP master switches and power sources in the 40U-C cabinet with four PDPs

The storage system can take 8 to 10 minutes to complete a typical powerup. Amber warning LEDs flash during the power on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging. The powerup is complete when the CPU power light on each SP is steady green.



The CPU status lights are visible on the SPE when the front bezel is removed (Figure 10).

Figure 10  Location of CPU status lights

If amber LEDs on the front or back of the storage system remain on for more than 10 minutes, make sure the storage system is correctly cabled, and then refer to the troubleshooting flowcharts for your storage system on the CLARiiON Tools page on the EMC Powerlink website (http://Powerlink.EMC.com). If you cannot determine any reasons for the fault, contact your authorized service provider.

Powering down the storage system


1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the AIX, HP-UX, Linux, or Solaris operating system, back up critical data and then unmount the file systems. Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.

2. After five minutes, use the power switch on each SPS to turn off power. The SPE and primary DAE power down within two minutes.

CAUTION Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:
Enclosure 0 Disk 5 0x90a (Can't Assign - Cache Dirty) 0 0xafb40 0x14362c

Contact your service provider if this situation occurs.

Turning off the SPS power switches removes power from the SPE and the first DAE (EA 0, bus 0). You do not need to turn off power to the other connected DAEs.


Status lights (LEDs) and indicators


Status lights (light-emitting diodes, or LEDs) on the SPE, its FRUs, the SPSs, and the DAEs and their FRUs indicate each component's current status.

Storage processor enclosure (SPE) LEDs


This section describes status LEDs visible from the front and the rear of the SPE.

SPE front status LEDs

Figure 11 and Figure 12 show the location of the SPE status LEDs that are visible from the front of the enclosure. Table 6 and Table 7 describe these LEDs.

Figure 11  SPE front status LEDs (bezel in place)


Figure 12  SPE front status LEDs (bezel removed)

Table 6  Meaning of the SPE front status LEDs (bezel in place)

Power (1 LED):
  Off: SPE is powered down.
  Solid blue: SPE is powered up.

System fault (1 LED):
  Off: SPE is operating normally.
  Solid amber: A fault condition exists in the SPE. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.

Table 7  Meaning of the SPE front status LEDs (bezel removed)

Power/cooling module status (1 per module):
  Off: Power/cooling module is not powered up.
  Solid green: Module is powered up and operating normally.
  Amber: Module is faulted.

CPU power (1 per CPU):
  Off: CPU is not powered up.
  Solid green: CPU is powered up and operating normally.

CPU fault (1 per CPU):
  Blinking amber: Running powerup tests.
  Solid amber: CPU is faulted.
  Blinking blue: OS is loaded.
  Solid blue: CPU is degraded.

Unsafe to remove (1 per CPU):
  Solid white: DO NOT REMOVE MODULE while this light is on.

SPE rear status LEDs

Table 8 describes the status LEDs that are visible from the rear of the SPE.

Table 8  Meaning of the SPE rear status LEDs

Management module status (see note 1; 1 per module):
  Solid green: Power is being supplied to module.
  Off: Power is not being supplied to module.
  Amber: Module is faulted.

I/O module status (see note 1; 1 per module):
  Solid green: Power is being supplied to module.
  Off: Power is not being supplied to module.
  Amber: Module is faulted.

BE port link (see note 2; 1 per Fibre Channel back-end port):
  Off: No link because of one of the following conditions: the cable is disconnected, the cable is faulted, or it is not a supported type.
  Solid green: 1 Gb/s or 2 Gb/s link speed.
  Solid blue: 4 Gb/s link speed.
  Blinking green then blue: Cable fault.

FE port link (see note 2; 1 per Fibre Channel front-end port):
  Off: No link because of one of the following conditions: the host is down, the cable is disconnected, an SFP is not in the port slot, the SFP is faulted, or it is not a supported type.
  Solid green: 1 Gb/s or 2 Gb/s link speed.
  Solid blue: 4 Gb/s link speed.
  Blinking green then blue: SFP or cable fault.

Note 1: LED is on the module handle.
Note 2: LED is next to the port connector.

DAE status LEDs


This section describes the following status LEDs and indicators:
- Front DAE and disk module status LEDs
- Enclosure address and bus ID indicators
- LCC and power/cooling module status LEDs

Front DAE and disk module status LEDs

Figure 13 shows the location of the DAE and disk module status LEDs that are visible from the front of the enclosure. Table 9 describes these LEDs.


Figure 13  Front DAE and disk module status LEDs (bezel removed)


Table 9  Meaning of the front DAE and disk module status LEDs

DAE power (1 LED):
  Off: DAE is not powered up.
  Solid green: DAE is powered up and back-end bus is running at 2 Gb/s.
  Solid blue: DAE is powered up and back-end bus is running at 4 Gb/s.

DAE fault (1 LED):
  Solid amber: On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.

Disk activity (1 per disk module):
  Off: Slot is empty or contains a filler module, or the disk is powered down by command, for example, as the result of a temperature fault.
  Solid green: Drive has power but is not handling any I/O activity (the ready state).
  Blinking green, mostly on: Drive is spinning and handling I/O activity.
  Blinking green at a constant rate: Drive is spinning up or spinning down normally.
  Blinking green, mostly off: Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.

Disk fault (1 per disk module):
  Solid amber: On when the disk module is faulty, or as an indication to remove the drive.

Enclosure address and bus ID indicators

Figure 14 shows the location of the enclosure address and bus ID indicators that are visible from the rear of the enclosure. In this example, the DAE is enclosure 2 on bus (loop) 1; note that the indicators for LCC A and LCC B always match. Table 10 describes these indicators.


Figure 14  Location of enclosure address and bus ID indicators

Table 10  Meaning of enclosure address and bus ID indicators

Enclosure address (8 indicators):
  Green: Displayed number indicates the enclosure address.

Bus ID (8 indicators):
  Blue: Displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling: LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.

DAE power/cooling module status LEDs

Figure 15 shows the location of the status LEDs for the power supply/system cooling modules (referred to as power/cooling modules). Table 11 describes these LEDs.


Figure 15  DAE power/cooling module status LEDs

Table 11  Meaning of DAE power/cooling module status LEDs

Power supply active (1 per supply):
  Green: On when the power supply is operating.

Power supply fault (see note; 1 per supply):
  Amber: On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple blower or ambient over-temperature condition has shut off power to the system.

Blower fault (see note; 1 per cooling module):
  Amber: On when a single blower in the power supply is faulty.

Note: The DAE continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple blower fault condition, and will power down the enclosure unless you replace a blower within two minutes.

DAE LCC status LEDs Figure 16 shows the location of the status LEDs for a link control card (LCC). Table 12 describes these LEDs.


Figure 16  DAE LCC status LEDs

Table 12  Meaning of DAE LCC status LEDs

LCC power (1 per LCC):
  Green: On when the LCC is powered up.

LCC fault (1 per LCC):
  Amber: On when either the LCC or a Fibre Channel connection is faulty. Also on during power on self test (POST).

Primary link active (1 per LCC):
  Green: On when a 2 Gb/s primary connection is active.
  Blue: On when a 4 Gb/s primary connection is active.

Expansion link active (1 per LCC):
  Green: On when a 2 Gb/s expansion connection is active.
  Blue: On when a 4 Gb/s expansion connection is active.

SPS status LEDs


Figure 17 shows the location of the SPS status LEDs that are visible from the rear. Table 13 describes these LEDs.


Figure 17  1200 W SPS status LEDs

Table 13  Meaning of 1200 W SPS status LEDs

Active (1 per SPS):
  Green: When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by AC line input.

On battery (1 per SPS):
  Amber: The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the file server writes all cached data to disk, and the event log records the event. Also on briefly during the battery test.

Replace battery (1 per SPS):
  Amber: The SPS battery is not fully charged and may not be able to serve its cache flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first. Replace the SPS as soon as possible.

Fault (1 per SPS):
  Amber: The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur. Replace the SPS as soon as possible.


Copyright 2008-2010 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks mentioned herein are the property of their respective owners.
