Overview ............................................................................................... 2
Pre-implementation tasks ................................................................... 4
Implementation worksheets ............................................................... 5
Connect external network cables ....................................................... 7
Configure storage for a Fibre Channel enabled system ............... 10
Configure the network ...................................................................... 21
Create a file system ............................................................................ 22
Delete the NFS export created during startup............................... 25
Create NFS exports ............................................................................ 26
Configure hosts .................................................................................. 30
Configure and test standby relationships....................................... 31
Appendix............................................................................................. 38
Overview
This section contains an overview of the NFS implementation
procedure and the host requirements for NFS implementation.
Procedure overview
Host requirements for NFS
Software
Celerra Network Server version 5.6.
For secure NFS using UNIX or Linux-based Kerberos:
Sun Enterprise Authentication Mechanism (SEAM) software or a Linux KDC running Kerberos version 5
Note: KDCs from other UNIX systems have not been tested.
Hardware
No specific hardware requirements
Network
No specific network requirements
Storage
No specific storage requirements
Pre-implementation tasks
Before you begin this NFS implementation procedure, ensure that you
have completed the following tasks.
Create a Powerlink account
Download and install the Navisphere Service Taskbar (NST)
The NST is available for download from the CLARiiON Tools page
on Powerlink and on the Applications and Tools CD that was
shipped with your system.
Use the NST to add new disk array enclosures (DAEs) to fully
implement your Celerra (not available for the NX4).
Implementation worksheets
Before you begin this implementation procedure, take a moment to fill
out the following implementation worksheets with the values for the
various devices you will need to create.
Create interface worksheet
Table 1    Create interface worksheet
Record the following values for each interface to be created:
Data Mover number
Device name or virtual device name
IP address
Netmask
Maximum Transmission Unit (MTU) (optional)
Virtual LAN (VLAN) identifier (optional)
Devices (optional)
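For reference, the interface values recorded in Table 1 are typically applied later from the Control Station with the server_ifconfig command. The following is a minimal sketch using hypothetical values (Data Mover server_2, device cge0, interface name cge0_1):

server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 192.168.10.5 255.255.255.0 192.168.10.255
server_ifconfig server_2 cge0_1 mtu=9000 vlan=100    (optional MTU and VLAN identifier values from the worksheet)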
The Create File System step creates a file system on a Data Mover.
This step can be repeated as needed to create additional file systems.
Read/Write Data Mover: server_2    server_3
NFS export worksheet
Connecting the 1U X-blade network ports
The Celerra NS20 and NS40 integrated systems and the NX4, NS-120,
and NS-480 unified storage systems have 1U blade enclosures. There
are three possible X-blades available to fill the 1U blade enclosure,
depending on the Celerra:
4-port copper Ethernet X-blade
2-port copper Ethernet and 2-port optical 1 GbE X-blade
2-port copper Ethernet and 2-port optical 10 GbE X-blade
To connect the desired blade network ports to your network system,
follow these guidelines depending on your specific blade
configuration.
Figure 1 shows the 4-port copper Ethernet X-blade. This X-blade is
available on the NX4, NS20, NS40, NS-120, or NS-480. It has four
copper Ethernet ports, labeled cge0 through cge3, available for
connections to the network system. Cable these ports as desired.
Figure 1    4-port copper Ethernet X-blade (ports shown: cge0-cge3, Com 1, Com 2, BE 0, BE 1, AUX 0, AUX 1, internal management module)
Figure 2    2-port copper Ethernet and 2-port optical 1 GbE X-blade (ports shown: fge0, fge1, cge0, cge1, Com 1, Com 2, BE 0, BE 1, AUX 0, AUX 1)
Figure 3    2-port copper Ethernet and 2-port optical 10 GbE X-blade (ports shown: fxg0, fxg1, cge0, cge1, Com 1, Com 2, BE 0, BE 1, AUX 0, AUX 1)
Connecting the 3U X-blade I/O module network ports
Currently, the Celerra NS-960 is the only unified storage system with
a 3U blade enclosure. The 3U blade enclosure utilizes different I/O
modules for personalized blade configurations. To connect the
desired blade I/O module network ports to your network system,
follow these guidelines based on your specific I/O module
configuration.
In the four-port Fibre Channel I/O module located in slot 0, the first
two ports, port 0 (BE 0) and port 1 (BE 1), connect to the array. The
next two ports, port 2 (AUX 0) and port 3 (AUX 1) optionally connect
to tape backup. Do not use these ports for connections to the network
system.
The other I/O modules appear in various slot I/O positions
(represented by x I/O slot ID) to create the supported blade options.
The ports in these I/O modules are available to connect to the
network system.
The four-port copper I/O module has four copper-wire
Gigabit Ethernet (GbE) ports, 0 to 3 (logically, cge-x-0 to
cge-x-3). Cable these ports as desired.
The two-port GbE copper and two-port GbE optical I/O
module has ports, 0 to 3 (logically, cge-x-0, cge-x-1, fge-x-0,
and fge-x-1). Cable these ports as desired.
The one-port 10-GbE I/O module has one port, 0 (logically,
fxg-x-0). Cable this port as desired.
Note: The x in the port names above is a variable representing the slot
position of the I/O module. Thus, for a four-port copper I/O module in
slot 2, the ports would be cge-2-0 through cge-2-3.
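To confirm which I/O modules and network ports a blade actually reports before cabling, the port list is commonly displayed from the Control Station; a minimal sketch, assuming the server_sysconfig command is available on this Celerra release:

server_sysconfig server_2 -pci    (lists the I/O modules and port names reported by the blade)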
Configure storage with Navisphere Manager
Figure 4
Figure 5
5. Enter the RAID Group properties and click Apply to create the
new RAID group, as shown in the example in Figure 6 on page 13.
Use RAID Group Parameter values applicable to your system.
For more information, see NAS Support Matrix on
http://Powerlink.EMC.com.
6. To complete the previous step, click Yes to confirm the operation,
click OK for the Success dialog box, and close (or click Cancel)
the Create RAID Group window.
Figure 6
7. Right-click the newly created RAID Group and select Bind LUN
to create a new LUN in the new RAID Group, as shown in the
example in Figure 7 on page 14.
8. Enter the properties for the new LUN(s) and click Apply.
Note: The LUN ID must be greater than or equal to 16 if the LUN is to be
managed with Celerra Manager.
Figure 7
For ATA disks, all LUNs in a RAID group must belong to the
same SP. The Celerra system will stripe across pairs of disks
from different RAID groups. ATA disks can be configured as
RAID 5 with seven disks or RAID 3 with five or nine disks.
Use the same LUN ID number for the Host ID number.
We recommend the following settings when creating the user
LUNs:
RAID Type: RAID 5 or RAID 1 for FC disks and RAID 5 or
RAID 3 for ATA disks.
LUN ID: Select the first available value, greater than or equal
to 16.
Element Size: 128
Rebuild Priority: ASAP
Verify Priority: ASAP
Enable Read Cache: Selected
Enable Write Cache: Selected
Enable Auto Assign: Cleared (off)
Number of LUNs to Bind: 2
Alignment Offset: 0
LUN Size: Must not exceed 2 TB
Note: If you are creating 4+1 RAID 3 LUNs, the Number of LUNs to Bind
value should be 1.
Figure 8
WARNING
Do not uncheck LUNs that are already checked in the Available
LUNs section when you open the dialog box. Unchecking these LUNs
will render your system inoperable.
12. In the Selected LUNs section, click in the white space in the Host
ID column and select a Host ID greater than 15 for each new
LUN to be added to the Celerra storage group. Click OK to add
the new LUN(s) to the Celerra storage group, as shown in
Figure 9 on page 18.
WARNING
The Host ID must be greater than or equal to 16 for user LUNs. An
incorrect Host ID value can cause serious problems.
13. To complete the previous step, click Yes to confirm the operation
to add LUNs to the storage group, and click OK for the Success
dialog box.
Figure 9
14. To make the new LUN(s) available to the Celerra system, you must
use Celerra Manager. Launch Celerra Manager by opening the
following URL in a web browser:
https://<control_station>
16. At the login prompt, log in as user root. The default password is
nasadmin.
17. If a security warning appears about the system's security
certificate being issued by an untrusted source, click Yes to accept
the certificate.
18. If a warning about a hostname mismatch appears, click Yes.
19. On the Celerra > Storage Systems page, click Rescan, as shown
in Figure 10.
Figure 10
CAUTION
Do not change the host LUN identifier of the Celerra LUNs after
rescanning. This may cause data loss or unavailability.
The user LUNs are now available to the Celerra system. When you
have finished Configure storage for a Fibre Channel enabled
system, go to Configure the network on page 21.
Figure 11
3. Select the Storage Pool radio button to specify where the file
system will be created, as shown in Figure 12 on page 23.
Figure 12
6. Designate the Storage Capacity of the file system and select any
other desired options.
Figure 13
Click OK.
When you have completed Delete the NFS export created during
startup, go to Create NFS exports on page 26.
Figure 14
3. Select a Data Mover that manages the file system from the
Choose Data Mover drop-down list on the New NFS export
page, as shown in Figure 15 on page 27.
Figure 15
The new NFS export will now appear on the NFS export screen,
as shown in Figure 16 on page 29.
Figure 16
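An NFS export can also be created from the Control Station command line with the server_export command; the following is a minimal sketch, assuming a hypothetical file system path /fs01 and a client host at 192.168.10.21:

server_export server_2 -Protocol nfs -option rw=192.168.10.21,root=192.168.10.21 /fs01
server_export server_2 -list    (verifies that the export is visible)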
Configure hosts
To mount an NFS export, you need the source, including the IP address
or the hostname of the server. You can collect these values from the
NFS export implementation worksheet. To use this new NFS export
on the network, do the following:
1. Open a UNIX prompt on the client computer connected to the
same subnet as the Celerra system. Use the values found on the
NFS worksheet on page 5 to complete this section.
2. Log in as root.
3. Enter the following command at the UNIX prompt to mount the
NFS export:
mount <data_mover_IP>:/<fs_export_name> /<mount_point>
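For example, with hypothetical worksheet values of 192.168.10.5 for the Data Mover interface, /fs01 for the export, and /mnt/fs01 for the local mount point:

mkdir -p /mnt/fs01
mount 192.168.10.5:/fs01 /mnt/fs01
df -h /mnt/fs01    (confirms that the export is mounted)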
Configure a standby relationship
CAUTION
The standby blade must have the same network capabilities
(NICs and cables) as the primary blades with which it will be
associated. This is because the standby blade will assume the
faulted primary blade's network identity (NIC IP and MAC
addresses), storage identity (controlled file systems), and
service identity (controlled shares and exports).
2. Define the standby configuration using Celerra Manager,
following the blade standby configuration recommendations:
a. Select <Celerra_name> > Data Movers >
<desired_primary_blade> from the left-hand navigation panel.
b. On the Data Mover Properties screen, configure the standby
blade for the selected primary blade by checking the box of the
desired Standby Mover and define the Failover Policy.
Figure 17
c. Click Apply.
Note: The blade configured as standby will now reboot.
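The same standby relationship can also be defined from the Control Station command line; a minimal sketch, assuming server_3 is the standby for server_2 and an automatic failover policy:

server_standby server_2 -create mover=server_3 -policy auto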
CAUTION
You must cable the failover blade identically to its primary blade.
If configured network ports are left uncabled when a failover
occurs, access to file systems will be disrupted.
To test the failover configuration, do the following:
1. Open an SSH session to the Control Station (CS) with an SSH
client such as PuTTY.
2. Log in to the CS as nasadmin. Change to the root user by
entering the following command:
su root
Note: The default password for root is nasadmin.
Sample output:
id type
1 1
2 4
Note: The command output above provides the state and the
names of the blades. Also, the type column designates the blade type
as 1 (primary) or 4 (standby).
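The listing above is typically produced with the nas_server command on the Control Station, for example:

nas_server -list    (lists each blade's id, type, state, and name)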
Example:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror   Obytes    Oerror   PhysAddr
****************************************************************************
fxg0   9000   0          0        0         0        0:60:16:32:4a:30
fxg1   9000   0          0        0         0        0:60:16:32:4a:31
mge0   9000   851321     0        812531    0        0:60:16:2c:43:2
mge1   9000   28714095   0        1267209   0        0:60:16:2c:43:1
cge0   9000   614247     0        2022      0        0:60:16:2b:49:12
cge1   9000   0          0        0         0        0:60:16:2b:49:13
Example:
[nasadmin@rtpplat11cs0 ~]$ server_standby server_2 -activate mover
server_2 :
server_2 : going offline
server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
server_2 : renamed as server_2.faulted.server_3
server_3 : renamed as server_2
Note: This command renames the primary and standby blades. In the
example above, server_2, the primary blade, was rebooted and renamed
server_2.faulted.server_3, and server_3 was renamed server_2.
Sample output:
id type
1 1
2 1
Note: In the command output above, each blade's state has
changed, and the type column designates both blades as type 1
(primary).
Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror   Obytes   Oerror   PhysAddr
****************************************************************************
fxg0   9000   0          0        0        0        0:60:16:32:4b:18
fxg1   9000   0          0        0        0        0:60:16:32:4b:19
mge0   9000   14390362   0        786537   0        0:60:16:2c:43:30
mge1   9000   16946      0        3256     0        0:60:16:2c:43:31
cge0   9000   415447     0        3251     0        0:60:16:2b:49:12
cge1   9000   0          0        0        0        0:60:16:2b:48:ad
Note: The MAC addresses in the PhysAddr column have changed,
reflecting that the failover completed successfully.
8. After the blades appear with reason code 5, manually restore the
failed over blade to its primary status by typing the following
command:
# server_standby <primary_blade> -restore mover
Example:
server_standby server_2 -restore mover
server_2 :
server_2 : going standby
server_2.faulted.server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
server_2 : renamed as server_3
server_2.faulted.server_3 : renamed as server_2
Note: This command renames the primary and standby blades. In the
example above, server_2, the standby blade that had been acting as
primary, was rebooted and renamed server_3, and
server_2.faulted.server_3 was renamed server_2.
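The reason codes referenced in step 8 are typically checked from the Control Station with the getreason utility; a minimal sketch, assuming the standard /nas/sbin path:

/nas/sbin/getreason    (reason code 5 indicates that a blade has booted and is contacted)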
Sample output:
id type
1 1
2 4
Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror   Obytes    Oerror   PhysAddr
****************************************************************************
fxg0   9000   0          0        0         0        0:60:16:32:4a:30
fxg1   9000   0          0        0         0        0:60:16:32:4a:31
mge0   9000   851321     0        812531    0        0:60:16:2c:43:2
mge1   9000   28714095   0        1267209   0        0:60:16:2c:43:1
cge0   9000   314427     0        1324      0        0:60:16:2b:49:12
cge1   9000   0          0        0         0        0:60:16:2b:49:13
Appendix
Disk group and disk volume configurations
Table 2    Disk group and disk volume configurations

Disk group configuration    Attach type      Storage profile
RAID 5 8+1                  Fibre Channel
RAID 5 4+1                  Fibre Channel    clar_r5_performance
RAID 1                      Fibre Channel    clar_r1
RAID 5 4+1                  Fibre Channel    clar_r5_performance
RAID 1                      Fibre Channel    clar_r1
RAID 6 4+2, RAID 6 12+2     Fibre Channel    clar_r6
RAID 5 6+1                  ATA              clarata_archive
RAID 5 4+1 (CX3 only)       ATA              clarata_archive
RAID 3 4+1, RAID 3 8+1      ATA              clarata_r3
RAID 6 4+2, RAID 6 12+2     ATA              clarata_r6
RAID 5 6+1 (CX3 only)       LCFC             clarata_archive
RAID 5 4+1 (CX3 only)       LCFC             clarata_archive
RAID 3 4+1, RAID 3 8+1      LCFC             clarata_r3
RAID 6 4+2, RAID 6 12+2     LCFC             clarata_r6
RAID 5 2+1                  SATA             clarata_archive
RAID 5 3+1                  SATA             clarata_archive
RAID 5 4+1                  SATA             clarata_archive
RAID 5 5+1                  SATA             clarata_archive
                            SATA             clarata_r10
RAID 6 4+2                  SATA             clarata_r6
RAID 5 2+1                  SAS              clarsas_archive
RAID 5 3+1                  SAS              clarsas_archive
RAID 5 4+1                  SAS              clarsas_archive
RAID 5 5+1                  SAS              clarsas_archive
                            SAS              clarsas_r10
RAID 6 4+2                  SAS              clarsas_r6