
NAS Foundations


Welcome to NAS Foundations. The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying this course. EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their entirety. Copyright 2005 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. Celerra, CLARalert, CLARiiON, Connectrix, Dantz, Documentum, EMC, EMC2, HighRoad, Legato, Navisphere, PowerPath, ResourcePak, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, and "where information lives" are registered trademarks. Access Logix, AutoAdvice, Automated Resource Manager, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CentraStar, CLARevent, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, NetWin, OnAlert, OpenScale, Powerlink, PowerVolume, RepliCare, SafeLine, SAN Architect, SAN Copy, SAN Manager, SDMS, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, Universal Data Tone, and VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.


NAS Foundations
Upon completion of this course, you will be able to:
- Identify the concepts and value of Network Attached Storage
- List environmental aspects of NAS
- Identify the EMC NAS platforms and their differences
- Identify and describe key Celerra software features
- Identify and describe the Celerra management software offerings
- Identify and describe key Windows-specific options with respect to EMC NAS environments
- Identify and describe NAS business continuity and replication options with respect to the various EMC NAS platforms
- Identify and describe key NAS backup and recovery options


These are the learning objectives for this training. Please take a moment to read them.


Network Attached Storage

NAS OVERVIEW


Let's start by looking at an overview of Network Attached Storage (NAS).


What Is Network-Attached Storage?

- Built on the concept of shared storage on a Local Area Network
- Leverages the benefits of a network file server and network storage
- Utilizes industry-standard network and file sharing protocols

File Server + Network-attached storage = NAS
[Slide diagram: UNIX and Windows clients running applications, connected over a network to shared NAS storage]

The benefit of NAS is that it now brings the advantages of networked storage to the desktop through file-level sharing of data via a dedicated device. NAS is network-centric. Typically used for client storage consolidation on a LAN, NAS is a preferred storage capacity solution for enabling clients to access files quickly and directly. This eliminates the bottlenecks users often encounter when accessing files from a general-purpose server. NAS provides security and performs all file and storage services through standard network protocols, using TCP/IP for data transfer, Ethernet and Gigabit Ethernet for media access, and CIFS, HTTP, FTP, and NFS for remote file service. In addition, NAS can serve both UNIX and Microsoft Windows users seamlessly, sharing the same data between the different architectures. For client users, NAS is the technology of choice for providing storage with unencumbered access to files. Although NAS trades some performance for manageability and simplicity, it is by no means a lazy technology. Gigabit Ethernet allows NAS to scale to high performance and low latency, making it possible to support a myriad of clients through a single interface. Many NAS devices support multiple interfaces and can support multiple networks at the same time.


Why NAS?

- Highest availability
- Scales for growth
- Avoids file replication
- Increases flexibility
- Reduces complexity
- Improves security
- Reduces costs

[Slide diagram: Internet traffic passes through a firewall to an internal network of Web servers (S1 through Sn) in the data center, backed by NAS]

Shared applications can now achieve the availability and scalability benefits of networked storage. Centralizing file storage reduces system complexity and system administration costs. Backup, restore, and disaster recovery can be simplified.


NAS Operations

- All I/O operations use file-level I/O protocols: no awareness of disk volumes or disk sectors
- File system is mounted remotely using a network file access protocol, such as Network File System (NFS) or Common Internet File System (CIFS)
- I/O is redirected to the remote system
- Utilizes mature data transport (e.g., TCP/IP) and media access protocols
- NAS device assumes responsibility for organizing data (R/W) on disk and managing cache

[Slide diagram: an application host sends file I/O across an IP network to a NAS device, whose disk is either direct-attached or on a SAN]

One of the key differences of a NAS disk device, compared to DAS or other networked storage solutions such as SAN, is that all I/O operations use file level I/O protocols. File I/O is a high level type of request that, in essence, specifies only the file to be accessed, but does not directly address the storage device. This is done later by other operating system functions in the remote NAS appliance. A file I/O specifies the file. It also indicates an offset into the file. For instance, the I/O may specify, "Go to byte 1000 in the file (as if the file were a set of contiguous bytes), and read the next 256 bytes beginning at that position." Unlike block I/O, there is no awareness of a disk volume or disk sector in a file I/O request. Inside the NAS appliance, the operating system keeps track of where files are located on disk. The OS issues a block I/O request to the disks to fulfill the file I/O read and write requests it receives. The disk resources can be either directly attached to the NAS device, or they can be attached using a SAN.
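To make the distinction concrete, here is a minimal sketch of a file-level read issued from a client using standard UNIX tools; the mount point and file name are hypothetical. The client names only the file, offset, and length, and the NAS device translates that into block I/O behind the scenes:

    # Read 256 bytes starting at byte offset 1000 of a file on an
    # NFS-mounted NAS file system (mount point /mnt/nas is hypothetical).
    # No disk volume or sector is ever referenced by the client.
    dd if=/mnt/nas/somefile bs=1 skip=1000 count=256 2>/dev/null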


NAS Architecture

- NFS and CIFS handle file requests to the remote file system
- I/O is encapsulated by the TCP/IP stack to move over the network
- NAS device converts requests to block I/O and reads or writes data to NAS disk storage

[Slide diagram: file I/O travels from the client stack (Application, Operating System, I/O Redirector, NFS/CIFS, TCP/IP Stack, Network Interface) to the NAS stack (Network Interface, TCP/IP Stack, Network File Protocol Handler, NAS Operating System, Drive Protocol (SCSI), Storage Network Protocol (Fibre Channel)) and on to storage]

The Network File System (NFS) protocol and Common Internet File System (CIFS) protocol handle file I/O requests to the remote file system, which is located in the NAS device. I/O requests are packaged by the initiator into the TCP/IP protocols to move across the IP network. The remote NAS file system converts the request to block I/O, and reads or writes the data to the NAS disk storage. To return data to the requesting client application, the NAS appliance software re-packages the data to move it back across the network. Here, we see an example of an I/O being directed to the remote NAS device, and the different protocols that play a part in moving the request back and forth, to the remote file system located on the NAS server.


NAS Device

- Single-purpose machine or component that provides dedicated, high-performance, high-speed communication of file data
- Is sometimes called a filer or a network appliance
- Uses one or more Network Interface Cards (NICs) to connect to the customer network
- Uses a proprietary, optimized operating system; DART (Data Access in Real Time) is EMC's NAS operating system
- Uses industry-standard storage protocols to connect to storage resources

[Slide diagram: a client application on an IP network reaches the NAS device, which runs network drivers and protocols (NFS, CIFS), the NAS device OS (DART), and storage drivers and protocols in front of disk storage]

A NAS server is not a general-purpose computer; it runs a significantly streamlined and tuned OS compared to a general-purpose computer. It is sometimes called a filer because it focuses all of its processing power solely on file service and file storage. The NAS device is sometimes called a network appliance, referring to the plug and play design of many NAS devices. Common network interface cards (NICs) include Gigabit Ethernet (1000 Mb/s), Fast Ethernet (100 Mb/s), ATM, and FDDI. Some NAS devices also support NDMP, Novell NetWare, and HTTP protocols. The NAS operating system for Network Appliance products is called Data ONTAP. The NAS operating system for EMC Celerra is DART (Data Access in Real Time). These operating systems are tuned to perform file operations including open, close, read, write, etc. The NAS device will generally use a standard drive protocol to manage data to and from the disk resources.


NAS Applications

- CAD/CAM environments, where widely dispersed engineers have to share and modify design drawings
- Serving Web pages to thousands of workstations at the same time
- Easily sharing company-wide information among employees
- Database applications with:
  - Low transaction rate
  - Low data volatility
  - Smaller size
  - No performance constraints


Database applications have traditionally been implemented in a SAN architecture. The primary reason is the deterministic performance of a SAN. This characteristic is especially applicable for very large, online transactional applications with high transaction rates and high data volatility. However, NAS might be appropriate where the database transaction rate is low and performance is not constrained. Extensive application profiling should be done in order to understand the specific database application requirements and whether, in fact, a NAS solution would be appropriate. When considering a NAS solution, the databases should:
- be sequentially accessed, non-indexed, or have a flat file structure
- have a low transaction rate
- have low data volatility
- be relatively small
- not have performance/timing constraints


NAS Components and Networking Infrastructure

AN INTRODUCTION


This section will introduce NAS components and networking infrastructures.


What is a Network?

- LAN
- Physical Media
- WAN

[Slide diagram: two sites, Site 1 and Site 2, connected by a wide area network]

LAN: A network is any collection of independent computers that communicate with one another over a shared network medium. LANs are networks usually confined to a geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but often link hundreds of computers used by thousands of people.

Physical Media: An important part of designing and installing a network is selecting the appropriate medium. There are several types in use today: Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), and Token Ring. Ethernet is popular because it strikes a good balance between speed, cost, and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today.

WAN: Wide area networking combines multiple LANs that are geographically separated. This is accomplished by connecting the different LANs using services such as dedicated leased phone lines, dial-up phone lines (both synchronous and asynchronous), satellite links, and data packet carrier services. Wide area networking can be as simple as a modem and remote access server for employees to dial into, or it can be as complex as hundreds of branch offices globally linked, using special routing protocols and filters to minimize the expense of sending data over vast distances.


Physical Components

- Network Interface Card (NIC)
- Switches
- Routers

[Slide diagram: two subnets, 155.10.10.XX and 155.10.20.XX, each with NIC-attached hosts and a switch, joined by a router]



Network Interface Card: A network topology is the geometric arrangement of nodes and cable links in a LAN, and is used in two general configurations: bus and star. Network interface cards, commonly referred to as NICs, are used to connect a host, server, workstation, PC, etc. to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. The rate at which data passes back and forth can differ from card to card.

Switches: LAN switches can link multiple network connections together. Today's switches will accept and analyze the entire packet of data to catch certain packet errors, and keep them from propagating through the network, before forwarding it to its destination. Each of the segments attached to an Ethernet switch has the full bandwidth of the switch port: 10 Mb/s, 100 Mb/s, or 1 Gb/s.

Routers: Routers pass traffic between networks. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets, so that only traffic destined for particular IP addresses can pass between segments.


Network Protocols

- Network transport protocols
- Network filesystem protocols

[Slide diagram: NIC-attached hosts on subnets 155.10.10.XX and 155.10.20.XX, connected through switches and a router]

Network transport protocols are standards that allow computers to communicate. A protocol defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for handling lost or damaged transmissions, or "packets". Network transport protocols are used to manage the movement of data packets to devices communicating across the network. UDP and TCP are examples of transport protocols. UDP is used in non-connection oriented networks, while TCP is used to manage the movement of data packets in connection oriented networks. In a non-connection oriented communication model, the data is sent out to a recipient using a best effort approach, with no acknowledgement of the receipt of the data being sent back to the originator by the recipient. Error correction and resend must be controlled by a higher layer application to ensure data integrity. In a connection oriented model, all data packets sent by an originator are acknowledged by the recipient, and transmission errors/lost data packets are managed at the protocol layer. TCP/IP (for UNIX, Windows NT, Windows 95, and other platforms), IPX (for Novell NetWare), DECnet (for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are examples of network transport protocols in use today. Network filesystem protocols are used to manage how a data request will be processed once it reaches its final destination. NFS, the Network File System protocol, is used to manage file access in a networked UNIX environment; it is supported by both UDP and TCP transport protocols. CIFS, the Common Internet File System protocol, is used to manage file access in a networked Windows environment, and it is supported by both UDP and TCP transport protocols.


Network Addressing

- IP Addressing
- DHCP
- DNS

[Slide diagram: on subnet 155.10.10.XX, a DHCP server (155.10.10.11) and DNS server (155.10.10.12) serve hosts Mary (155.10.10.14) and Peter (155.10.10.13); a router leads to subnet 155.10.20.XX, where the host Account1 sits behind a switch at 155.10.20.11]

Several things must happen in order for computers attached to a network to be able to communicate data across the network. First, the computer must have a unique network address, referred to as the IP address. It is a four-octet number in the commonly used IP version 4 (for example, 155.10.20.11) that uniquely identifies this computer to all other computers connected to the network. An address can be assigned in one of two ways: dynamically or statically. A static address requires entering the IP address that the computer will use in a local file. This can be quite a problem from an administrative view, as well as a source of conflict. If two computers on the same subnet are assigned the same IP address, they would not be able to communicate. Another approach is to set up a computer on the network to dynamically assign an IP address to a host when it joins the network. This is called the Dynamic Host Configuration Protocol (DHCP server). In our example, the host Mary is assigned an IP address 155.10.10.14, and the host Peter is assigned an IP address 155.10.10.13 by the DHCP server. The NAS device, Account1, is a file server. Servers normally will have a statically assigned IP address. In this example, it has the IP address 155.10.20.11. A second requirement for communications is to know the address of the recipient of the communication. The more common approach is to communicate by name, as for example, the name you place on a letter. However, the network uses numerical addresses. IP addresses can be managed in three ways. The first approach is to enter the IP address into the application (the IP address in place of www.x.com in your browser). The second is to maintain a local file with host names and associated IP addresses. The third is a hierarchical database called Domain Name Service (DNS), which resolves host names to IP addresses. In our example, if someone on host Mary wants to talk to host Peter, it is the DNS server that resolves Peter to 155.10.10.13.
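As a quick illustration, the three approaches map to commands a UNIX client could run; host names and addresses are taken from the example above, and exact tools vary by platform:

    # 1. Use the numeric IP address directly - no name resolution involved
    ping 155.10.10.13
    # 2. Consult the local hosts file, which pairs names with addresses
    grep Peter /etc/hosts          # e.g. "155.10.10.13  Peter"
    # 3. Ask the DNS server to resolve the host name to its IP address
    nslookup Peter                 # returns 155.10.10.13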


Volume and Files

- Create Volumes
- Create Network Filesystem

[Slide diagram: the array presents volumes to the NAS device Account1 (155.10.20.11), where the file system /Acct_Rep is created]

Create Array Volume: The first step in a network attached storage environment is to create logical volumes on the array and assign each a LUN identifier. The LUN will then be presented to the NAS device.

Create NAS Volume: The NAS device will perform a discovery operation when it first starts, or when directed. In the discovery operation, the NAS device will see the array LUN as a physical drive. The next task is to create logical volumes at the NAS device level. The Celerra will create meta volumes using the volume resources presented by the array.

Create Network File System: When the logical volumes are created on the Celerra, it can use them to create a file system. In this example, we have created a file system /Acct_Rep on the NAS server Account1.

Mount File System: Once the file system has been created, it must be mounted. With the file system mounted, we can then move to the next step, which is publishing the file system on the network.
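On a Celerra, these steps correspond to the nas_ and server_ command families covered later in this course. The following is a minimal sketch, assuming a Data Mover named server_2 and an AVM storage pool already available from the attached array; exact options vary by DART release:

    # Create a 10 GB file system named Acct_Rep from an AVM storage pool
    nas_fs -name Acct_Rep -create size=10G pool=clar_r5_performance
    # Create a mount point on Data Mover server_2, then mount the file system
    server_mountpoint server_2 -create /Acct_Rep
    server_mount server_2 Acct_Rep /Acct_Rep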


Publish

- Export
- Share

[Slide diagram: user Mary on a Windows client (155.10.10.14) accesses a share, and user Peter on a UNIX client (155.10.10.13) accesses an export; both belong to group SALES and reach /Acct_Rep on the NAS device ACCOUNT1 (155.10.20.11)]

Now that a network file system has been created, there are two ways it can be accessed using the network. The first method is through the UNIX environment. This is accomplished by performing an export. The export publishes to those UNIX clients who can mount (access) the remote file system. The export is published using NFS. Access permissions are assigned when the export is published. The second method is through the Windows environment. This is accomplished by publishing a share. The share publishes to those Windows clients who map a drive to access the remote file system. The share is published using CIFS. Access permissions are assigned when the share is published. In our example, we may only allow Mary and Peter, who are in the Sales organization, to have share or export access. At this level, NFS and CIFS are performing the same function, but are used in different environments. In our example, all members of the group SALES, which includes the users Mary and Peter, are granted access to /Acct_Rep.
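On a Celerra, both forms of publication use the server_export command. Here is a hedged sketch assuming Data Mover server_2; the host name is from the example above, and option syntax varies by DART release:

    # Publish /Acct_Rep as an NFS export for UNIX clients
    server_export server_2 -Protocol nfs -option rw=Peter /Acct_Rep
    # Publish the same file system as a CIFS share for Windows clients
    server_export server_2 -Protocol cifs -name Acct_Rep /Acct_Rep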


Client Access

- Mount
- Map

[Slide diagram: Peter's UNIX client issues nfsmount and Mary's Windows client maps a drive; both paths lead to /Acct_Rep on ACCOUNT1 (155.10.20.11)]

To access the network file system, the client must either mount a directory or map a drive pointing to the remote file system. Mount is a UNIX command performed by a UNIX client to set a local directory pointer to the remote file system. The mount command uses the NFS protocol to mount the export locally. For a UNIX client to perform this task, it will execute the nfsmount command. The format of the command is:

nfsmount <name of the NAS server>:/<name of the remote file system> /<name of the local directory>

For example:

nfsmount Account1:/Acct_Rep /localAcct_Rep

For a Windows client to perform this task, it will execute a map network drive. The sequence is My Computer > Tools > Map Network Drive. Select the drive letter and provide the server name and share name in the Folder field. For example:

G:
\\Account1\Acct_Rep

If you make a comparison, the same information is provided: the local drive (Windows) or the local directory, the name of the NAS server, and the name of the export or share.
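Side by side, the equivalent command-line forms would look like this; the standard NFS mount syntax is shown for comparison, and net use is the Windows command-line counterpart of Map Network Drive:

    # UNIX client: attach the export to a local directory
    mount Account1:/Acct_Rep /localAcct_Rep
    # Windows client: map drive letter G: to the published share
    net use G: \\Account1\Acct_Rep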


File Permissions

- Creates File
- File Request

[Slide diagram: Mary (Windows client) and Peter (UNIX client), both in group SALES, work against /Acct_Rep on Account1 (155.10.20.11), which holds the files MRPT1 and PRPT2 on the array]

Creates file: Once access is gained by the client, files can be created on the remote file system. When a file is created by a client, normal permissions are assigned. The client can also modify the original permissions assigned to a file. File permission is changed in UNIX using the chmod command. File permission in Windows is changed by right-clicking on the selected file, then selecting Properties > Security, and adding or removing groups and permissions. It should be noted that in order to modify the file permissions, one must have the permission to make the change.

File request: If a request for a file is received by the NAS server, the NAS server will first authenticate the user, either locally or over the network. If the user identity is confirmed, then the user will be allowed to perform the operations contained in the file permissions for that user, or for the group to which the user belongs. In our example, user Mary on host Mary creates a file MRPT1 on the NAS server Account1. She assigns herself the normal permissions for this file, which allow her to read and write to this file. She also limits file permissions for other members of the group SALES to read only. User Peter on host Peter is a member of the group SALES. Peter has access to the export /Acct_Rep. If user Peter attempts to write to file MRPT1, he would be denied the permission to write to the file.
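From a UNIX client, Mary's arrangement could be expressed with mode bits; a small sketch of the "owner read/write, group read-only" case described above:

    # Owner (Mary) gets read/write, group SALES gets read-only, others get nothing
    chmod 640 MRPT1
    # Verify the result; Peter, as a SALES member, can read but not write
    ls -l MRPT1        # -rw-r----- 1 Mary SALES ... MRPT1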


EMC NAS Platforms

PRODUCTS


Let's examine the current NAS products offered by EMC.


EMC NAS Platforms

- Broadest range of NAS products
- Simple Web-based management

NetWin 110: data integrity; Intel-based server; NAS direct attach; CLARiiON AX100; WSS 2003
NetWin 110, 200: data integrity; Intel-based server; NAS gateway to SAN; CLARiiON; WSS 2003
NS500, NS600, NS700: high availability; 1 or 2 Data Movers; integrated NAS; CLARiiON; DART
NS500G, NS600G, NS700G: high availability; 1 or 2 Data Movers; NAS gateway to SAN; CLARiiON, Symmetrix; DART
NS704G: advanced clustering; 4 Data Movers; NAS gateway to SAN; CLARiiON, Symmetrix; DART
CNS: advanced clustering; 2-14 Data Movers; NAS gateway to SAN; CLARiiON, Symmetrix; DART

An important decision customers must make is: What is the right information platform that meets my business requirements? EMC offers the broadest range of NAS platforms. EMC makes it easy. Rate your requirements and choose your solution.


Celerra NAS - SAN Scalability

- Consolidated storage infrastructure for all applications
- NAS front end scales independently of SAN back end:
  - Connect to multiple Symmetrix and CLARiiONs
  - Improved utilization
- Allocate storage to Celerra and servers as needed:
  - Easy to move filesystems among Data Movers
  - Online filesystem growth
- Centralized management for SAN and NAS

[Slide diagram: Windows and UNIX clients reach Celerra Golden Eagle/Eagle and Celerra NSX00G/NSX00GS front ends, which connect through a Connectrix SAN to the CLARiiON CX family and Symmetrix DMX family]

One of the reasons that Celerra Golden Eagle scales impressively is its architecture, which separates the NAS front end (Data Movers) from the SAN back end (Symmetrix or CLARiiON). This allows the front end and back end to grow independently. Customers can merely add Data Movers to the Celerra Golden Eagle to scale the front-end performance to handle more clients. As the amount of data increases, you can add more disks, or the Celerra Golden Eagle can access multiple Symmetrix or CLARiiONs. This flexibility leads to improved disk utilization. Celerra Golden Eagle supports simultaneous SAN and NAS access to the CLARiiON and Symmetrix. Celerra Golden Eagle can be added to an existing SAN, and general purpose servers can then access unused back-end capacity. This extends the improved utilization, centralized management, and TCO benefits of SAN plus NAS consolidation to Celerra Golden Eagle, Symmetrix, and CLARiiON. The configuration can also be reconfigured via software. Since all Data Movers can see the entire file space, it is easy to reassign filesystems to balance the load. In addition, filesystems can be extended online as they fill. Even though the architecture splits the front end among multiple Data Movers and a separate SAN back end, the entire NAS solution can be managed as a single entity. The Celerra NSx00G (configured with two Data Movers) and the Celerra NSx00GS (configured with a single Data Mover) connect to a CLARiiON CX array through a Fibre Channel switch. Celerra NSx00G/NSx00GS supports simultaneous SAN and NAS access to the CLARiiON CX family.


Celerra Family Hardware

NAS FRAME BUILDING BLOCKS


Let's take a closer look at the hardware components of the Celerra family.


Celerra Family Control Station Hardware

- CNS / CFS Style: Golden Eagle and Eagle frame

[Slide photo: the Control Station within the Golden Eagle and Eagle frame]

The Control Station provides the controlling subsystem of the Celerra, as well as the management interface to all file server components. The Control Station provides a secure user interface as a single point of administration and management for the whole Celerra solution. Control Station administrative functions are accessible via the local console, Telnet, or a Web browser. The Control Station is based on a single Intel processor, with high memory capacity. Depending on the model, the Control Station may have internal storage. Currently, only the NS and Golden Eagle frame series have this feature.


Celerra Family Control Station Hardware (cont.)

- NS Series Style

[Slide photo: the NS-series Control Station with Disk Array Enclosures]

This is the NS range Control Station format.


Celerra Family Data Mover Hardware

- Single or dual Intel processors
- PCI or PCI-X based
- High memory capacity
- Multi-port network cards
- Fibre Channel connectivity to storage arrays
- No internal storage devices
- Redundancy mechanism

[Slide photos: Data Movers in the Golden Eagle and Eagle frame and in the NS 6XX frame]

Each Data Mover is an independent, autonomous file server that transfers requested files to clients and will remain unaffected should a problem arise with another Data Mover. The multiple Data Movers (up to 14 in the Eagle and Golden Eagle frames) are managed as a single entity. Data Movers are hot pluggable and can be configured with standbys to implement N-to-1 availability. A Data Mover (DM) connects to a LAN through Fast Ethernet or Gigabit Ethernet. The default name for a Data Mover is server_n, where n is its slot location. For example, in the Golden Eagle/Eagle frame, a Data Mover can be in slot location 2 through 15 (i.e., server_2 through server_15). There is no remote login capability on the DM, nor do they run any binaries (very secure). Data Mover redundancy is the mechanism by which the Celerra family reduces the network data outage in the event of a Data Mover failure. The ability to fail over the Data Movers is achieved by the creation of a Data Mover configuration database on the Control Station system volumes, and is managed via the Control Station. No Data Mover failover will occur if the Control Station is not available for some reason.


NAS Reference Documentation

- NAS Support Matrix:
  - Data Movers
  - Control Stations
  - Software supported features

www.emc.com/horizontal/interoperability

The NAS Support Matrix provides support information on the Data Movers and Control Station models, NAS software version, supported features, storage models, and microcode. This interoperability reference can be found at: http://www.emc.com/horizontal/interoperability


Celerra Family Software

SOFTWARE OPERATING SYSTEM


Now, let's look at the operating system software used by the Celerra family.


Celerra Software Operating Systems

- Linux 7.2
  - An industry-hardened and EMC-modified operating system loaded on the Control Station to provide a secure NAS management environment
  - Growing in popularity and corporate acceptance
- DART (Data Access in Real Time)
  - A highly specialized operating system, loaded on the Data Movers, designed to optimize network traffic input/output throughput
  - Multi-threaded to optimize the load balancing capabilities of the multiprocessor Data Movers
  - Advanced volume management (UxFS): large file size and filesystem support; ability to extend filesystems online; metadata logging for fast recovery; striped volume support
  - Feature-rich to support the varied specialized capabilities of the Celerra range

Linux OS is installed on the Control Station. Control Station OS software is used to install, manage, and configure the Data Movers, monitor the environmental conditions and performance of all components, and implement the Call Home and dial-in support feature. Typical administration functions include volume and filesystem management, configuration of network interfaces, creation of filesystems, exporting filesystems to clients, performing filesystem consistency checks, and extending filesystems. The OS that the Data Movers run is EMC's Data Access in Real Time (DART) embedded system software, which is optimized for file I/O, to move data from the EMC storage array to the network. DART supports standard network and file access protocols: NFS, CIFS, and FTP.


Celerra Family

SOME KEY HIGH AVAILABILITY FEATURES


Let's examine some of the high availability features found in the Celerra family.


Control Station and Data Mover Standby

- For hardware high availability, the EMC NAS frames implement both Control Station and Data Mover failover capabilities
- This means that, in the simplest configuration, an equivalent system within the frame awaits a possible failure of the active component, ready to assume the configuration and production role with minimal outage to the end users
- As the standby system is the equivalent of the production system, there is no performance or management impact to the environment

Hardware high availability is achieved by having equivalent systems contained within the NAS frame configured as standby units for one or more primary systems. This is made possible by the configuration database maintained on the Control Station; managed failover is controlled by the Control Station. A standby system is pointed to a specific location in the configuration database so that it can assume the complete personality of the failed primary system. However, if the Control Station itself is not available when a primary system fails, then failover cannot occur until the Control Station is restored. Standby Data Mover configuration options:
1. Each standby Data Mover as a standby for a single primary Data Mover
2. Each standby Data Mover as a standby for a group of primary Data Movers
3. Multiple standby Data Movers for a primary Data Mover
These standby Data Movers are powered and ready to assume the personality of their associated primary Data Movers in the event of a failure.


Data Mover Standby (continued)

- Data Mover failover is a policy-driven mechanism controlled by the Control Station
- The policy options are as follows:
  - Automatic: the Control Station detects a Data Mover failure, powers down the failed Data Mover, and brings the designated standby Data Mover online with the failed DM's personality
  - Retry: the Control Station detects a Data Mover failure and tries to reboot the failed Data Mover; if the reboot does not clear the error, it powers down the failed Data Mover and brings the designated standby Data Mover online with the failed DM's personality
  - Manual: no action is taken, and administrator intervention is required for failover
- In all cases, failback is a manual process

How does Data Mover failover work? Through constant Data Mover monitoring by the Control Station. This is a policy-driven solution, and the automatic failover setting of the policy works in the following fashion:
- the Control Station detects a Data Mover problem
- the failing Data Mover is taken offline
- the pre-defined standby Data Mover assumes the network identity of the failed Data Mover, including the MAC and IP addresses
This process takes seconds to minutes to complete. The standby Data Mover continues serving files to the failed Data Mover's NFS and CIFS clients. Once the failed Data Mover is replaced, it will resume its role as the active Data Mover with administrator-managed failback, and the standby Data Mover will resume its standby role. A single Celerra Data Mover can be configured to act as a standby for several Data Movers. There can also be many standby Data Movers in a single Celerra cabinet, each backing up its own group of Data Movers. The number of standbys configured depends on how critical the application is and how much risk can be tolerated.
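Standby relationships and the failover policy are set from the Control Station CLI. The following is a hedged sketch, assuming server_3 is to stand by for server_2; exact syntax varies by DART release:

    # Configure server_3 as the standby for server_2 with the automatic policy
    server_standby server_2 -create mover=server_3 -policy auto
    # After the failed Data Mover is repaired, fail back manually
    server_standby server_2 -restore mover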


Network FailSafe Device

- Network outages due to environmental failure are more common than Data Mover failures
- Network FailSafe Device:
  - DART OS mechanism to minimize data access disruption due to these failures
  - A logical device is created using either physical ports or other logical ports, combined to create redundant groups of ports
  - Logically grouped Data Mover network ports monitor network traffic on the ports
  - The active FailSafe Device port senses traffic disruption
  - The standby (non-active) port assumes the IP address and Media Access Control address in a very short space of time, thus reducing data access disruption

Having discussed the maintenance of data access via redundant Data Movers, we will now discuss the same concept utilizing network port mechanisms. First, let's look at the Network FailSafe device. Network outages due to environmental failures are more common than Data Mover failures. To minimize data access disruption due to these failures, the DART OS has a mechanism, the Network FailSafe Device, which is environment agnostic. This is a mechanism by which the network ports of a Data Mover may be logically grouped into a partnership, which will monitor network traffic on the ports. If the currently active port senses a disruption of traffic, the standby (non-active) port will assume the active role in a very short space of time, thus reducing data access disruption. The way this works is that a logical device is created, using either physical ports or other logical ports, combined together to create redundant groups of ports. In normal operation, the active port carries all network traffic. The standby (non-active) port remains passive until a failure is detected. Once a failure has been detected by the FailSafe Device, this port assumes the network identity of the active port, including IP address and Media Access Control address. Having assumed the failed port identity, the standby port continues the network traffic. Network disruption due to this changeover is minimal, and may only be noticed in a high transaction oriented NAS implementation, or in CIFS environments due to the connection-oriented nature of the protocol. There are several benefits achieved by configuring the network FailSafe device:
1. Configuration is handled transparently to client access
2. The ports that make up the FailSafe device need not be of the same type
3. Rapid recovery from a detected failure
4. It can be combined with logical aggregated port devices to provide even higher levels of redundancy
Although the ports that make up the FailSafe device need not be of the same type, care must be taken to ensure that, once failover has occurred, client expected response times remain relatively the same and data access paths are maintained.
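A hedged sketch of creating a FailSafe Network device from the Control Station, assuming two ports named cge0 and cge1 on Data Mover server_2; device names and option syntax vary by DART release:

    # Combine ports cge0 and cge1 into a single FailSafe Network device, fsn0
    server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "device=cge0,cge1"
    # Assign the IP address to the logical device rather than a physical port
    server_ifconfig server_2 -create -Device fsn0 -name fsn0_if -protocol IP 155.10.20.11 255.255.255.0 155.10.20.255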


Link Aggregation - High Availability

- Link aggregation is the combining of two or more data channels into a single data channel for high availability
- Two methods: IEEE 802.3ad LACP and Cisco FastEtherChannel
- IEEE 802.3ad LACP:
  - Combining links for improved availability: if one port fails, other ports take over
  - Industry standard IEEE 802.3ad
  - Combines 2-12 Ethernet ports into a single virtual link
  - Deterministic behavior
  - Does not increase single client throughput

[Slide diagram: a Celerra connected by an aggregated link to an industry standard switch]
Having discussed the network FailSafe device, we will now look at the two link aggregation methodologies. Link aggregation is the combining of two or more data channels into a single data channel. There are two methodologies that are supported by EMC NAS devices: IEEE 802.3ad Link Aggregation Control Protocol and Cisco FastEtherChannel using Port Aggregation Protocol (PAgP). The purpose of combining data channels in the EMC implementation is to achieve redundancy and fault tolerance of network connectivity. It is commonly assumed that link aggregation will provide a single client with a data channel bandwidth equal to the sum of the bandwidths of the individual member channels. This is not, in fact, the case, due to the methodology of channel utilization, and it may only be achieved with very special considerations to the client environment. The overall channel bandwidth is increased, but the client will only receive, under normal working conditions, the bandwidth equal to one of the component channels. To implement link aggregation, the network switches must support the IEEE 802.3ad standard. It is a technique for combining several links together to enhance availability of network access, and applies to a single Data Mover, but not across Data Movers. The current implementation focuses on availability. Only full duplex operation is currently supported. Always check the NAS Interoperability Matrix for supported features at the following address: http://www.emc.com/horizontal/interoperability
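A hedged sketch of creating an LACP aggregate on a Data Mover; port names are hypothetical and option syntax varies by DART release:

    # Aggregate ports cge0 and cge1 into the virtual device trk0 using LACP
    server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 protocol=lacp"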


Link Aggregation - High Availability (continued)

- Cisco FastEtherChannel:
  - Port grouping for improved availability
  - Combines 2, 4, or 8 Ethernet ports into a single virtual device
  - Inter-operates with trunking-capable switches
  - High availability: if one port fails, other ports take over
  - Does not increase single client throughput

[Slide diagram: a Celerra channel connected to a Cisco switch]

Ethernet trunking (EtherChannel) increases availability. It provides statistical load sharing by connecting different clients to different ports. It does not increase single-client throughput. Different clients get allocated to different ports. With only one client, the client will access the Celerra via the same port for every access. This DART OS feature interoperates with FastEtherChannel-capable Cisco switches. FastEtherChannel is Cisco proprietary.

[Slide table: IEEE 802.3ad/FastEtherChannel comparison]


Network Redundancy - High Availability

- An example of FSN and port aggregation co-operation

[Slide diagram: an FSN device combining a four-port FastEtherChannel with a Gigabit Ethernet port]

This example shows a fail-safe network (FSN) device that consists of a FastEtherChannel comprising the four ports of an Ethernet NIC and one Gigabit Ethernet port. The FastEtherChannel could be the primary device but, per recommended practices, the ports of the FSN would not be marked primary or secondary. FSN provides the ability to configure a standby network port for a primary port, and two or more ports can be connected to different switches. The secondary port remains passive until the primary port link status is broken; then the secondary port takes over operation. An FSN device is a virtual device that combines two virtual ports. A virtual port can consist of a single physical link or an aggregation of links (EtherChannel, LACP). The port types, or number, need not be the same when creating a failsafe device group. For example, a quad Ethernet card can be first trunked and then coupled with a single Gigabit Ethernet port. In this case, all four ports in the trunk would need to fail before FSN would implement failover to the Gigabit port. Thus, the Celerra could tolerate four network failures before losing the connection. Note: an active primary port/active standby port configuration on the Data Mover is not recommended practice.


Celerra Family Environment Management Integration

VIRTUAL LOCAL AREA NETWORKS


Environmental management tools used in the NAS space include Virtual Local Area Networks, or VLANs. We will now discuss how EMC NAS integrates into this strategy.


VLAN Support

- Create logical LAN segments:
  - Divide a single LAN into logical segments
  - Join multiple separate segments into one logical LAN
- VLAN Tagging: 802.1q
- Simplified management: no network reconfiguration required for member relocation

[Slide diagram: hubs on collision-domain LAN segments connect through bridges or switches into broadcast-domain LANs, with workstations grouped into VLAN A and VLAN B and a router joining the VLANs]

Network domains are categorized into Collision, a LAN segment within which data collisions are contained, or Broadcast, the portions of the network through which broadcast and multicast traffic is propagated. Collision domains are determined by hardware components and how they are connected together. The components are usually client computers, hubs, and repeaters. Separation of a Collision domain from a Broadcast domain is accomplished by a network switch, or a router, that generally does not forward broadcast traffic. VLANs allow multiple, distinct, possibly geographically separate network segments to be connected into one logical segment. This can be done either by subnetting or by using VLAN tags (802.1q), which are addresses added to network packets to identify the VLAN to which each packet belongs. This could allow servers that were connected to physically separate networks to communicate more efficiently, and it could prevent servers that were attached to the same physical network from impeding one another. By using VLANs to logically segment the Broadcast domains, the equipment contained within this logical environment need not be physically located together. This now means that if a mobile client moves location, an administrator need not do any physical network or software configuration for the relocation, as bridging technology would now be used, and a router would only be needed to communicate between VLANs. There are two commonly practiced ways of implementing this technology:
- IP address subnetting
- VLAN Ethernet packet tagging


VLAN Implementation Methodologies

- There are two primary methodologies of implementing VLANs:
  - By IP address subnetting: using this methodology, an administrator will configure the broadcast domains to encompass the whole network area for specific groups of computers, using bridge/router technology
  - By VLAN tagging: using this methodology, an administrator will configure groups of user computers to embed an identification tag into all of their Ethernet packet traffic

When using the IP address subnetting methodology, the administrator will configure the broadcast domains to encompass the whole network area for specific groups of computers, using bridge/router technology. When using the VLAN tagging methodology, the members of a specific group will have an identification tag embedded into all of their Ethernet packet traffic. VLAN tagging allows a single Gigabit Data Mover port to service multiple logical LANs (Virtual LANs). This allows data network nodes to be configured (added and moved, as well as other changes) quickly and conveniently from the management console, rather than in the wiring closet. VLANs also allow a customer to limit traffic to specific elements of a corporate network, and protect against broadcasts (such as denial of service) affecting whole networks. Standard router based security mechanisms can be used with VLANs to restrict access and improve security.
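A hedged sketch of configuring a tagged interface on a Data Mover's Gigabit port; the port name, addresses, and VLAN ID are illustrative, and the vlan= option syntax varies by DART release:

    # Create an interface on port cge0 that tags its traffic for VLAN 100
    server_ifconfig server_2 -create -Device cge0 -name cge0_v100 -protocol IP 155.10.20.11 255.255.255.0 155.10.20.255 vlan=100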


VLAN - Benefits

- Performance: this is client related, as packets not destined for a machine in a particular VLAN will not be processed by the client
- Reduced router overhead
- Reduced costs: expensive routers and billable traffic routing costs can be reduced
- Security: placing users into a tagged VLAN environment will prevent unauthorized access to network packets

[Slide diagram: separate VLAN segments labeled VLAN-A, VLAN S, and VLAN E]

The benefits of VLAN support include:
- Performance: in all networks there is a large amount of broadcast and multicast traffic, and VLANs can reduce the amount of traffic being processed by all clients.
- Virtual collaborative work divisions: by placing widely dispersed collaborative users into a VLAN, broadcast and multicast traffic between these users will be kept from affecting other network clients, reducing the amount of routing overhead placed on their traffic.
- Simplified administration: with the large amount of mobile computing today, physical user relocation generates a lot of administrative user reconfiguration (adding, moving, and changing). If the user has not changed company functionality, but has only relocated, VLANs can perpetuate undisrupted job functionality.
- Reduced cost: by using VLANs, expensive routers and billable traffic routing costs can be reduced.
- Security: by placing users into a tagged VLAN environment, external access to sensitive broadcast data traffic can be reduced.
VLAN support enables a single Data Mover with Gigabit Ethernet port(s) to be the standby for multiple primary Data Movers with Gigabit Ethernet port(s). Each primary Data Mover's Gigabit Ethernet port(s) can be connected to different switches. Each of these switches can be in a different subnet and different VLAN. The standby Data Mover's Gigabit Ethernet port is connected to a switch which is connected to all the other switches.


Celerra Family Software Management

USER INTERFACES


In this section, we will examine the different user interfaces. These interfaces include the Command line, Celerra Manager, and EMC ControlCenter.


Celerra Management Command Line

- The command line can be accessed on the Control Station via:
  - An SSH interface tool, e.g. PuTTY
  - Telnet
- Its primary function is the scripting of common repetitive tasks that may run on a predetermined schedule to ease administrative burden
- It has approximately 60 UNIX-like commands:
  - nas_ commands are generally for the configuration and management of global resources
  - server_ commands are generally for the configuration and management of Data Mover specific resources

Telnet access is disabled by default on the Control Station, due to the possibility of unauthorized access if the Control Station is placed on a publicly accessible network. If that is the case, it is strongly recommended that this service remain disabled. The preferred mechanism for accessing the Control Station is the SSH (Secure Shell) daemon via an SSH client such as PuTTY.
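A few representative commands, as they might be run from an SSH session on the Control Station (output omitted; command availability varies by NAS code release):

    # Global resources: the nas_ family
    nas_fs -list                  # list configured file systems
    nas_server -list              # list Data Movers
    # Data Mover specific resources: the server_ family
    server_ifconfig server_2 -all # show network interfaces on server_2
    server_mount server_2         # show file systems mounted on server_2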


Celerra Manager GUI Management

[Slide screenshot: the Celerra Manager interface]

With the release of DART v5.2, the GUI management has become consolidated into one product with two options:
- Celerra Management Basic Edition
- Celerra Management Advanced Edition
The Basic Edition will be installed, along with the DART OS, and will provide a comprehensive set of common management functionality for a single Celerra at a time. The Advanced Edition will add multiple Celerra support, along with some advanced feature GUI management, and will be licensed separately from the DART code.


Celerra Manager GUI Wizards

[Slide screenshot: Celerra Manager configuration wizards]

Celerra Manager V5.2 and higher will offer a number of configuration wizards for various tasks, to assist new administrators with ease of implementation.


Celerra Manager GUI Tools

[Slide screenshot: Celerra Manager tools]

Celerra Manager V5.2 will offer a set of tools to integrate Celerra monitoring functionality and launch Navisphere Manager. With the addition of the Navisphere Manager Launch capability, the SAN/NAS administrator will have a more consolidated management environment.


EMC ControlCenter V5.x.x NAS Support

- Discovery and monitoring:
  - Data Movers
  - Devices and volumes
  - Network adapters and IP interfaces
  - Mount points
  - Exports
  - Filesystems (including snapshots and checkpoints)

The EMC flagship management product, EMC ControlCenter, has the capability of an assisted discovery of both EMC NAS and third-party NAS products, namely NetApp filers. Currently, management of the EMC NAS family is deferred to the product-specific management software, due to the highly specialized nature of the NAS environment. Therefore, this product functionality (shown on this slide) is focused mainly on discovery, monitoring, and product management software launch capability. ControlCenter V5.x.x has enhanced device management support for the Celerra family. The ControlCenter Celerra Agent runs on Windows and has enhanced discovery and monitoring capabilities. You can now view properties information on Celerra Data Movers, devices, network adapters and interfaces, mount points, exports, filesystems (including snapshots and checkpoints), and volumes from the ControlCenter Console. You can also view alerting information for the Celerra family.


Celerra Family File System Management

AUTOMATIC VOLUME MANAGEMENT


For ease of use and implementation, the DART operating system utilizes an Automatic Volume Manager (AVM). This allows the NAS manager to quickly create, deploy, and manage NAS file systems with known, predictable performance and management parameters.


Celerra Automatic Volume Management - AVM


y Celerra uses AVM to create a more array-friendly methodology for laying out volumes and file systems
  Automates volume and file system creation and management
  Arranges volumes into storage pools dependent upon the array disk layout characteristics
  System defined
    Profiles are predefined rules that define how devices are aggregated and put into system-defined storage pools
  User defined
    These storage pools allow customers more flexibility to define their own volume characteristics to meet their specific needs
y Volume creation from the GUI management interface uses AVM by default, but AVM can also be invoked from the Command Line Interface (CLI)

The Automatic Volume Management (AVM) feature of the Celerra File Server automates volume creation and management. By using Celerra command options and interfaces that support AVM, you can create and expand file systems without manually creating and managing their underlying volumes.

A Storage Pool is a container for one or more member volumes. All storage pools have attributes, some of which are modifiable. There are two types of storage pools:
y System-defined - System-defined storage pools in NAS 5.3 are what were called system profiles in prior releases. AVM controls the allocation of storage to a file system when you create the file system by allocating space from a system-defined storage pool. The system-defined storage pools ship with the Celerra and are designed to optimize performance based on the hardware configuration.
y User-defined - User-defined storage pools allow for more flexibility, in that you choose what storage should be included in the pool. If the user defines the storage pool, the user must explicitly add and remove storage from the storage pool and define the attributes for the storage pool.

Profiles provide the rules that define how devices are aggregated and put into system-defined storage pools. Users cannot create, delete, or modify these profiles. There are two types of profiles:
y Volume - Volume profiles define how new disk volumes are added to a system-defined storage pool.
y Storage - Storage profiles define how the raw physical spindles are aggregated into Celerra disk volumes.

Note: Both volume profiles and storage profiles are associated with system-defined storage pools and are unique and predefined for each storage system. It is NOT recommended to mix the two volume management methodologies, AVM and manual, on a system, as mixing them may result in non-optimized disk utilization, leading to poor system utilization and performance.
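As a minimal sketch of AVM from the CLI, assuming a CLARiiON back end with the clar_r5_performance pool described on the next slide (pool, filesystem, and volume names are examples; option syntax can vary by release):

    # List the available storage pools and check the space in one of them
    nas_pool -list
    nas_pool -size clar_r5_performance

    # Let AVM carve a 100 GB filesystem out of a system-defined pool
    nas_fs -name fs01 -create size=100G pool=clar_r5_performance

    # A user-defined pool built from specific disk volumes (volume names hypothetical)
    nas_pool -create -name engineering_pool -volumes d7,d8,d9

Because AVM chooses and aggregates the underlying volumes, the administrator never has to build metavolumes by hand for the common cases.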


AVM System Defined Storage Pools


y symm_std - Highest performance, medium cost, using Symmetrix STD disk volumes
y symm_std_rdf_src - Highest performance, medium cost, using SRDF
y clar_r1 - High performance, low cost, using CLARiiON CLSTD disk volumes in RAID 1
y clar_r5_performance - Medium performance, low cost, using CLARiiON CLSTD disk volumes in 4+1 RAID 5
y clar_r5_economy - Medium performance, lowest cost, using CLARiiON CLSTD disk volumes in 8+1 RAID 5
y clarata_archive - Low performance, high capacity, using CLARiiON Serial ATA disk volumes in a 6+1 configuration

STD = Standard; CLSTD = CLARiiON Standard; clarata = CLARiiON ATA drives


Celerra Family Management Software

WINDOWS SPECIFIC INTEGRATION OPTIONS


EMC NAS frames have traditionally integrated into UNIX environments seamlessly, due to their roots in the NFS protocol environment. However, with the addition of support for the CIFS protocol used in Microsoft networking, very specific integration methodologies have been needed to ensure seamless integration into, and management of, this environment.


Windows Environment Integration


y To achieve a tightly integrated Windows Active Directory environment, EMC NAS uses several software features
  usermapper
    This feature helps the Celerra automatically assign UNIX user and group identifiers (UIDs and GIDs) to Windows users and groups. It assists Windows administrators with the integration of these specialized NAS frames, as there is minimal or no user environment modification
  UNIX User Management
    Active Directory migration tool
    MMC plug-in extension for Active Directory users and computers
    Celerra Management tool snap-in (MMC Console)

Celerra offers a number of Windows 2000 management tools with the Windows 2000 look and feel. For example, Celerra shares and quotas can be managed with the standard Microsoft Management Console (MMC). The tools include:
y The Active Directory (AD) Migration tool - Migrates the Windows/UNIX user and group mappings to Active Directory. The matching users/groups are displayed in a property page, with a separate sheet for users and groups. The administrator selects the users/groups that should be migrated, and de-selects those that should not be migrated or that should be removed from Active Directory.
y The Microsoft Management Console (MMC) Snap-in extension for AD users and computers - Adds a property page to the user's property sheet to specify UID (user ID)/GID (group ID)/Comment, and adds a property page to the group's property sheet to specify GID/Comment. You can only manage users and groups of the local tree.
y The Celerra Management Tool (MMC Console) - The Snap-in extension for DART UNIX User Management displays Windows users/groups which are mapped to UNIX attributes. It also displays all domains that are known to the local domain (local tree, trusted domains).
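For the usermapper feature specifically, here is a hedged sketch of how it might be checked and enabled from the CLI; the server_usermapper command exists in this release family, but treat the exact option forms as assumptions to verify against the man page:

    # Show the current Usermapper status on Data Mover server_2
    server_usermapper server_2

    # Enable the service so new Windows users/groups are automatically
    # assigned UNIX UIDs and GIDs as they connect
    server_usermapper server_2 -enable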


Windows Environment Integration (continued)


y Virus Checker Management
  Celerra Management tool MMC Console
y Home Directory snap-in
  Allows multiple points of entry to a single share
y Data Mover security snap-in
  Manage user rights and auditing


Further tools are:
y The Celerra Management Tool (MMC Console) Snap-in extension for DART Virus Checker Management, which manages parameters for the DART Virus Checker.
y The Home Directories capability, which allows a customer to set up multiple points of entry to a single Share/Export, so as to avoid sharing out many hundreds of points of entry to a filesystem, one for each individual user storing a Home Directory. The MMC Snap-in provides a simple and familiar management interface for Windows administrators for this capability.
y The Data Mover Security Settings Snap-in, which provides a standard Windows interface for managing user rights assignments, as well as the settings for which statistics Celerra should audit, based on the NT V4-style auditing policies.


Virtual Data Movers


y Another improvement to the Windows integration is the ability to create multiple virtual CIFS servers on each Data Mover
y This is achieved by creating Virtual Data Mover environments
  This is a huge benefit for consolidating the file-serving functionality of multiple servers onto single Data Movers, as each Virtual Data Mover can maintain isolated CIFS servers with their own root filesystem environment
  Whole Virtual Data Mover environments can be loaded, unloaded, or even replicated between physical Data Movers, for ease of Windows environment management


Prior to DART v5.2, a Data Mover supports one NFS server and multiple CIFS servers, where each server has the same view of all the resources. The CIFS servers are not logically isolated; although they are very useful in consolidating multiple servers into one Data Mover, they do not provide the isolation between servers needed in some environments, such as when data from disjoint departments is hosted on the same Data Mover.

In v5.2, VDMs support separate, isolated CIFS servers, allowing you to place one or multiple CIFS servers into a VDM, along with their file systems. The servers residing in a VDM store their dynamic configuration information (such as local groups, shares, security credentials, and audit logs) in a configuration file system. A VDM can then be loaded and unloaded, moved from Data Mover to Data Mover, or even replicated to a remote Data Mover as an autonomous unit. The servers, their file systems, and all of the configuration data that allows clients to access the file systems are available in one virtual container.

VDMs provide virtual partitioning of the physical resources, and independently contain all the information necessary to support the contained CIFS servers. Having the file systems and the configuration information contained in a VDM does the following:
y enables administrators to separate CIFS servers and give them access to specified shares;
y allows replication of the CIFS environment from primary to secondary without impacting server access;
y enables administrators to easily move CIFS servers from one physical Data Mover to another.

A VDM can contain one or more CIFS servers. The only requirement is that you have at least one interface available for each CIFS server you create. The CIFS servers in each VDM have access only to the file systems mounted to that VDM, and can therefore only create shares on file systems mounted to the VDM. This allows users to administratively partition or group their file systems and CIFS servers.
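A brief sketch of creating a VDM and an isolated CIFS server inside it; the VDM name, computer name, domain, and interface are example values, and option syntax can vary by release:

    # Create a VDM named vdm_hr on physical Data Mover server_2
    nas_server -name vdm_hr -type vdm -create server_2

    # Create a CIFS server that lives inside the VDM, isolated from
    # CIFS servers in other VDMs on the same physical Data Mover
    server_cifs vdm_hr -add compname=hr01,domain=corp.example.com,interface=cge0

Because the CIFS server, its file systems, and its configuration travel with the VDM, unloading the VDM on one Data Mover and loading it on another moves the whole environment as a unit.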


Celerra Family Business Continuity

DISK BASED REPLICATION AND RECOVERY SOLUTIONS


Now we can examine some of the replication and recovery solutions available in the Celerra family.


Disk-Based Replication and Recovery Solutions


[Figure: disk-based replication and recovery options arranged along a recovery-time spectrum. File restoration with Celerra SnapSure (Celerra/FC4700, Celerra NS600) offers recovery in hours; file-based replication with TimeFinder/FS, Celerra Replicator, and EMC OnCourse (Celerra/CLARiiON) offers recovery in minutes; synchronous disaster recovery with SRDF functionality (Celerra/Symmetrix) offers recovery in seconds.]

High-end environments require non-stop access to the information pool. From a practical perspective, not all data carries the same value. The following illustrates that EMC Celerra provides a range of disk-based replication tools for each recovery time requirement.

File restoration: This is the information archived to disk and typically saved to tape. Here we measure recovery in hours. Celerra SnapSure enables local point-in-time replication for file undeletes and backups.

File-based replication: This information is recoverable in time frames measured in minutes. Information is mirrored to disk by TimeFinder, and the copy is made accessible with TimeFinder/FS. Celerra Replicator creates replicas of production filesystems either locally or at a remote site; recovery time from the secondary site depends on the bandwidth of the IP connection between the two sites. EMC OnCourse provides secure, policy-based file transfers. The Replicator feature supports data recovery for both CIFS and NFS by allowing the secondary filesystem (SFS) to be manually switched to read/write mode after the Replicator session has been stopped, either manually or due to a destructive event. Note: there is no re-sync or failback capability.

Synchronous disaster recovery: This is the information requiring disaster recovery with no loss of transactions. This strategy allows customers to have data recovery in seconds. SRDF, in synchronous mode, facilitates real-time remote mirroring in campus environments (up to 60 km).

File restoration and file-based replication (Celerra Replicator, EMC OnCourse) are available with Celerra/CLARiiON. The entire suite of file restoration, file-based replication, and synchronous disaster recovery is available with Celerra/Symmetrix.


Disaster Recovery

CELERRA SYMMETRIX REMOTE DATA FACILITY


In this section, we will look at the Celerra disaster recovery solution.


Celerra SRDF Disaster Recovery


y Increases data availability by combining the high availability of the Celerra family with the Symmetrix Remote Data Facility
  Network campus (60 km) distance
  Uni- or bi-directional
y Celerra synchronous disaster recovery solution
  Allows an administrator to configure remote standby Data Movers, waiting to assume primary roles in the event of a disaster occurring at the primary data site
  SRDF allows an administrator to maintain a remote synchronous copy of production filesystems at a remote location
  Real-time, logically synchronized and consistent copies of selected volumes
  Uni-directional and bi-directional support
  Resilient against drive, link, and server failures
  No lost I/Os in the event of a disaster
  Independent of CPU, operating system, application, or database
  Simplifies disaster recovery switchover and switchback

In the NAS environment, data availability is one of the key aspects in determining an implementation. By combining the high availability of the Celerra family with the Symmetrix Remote Data Facility, data availability increases exponentially. The SRDF feature allows an administrator to maintain a remote synchronous copy of production filesystems at a remote location. However, as this entails the creation of Symmetrix-specific R1 and R2 data volumes, this functionality is currently restricted to Celerra/Symmetrix implementations only.

This feature allows an administrator to configure remote standby Data Movers waiting to assume primary roles in the event of a disaster occurring at the primary data site. Due to data latency issues, this solution is restricted to a campus distance of separation between the two data sites (60 network km). The SRDF solution for Celerra can leverage an existing SRDF transport infrastructure to support the full range of supported SAN (storage area network) and DAS (direct-attached storage) connected general purpose server platforms.

After establishing the connection and properly configuring the Celerra, users gain continued access to filesystems in the event that the local Celerra and/or the Symmetrix becomes unavailable. The Celerra systems communicate over the network to ensure the primary and secondary Data Movers are synchronized with respect to metadata, while the physical data is transported over the SRDF link.

To ensure an up-to-date and consistent copy of the filesystems on the remote Celerra, the synchronous mode of SRDF operation is currently the only supported SRDF operational mode, but both configurations of SRDF operation, active-passive and active-active, are supported. This means that active data can be configured to be on only one side of the SRDF link, or on both sides, depending on the customer's needs.
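Failover itself is driven from the secondary Control Station with the nas_rdf utility. The following is a sketch of the major steps, with prompts and preparatory work omitted; treat the exact invocations as assumptions and follow the Celerra SRDF documentation in practice:

    # One-time initialization of the SRDF relationship on the secondary Celerra
    /nas/sbin/nas_rdf -init

    # In a disaster, activate the secondary: standby Data Movers assume
    # the identities of the failed primary Data Movers
    /nas/sbin/nas_rdf -activate

    # Once the primary site is repaired, fail back to it
    /nas/sbin/nas_rdf -restore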


Data Replication

SNAPSURE, TIMEFINDER/FS & CELERRA REPLICATOR


Next, we will examine several of the Celerra data replication solutions.


Celerra SnapSure - Data Replication


y Enables speedy recovery
  Low-volume-activity, read-only applications
  Simple file undelete
  Incremental backup
y Logical point-in-time view (Checkpoint) of Celerra data
  Works for all Celerra implementations (CLARiiON or Symmetrix back end)
  Saves disk space
  Maintains pointers to track changes to the primary filesystem
  Not a mirror; creation of specialized volumes (R1/R2, BCVs) not required

Due to the business demands for high data availability and speedy recovery, many methodologies are utilized to meet this requirement. The first methodology discussed is the SnapSure feature of the Celerra family. This methodology uses a logical point-in-time view of a Production File System (PFS) to facilitate incremental backup views of the PFS, individual file recovery, and rollback of an entire filesystem to a previous point-in-time image. SnapSure maintains pointers to changes to the primary file system, and reads data from either the primary filesystem or a copy area. The copy area is defined as a metavolume (SavVol).

One of the obvious benefits of this solution is that it is storage-array agnostic, i.e., it works for all NAS DART implementations. This also means that no specialized volumes need to be configured for this feature to function. Some other replication methodologies, such as SRDF and TimeFinder/FS, are dependent on the creation of Symmetrix Remote Data Facility and Business Continuance Volumes in the Symmetrix. SnapSure does not require any specialized volume creation and will therefore work with any back-end storage array (CLARiiON or Symmetrix).

Multiple Checkpoints can be taken of the Production File System, facilitating the recovery of different point-in-time images of files or filesystems. Without using any other similar replication methodologies (e.g., Celerra Replicator), the currently supported maximum is 32 Checkpoints per filesystem.
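A short sketch of working with Checkpoints from the CLI; the filesystem and checkpoint names are examples, and exact options may vary by release:

    # Create a checkpoint of the production filesystem pfs01
    # (SnapSure allocates or reuses a SavVol for the copy area)
    fs_ckpt pfs01 -Create

    # List the checkpoints associated with pfs01
    fs_ckpt pfs01 -list

    # Re-point an existing checkpoint at the current state of pfs01
    fs_ckpt pfs01_ckpt1 -refresh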


Celerra SnapSure - Management


y Multiple Checkpoints for recovery of different point-in-time images
  GUI Checkpoint schedule manipulation
  Checkpoint out-of-order delete
  Automatic mounting upon creation


For ease of management, Checkpoints can be manipulated with the GUI management interfaces, which also provide the ability to schedule the frequency of Checkpoints. Most Checkpoint technology is chronologically linked; however, the DART 5.2 solution supports out-of-order deletion of Checkpoints while maintaining SnapSure integrity. SnapSure enhancements allow customers to delete any Checkpoint, instead of being constrained to deleting Checkpoints from the oldest onward to maintain integrity. A customer may also delete an individual scheduled Checkpoint instead of the entire schedule, and may refresh any Checkpoint instead of only the oldest.

Checkpoints created in DART v5.2 are automatically mounted upon creation, and a hidden checkpoint directory is maintained in every subdirectory. This hidden directory also allows changing the default name (yyy_dd_hh_mm_ss_GMT) into something more administratively friendly.


Celerra TimeFinder/FS - Data Replication


y Point-in-time copy of a file system
y Provides an independent mirror copy of Celerra data for out-of-band business processes and support functions
y Provides read and write functionality independent of the original
y Requires Symmetrix storage
y Celerra-controlled features
  Point-in-time copies
  Dynamic mirroring
  Multiple BCVs
  Spans volumes
  Entire filesystem
y Applications
  Backup and restore
  Data warehouses
  Live test data
  Batch jobs

[Figure: the Celerra directs the Symmetrix to create a point-in-time copy of the PFS as a PFS copy.] BCV = Business Continuance Volume


A second Celerra data replication method that provides high availability and rapid recovery is TimeFinder/FS. It uses a specially defined volume, called a Business Continuance Volume (BCV), to facilitate this functionality. As only the Symmetrix array is currently able to define a BCV, TimeFinder/FS on the Celerra family is currently restricted to implementations with Symmetrix only.

The TimeFinder/FS implementation is different from a standard TimeFinder implementation in that it is file system based, implemented on top of a volume-based feature. A BCV, which attaches to a standard volume on which a file system resides, provides the foundation for the file system copy. File systems can share BCVs, although the BCV remains dedicated to a volume. This means that if multiple file systems share a single BCV, then when one of the file systems is saved as a point-in-time copy, all of the other file systems are in an unknown state. This precludes recovery from that copy, as the unknown-state file systems would also be recovered, the underlying technology being volume based.

TimeFinder/FS creates a point-in-time copy, or a dynamic mirror, of a filesystem, and is integrated into the Celerra Control Station. The TimeFinder/FS option allows users to create filesystem copies (with only a brief suspension of access to the original file system) for independent read/write copies of data, useful for non-disruptive file backups, live-copy test beds for new applications, and mirror copies of files for redundancy and business continuity. It facilitates backup and restore of older versions of a specific file or directory (by mounting the snapshot filesystem and manually recovering the file or directory), or of a complete file system. It can also function in mirroring and continuous-update mode for an active file system.
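A hedged sketch of driving TimeFinder/FS from the Control Station with the fs_timefinder command; the exact option spellings below are assumptions, so verify them against the fs_timefinder man page before use:

    # Create a point-in-time copy of pfs01 on its BCV
    # (copy name is a hypothetical example)
    fs_timefinder pfs01 -Snapshot

    # Put the copy into continuous-update (dynamic mirror) mode...
    fs_timefinder pfs01_snap1 -Mirror on

    # ...and later split it off again as an independent point-in-time copy
    fs_timefinder pfs01_snap1 -Mirror off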


TimeFinder/FS Near Copy


y Synchronous disk-based disaster recovery and data replication solution
  Requires Symmetrix storage
  R1/R2 data is synchronized for disaster recovery
  Read-only R2 data accessible via BCV for backup and testing purposes
  Synchronous SRDF as base technology (ESCON/Fibre Channel, 60 km distance)

[Figure: Windows and UNIX clients on a data network; a Celerra/Symmetrix R1 site connected over SRDF to a remote Symmetrix/Celerra R2 site, where a BCV is taken from the R2.]

Combining the TimeFinder/FS product with another mature Symmetrix product, Symmetrix Remote Data Facility (SRDF), has enabled the TimeFinder/FS concept to be utilized for disaster recovery, as the standalone copy of the data can now be synchronously updated at a remote site. This allows the solution to be utilized for both data replication and disaster recovery. It is known as TimeFinder/FS Near Copy because the supported SRDF network distance between the two sites is 60 km (campus), due to the synchronous nature of the R2 volume updates.

The Remote TimeFinder/FS Near Copy solution applies to environments that have a requirement for real-time, synchronous, disk-based recovery. Synchronous SRDF is used to maintain the R1/R2 pair. TimeFinder BCVs can be generated from the R2 and made available (read-only) to independent Data Movers in the remote Celerra. The Celerra at the remote site can make the content available for secondary business processes, such as testing or backup.

This solution works for environments with SRDF in active-active mode, where R1s and R2s exist at both sites, as well as active-passive, where all the R1s are located at one site, with SRDF to a passive R2-only Symmetrix. Synchronous SRDF operates over ESCON/Fibre Channel and is limited to 60 km distances. The BCV at the R2 site is read-only, and restores must be done manually.


TimeFinder/FS with SRDF Far Copy


y Asynchronous data replication solution
  Replicated point-in-time copy of the primary-site filesystem
  Data replication time will not impact production file system performance
  Requires Symmetrix storage
  Uses SRDF Adaptive Copy as base technology
  Sites can be geographically distant

[Figure: Windows and UNIX clients at both sites, each site with its own network; at the R1 site, a STD volume and an R1/BCV on Celerra/Symmetrix; SRDF Adaptive Copy carries the data to the R2 site, where an R2/BCV and a STD volume reside on Symmetrix/Celerra.]

Not all remote copies of data are designated for disaster recovery; they may instead be used for data replication, such as web site data replication, inventory replication, an off-site backup facility, or employee directories. To facilitate these kinds of solutions, where the time taken to replicate the data must not impact the performance of the Production File System, TimeFinder/FS Far Copy can be utilized.

The Remote TimeFinder/FS Far Copy solution applies to environments that require remote point-in-time copies of the filesystems beyond the typical distances associated with synchronous SRDF, that is, greater than 60 km. Adaptive Copy SRDF is used to replicate the information over geographical distances. The read-only copy at the remote site can be made available for secondary business processes such as testing or backup.

Implementation of this solution allows data to be replicated asynchronously over a very wide area to where it is needed. It will not affect the Production File System (PFS), as the Celerra SRDF solution would if the distances were over 60 kilometers, because a BCV copy of the PFS is made first, and then the BCV is copied to the remote location, while the Production File System continues serving data to the clients without interruption. As this solution is dependent on TimeFinder/FS, it is only supported with the Celerra/Symmetrix configuration.

The process for performing this action is:
1. Create an R1/BCV of the STD volume
2. Synchronize the R1/BCV with the R2/BCV over the SRDF link
3. Restore the R2/BCV to the local STD (read-only if the relationship with the R2/BCV needs to be maintained)
4. Import the file system on the R2 Celerra


Celerra Replicator - Data Replication


y Point-in-time, read-only file system copy over IP
y Production filesystem available during replication
y Only sends changed data over the wire (after initial synchronization)
y One-way link for single-target content distribution
y Asynchronous
y Data recovery for CIFS data

[Figure: at the R1 site, the production filesystem and log on a Celerra/Symmetrix feed a primary SavVol; delta sets travel across the IP network to a secondary SavVol, which is played back into the secondary filesystem on a Celerra NSX00 at the R2 site.]

Celerra Replicator is an IP-based replication solution. Replication between a primary and a secondary filesystem can be on the same Celerra system or on a remote one. Celerra Replicator events:
y Manually synchronize the production file system and the secondary file system.
y Any changes to the production filesystems are recorded in the log (log changes).
y Transfer is triggered by special events. The trigger can be controlled by a user-defined policy or via an explicit request on the Control Station.
y Remote replication copies log changes to the primary SavVol and begins movement to the secondary SavVol. An IP protocol transfer is set up with the remote replica for the newly updated set of SavVol blocks. While the transfer is in process, read/write activity on the PFS is not halted, and a new log area is set up to track subsequent changes. Concurrently with the copy process, the newly created delta set is transferred to the secondary over IP. In local replication (when the secondary Data Mover is in the same cabinet as the primary), no transfer is required; the delta set is accessible as a shared volume.
y Playback on secondary: When the delta set arrives at the SavVol of the secondary and has been flagged as valid, the secondary starts to replay the blocks from its local SavVol and applies the delta set to the SFS. This operation occurs transparently and with almost no interruption to SFS access.

The Replicator feature is able to support data recovery for both CIFS and NFS.
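A sketch of starting and monitoring a replication session with the fs_replicate command; the option forms and the destination notation here are assumptions based on the command's general shape, so consult the fs_replicate man page for the exact syntax:

    # Start replication from production filesystem pfs01 to secondary
    # filesystem sfs01 on the remote Celerra named cel_remote
    fs_replicate -start pfs01 sfs01:cel=cel_remote

    # Check the state of the session and delta-set transfer progress
    fs_replicate -info pfs01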


Celerra Family

TAPE-BASED BACKUP AND RESTORE OPTIONS


Let's take a look at the backup and restore options available with the Celerra Family.


Tape-Based Backup and Restore Options


y Network Backup
  Most backup utilities
  NFS/CIFS mounts over the client network or a separate sub-network
  [Figure: Celerra to network to data server to tape]

y NDMP Backup (LAN-less local backup)
  VERITAS NetBackup
  Legato NetWorker
  CommVault Galaxy
  HP OmniBack
  Atempo Time Navigator
  [Figure: Celerra with Symmetrix or CLARiiON storage writes directly to tape; only control traffic crosses the data network to the server]


NAS is a fast-growing market. Many NAS implementations have mission-critical data availability requirements, and this is what Celerra does best. Fast and efficient backup and restore is an absolute requirement, and there are a number of options available.

Network backups entail simply mounting the filesystems across the network and backing them up to the backup server. NDMP backups use the LAN only for control information (LAN-less); the data is transferred to the local backup device. VERITAS NetBackup, Legato NetWorker, CommVault Galaxy, HP OmniBack, and Atempo Time Navigator support Celerra NDMP backups. NDMP backups preserve the bilingual file information.


Celerra Backup - NDMP Backup


y Third-party NDMP
  VERITAS NetBackup
  Legato NetWorker
  CommVault Galaxy
  HP OmniBack
  Atempo Time Navigator
y Celerra backs up data to a directly attached tape library unit (TLU)
y Backup is performed by a client running NDMP-compliant ISV software
y No LAN performance impact: only control data goes via the LAN
y Multi-protocol support: both CIFS and NFS filesystem attributes

[Figure: a client with NDMP backup software on the production network sends control information to the NDMP agent on the Data Mover; the data flows directly from the Celerra (CLARiiON or Symmetrix storage) to the tape library unit.]


VERITAS NetBackup, Legato NetWorker, CommVault Galaxy, HP OmniBack, and Atempo Time Navigator also support Celerra NDMP backups. Backup activity can be localized to a single backup Data Mover, thus requiring that only one Data Mover be physically attached to the TLU (tape library unit). This option is implemented through TimeFinder/FS: filesystems are split off, mounted to the backup Data Mover, and backed up with no impact to the primary filesystem. Tape library units are connected to a Data Mover via a SCSI interface. Backup traffic is offloaded from the network, and dual-accessed filesystems can be backed up while preserving both permission structures on the filesystem.
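A brief sketch of preparing a Data Mover for NDMP backups; device and account handling follow the documented pattern, but treat the exact options as assumptions to verify in the NDMP configuration guide:

    # Discover the SCSI-attached tape devices (non-disk) on the backup Data Mover
    server_devconfig server_2 -create -scsi -nondisks

    # Create the NDMP user account that the backup software authenticates with
    /nas/sbin/server_user server_2 -add -md5 -passwd ndmp

The NDMP-compliant backup application is then pointed at the Data Mover's IP address with that account, and the data path runs straight from the Data Mover to the TLU.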


Course Summary
Key points covered in this course are:
y Identify the concepts and value of Network Attached Storage
y List Environmental Aspects of NAS
y Identify the EMC NAS Platforms and their differences
y Identify and describe key Celerra Software Features
y Identify and describe the Celerra Management Software offerings
y Identify and describe key Windows Specific Options with respect to EMC NAS environments
y Identify and describe NAS Business Continuity and Replication Options with respect to the various EMC NAS platforms
y Identify and describe key NAS Backup and Recovery options


These are the key points covered in this training. Please take a moment to review them. This concludes the training. In order to receive credit for this course, please proceed to the Course Completion slide to update your transcript and access the Assessment.
